 # Calculating parity bytes

The C2/C1 CIRC encoder stages calculate 4 parity bytes for 24/28 input bytes. What are those calculations?

The definition of the 8 parity symbols is such that Hp.Vp=0 and Hq.Vq=0, where Vp and Vq are column vectors of 32 and 28 symbols, respectively, arranged from the C2 and C1 outputs.

Hp and Hq are defined over GF(2^8) (Galois Field) generated by the polynomial:

P(x) = x^8 + x^4 + x^3 + x^2 + 1

and a primitive element (alpha) of GF (2^8) is defined as follows:

alpha = [0 0 0 0 0 0 1 0]
(LSB is right end)

C1 and C2 are (32,28) and (28,24) Reed-Solomon codes over GF(2^8), respectively. The corresponding Hp and Hq parity-check matrices are shown in the attachment.

How does one finally get the 4 parity bytes from the initial 28 and 24? Professor Kuhn says the whole field is quite large and illustrates the principle with a smaller one:

The Galois field GF(2^4) of 2^4 elements, formed as the field of polynomials over GF(2) modulo

P(x) = x^4 + x + 1

is given on attachment.

This must be your lucky day! You finally got me to do some work on my Proof-Of-Concept implementation of my software decoder, in order to get it into a state where I can publish it on the web without being too embarrassed:

http://ch.tudelft.nl/~sidney/projects/laser2wav/laser2wav-0.0.2.tar.gz

This code does a number of interesting things, e.g. subchannel decoding and CRC checking, audio data de-interleaving, but also (drum roll…) what you’re asking for: C1 and C2 syndrome calculation. Currently, it only does C1/C2 error detection (not correction) but that’s really a minor thing to add.
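To give a feel for what C1/C2 syndrome calculation involves, here is a minimal toy sketch of my own (not the laser2wav source). It assumes the textbook Vandermonde form H[j][i] = alpha^(j*i) for the parity-check rows; the exact index conventions of the real Hp/Hq are those shown in the attachment.

```python
# Toy syndrome check for a shortened (n, n-4) Reed-Solomon code over
# GF(2^8) with the CD polynomial x^8+x^4+x^3+x^2+1 (bit pattern 0x11D).
# ASSUMPTION: parity-check rows H[j][i] = alpha^(j*i); the standard's
# exact Hp/Hq indexing (see attachment) may differ in detail.
POLY, L = 0x11D, 255

exp = [0] * L          # exp[i] = alpha^i as a bit pattern
v = 1
for i in range(L):
    exp[i] = v
    v <<= 1            # multiply by alpha
    if v & 0x100:      # reduce when an x^8 term appears
        v ^= POLY
log = {exp[i]: i for i in range(L)}

def syndromes(word):
    """S_j = XOR-sum over i of word[i] * alpha^(j*i), for j = 0..3.
    All four syndromes are zero exactly when 'word' is a codeword."""
    out = []
    for j in range(4):
        s = 0
        for i, c in enumerate(word):
            if c:
                s ^= exp[(log[c] + j * i) % L]
        out.append(s)
    return out

print(syndromes([0] * 32))  # [0, 0, 0, 0] - the all-zero word is a codeword
```

A word with a single corrupted symbol gives four nonzero syndromes, which is exactly what the decoder uses for error detection.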

I wrote this in Python since it is very easy to use and read (even for people who have never seen it - it basically looks like a well-balanced blend of pseudocode and C).

Note that my code shows decoding, and you’re asking for encoding. The two are very very similar - if you understand the decoding process, it’s trivially easy to derive the encoding (just a small bit of work).

Now I don’t know if you’re interested in the mathematics underlying this or just to see how it’s done - but of course the code only shows the latter. If you’re interested in the math, I could explain but the amount of background you need is something like a one-semester course of university-level algebra. Algebra is a truly fascinating and beautiful topic and I think I could explain what is going on in the CD decoder, but before I would try to explain (probably by writing it down) I’d first need to establish that people are actually interested. It’s really a lot of work to explain it in easy terms, starting from e.g. high-school level math.

Regards, Sidney

Relevant from http://mathforum.org/library/drmath/view/51452.html:

Consider the equation x^3 - 2 = 0.

The polynomial on the left-hand side of the equation is irreducible as a polynomial over the rational numbers Q. Let a be the cube root of 2, one of the roots of this cubic equation. Then if we look at the set of all rational functions in indeterminate y over Q, that is, quotients of polynomials with coefficients in Q, and we substitute for y the root a, we will get a large set of real numbers. Of course we have to throw out any which have denominator zero after the substitution, but those are just those such that y^3 - 2 divides the denominator. Some of the numbers have more than one representation in this form, so we pick just one special one.

It turns out that every number like this can be represented uniquely in the form c(2)*a^2 + c(1)*a + c(0), where the c(i)'s are rational numbers. This set forms what is called a field: it has addition, subtraction, zero, multiplication, division (except by zero), and one, which satisfy all the usual laws that rational numbers (or real numbers, or complex numbers) do: Commutative, Associative, Distributive, etc. This field is denoted Q(a), and is called an algebraic number field.

Got the code, but still lost on the fields of Mr. Galois… I think I'll find my way out and finally get the full picture if I find more helpful posts like the above.

I now understand that Kuhn's example actually shows how compositions like c(2)*a^2 + c(1)*a + c(0) generate our field elements.

After two headaches I thought I should dare to ask you how to arrange those polynomials for a given modulo (can we use the same example?). I can see we'll end up with a gracious ordering of all but the zero symbol; I just don't know how to produce the pattern. Am I still far off?

The Python introduction finally made me stick to a language to begin with (I said hello, world!). Your decoder has been the nicest example (a real application) for me to learn how a language works (and CD players too).

Enjoyed playing with the player.

Thank you. Thanks anyway… I know you'd try to answer ASAP.

Maybe CDs would have implemented the ultimate error control if Monsieur Galois hadn't died at the early age of 20. Now trying to evaluate error control in terms of error probability.

Well, you're a quick learner… This is not easy stuff. Many people would just give up when they see that "1+1==0" is generally true in GF(2^n). The key to understanding fields in general (and finite fields, i.e. Galois fields, in particular) is that they are really quite simple mathematical constructs: you have a set of elements and well-defined "plus", "minus", "multiplication", and "division" (except by the zero-element) operations. This means you can basically calculate in them almost the same as you would with 'normal' numbers.

An important factoid about Galois Fields of a certain order is that they are all basically the same (up to re-labeling of the elements), independent of the choice of the polynomial used in the construction. It is also an interesting exercise to see why reducible polynomials do not work - in order to construct the field elements, you must use an irreducible polynomial.

After two headaches I thought I should dare to ask you how to arrange those polynomials for a given modulo (can we use the same example?). I can see we'll end up with a gracious ordering of all but the zero symbol; I just don't know how to produce the pattern. Am I still far off?

I am not quite sure what you’re asking - perhaps you need to know how the randomly-looking power/log tables for the GF were obtained?

The Python introduction finally made me stick to a language to begin with (I said hello, world!). Your decoder has been the nicest example (a real application) for me to learn how a language works (and CD players too).

Yes - Python is a nice little language for this kind of thing, albeit terribly slow… You wouldn't want to crunch an entire CD worth of zeroes and ones through the program - well, I would actually, which is why I am also making a C implementation.

Regards, Sidney

I am not quite sure what you’re asking - perhaps you need to know how the randomly-looking power/log tables for the GF were obtained?
II.3.2 - GF(8)

Since 8 = 2^3, the prime field is GF(2) and we need to find a monic irreducible cubic polynomial over that field. Since the coefficients can only be 0 and 1, the list of irreducible candidates is easily obtained.

x^3 + 1
x^3 + x + 1
x^3 + x^2 + 1
x^3 + x^2 + x + 1

Now substituting 0 gives 1 in all cases, and substituting 1 will give 0 only if there is an odd number of x terms, so the irreducible cubics are just x^3 + x + 1 and x^3 + x^2 + 1. Now the multiplicative group of this field is a cyclic group of order 7, and so every nonidentity element is a generator. Letting µ be a root of the first polynomial, we have µ^3 + µ + 1 = 0, or µ^3 = µ + 1, so the powers of µ are:

µ^1 = µ
µ^2 = µ^2
µ^3 = µ + 1
µ^4 = µ^2 + µ
µ^5 = µ^2 + µ + 1
µ^6 = µ^2 + 1
µ^7 = 1
Finding the irreducible polynomials is understood; I just don't understand anything from "Now the multiplicative…" onward.

Once the irreducible polynomials are determined and one of them chosen, how do we get the power polynomials that calculate our field elements?

Thanks beforehand.

Once you have picked an irreducible polynomial p(x), it has (by definition) no roots (compare, for real numbers: x^2==-1). The trick here is that you simply postulate a solution (µ) and see where that takes you - and it takes you to a Galois Field.

This is basically the same trick that you do to get from real numbers to complex numbers: x^2 == -1 cannot be solved in reals, so you just postulate a solution (called "i", the complex unity) and see where it takes you. Just like the introduction of complex numbers opens up a very very useful world of mathematics, so does the postulation of µ.

The "multiplicative group" remark takes some background to understand. A "group" is a set with one binary operation and a bunch of properties (existence of a unity element, existence of an inverse, and so on). Let q be an element in a group, and let '@' be the operation. Then we can write:

q^0 == e (e is the "identity")
q^1 == q
q^2 == q@q
q^3 == q@q@q
q^4 == q@q@q@q
… etcetera…

As it happens, the nonzero elements in any field, and their multiplication operation, form a group (don't get confused by the terminology - there are a number of algebraic structures with names like 'monoid', 'group', 'ring', 'field', which are all basically a combination of sets and one or more operations and a bunch of properties that hold. That's math for you).

A very nice theorem states that the multiplicative group of a finite field is cyclic: there are elements whose powers q^i (for integer i) enumerate all elements of the group (including the identity e); such an element is called a "generator". (When the group order is prime - as with the 7 nonzero elements of GF(8) - every non-identity element is a generator.) A generator can be used to define multiplication within the GF as well; zero is excluded because it isn't part of the multiplicative group, and the trick doesn't work for the multiplicative identity either. The sane choice is to just use the enigmatic µ element for this.

Now you can define µ^0==1, µ^1==µ, µ^2, µ^3, and so on, and by reducing modulo p(x) you get new elements at each step; at a certain point you end up with the element '1'.
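As a sketch (my toy code, not laser2wav's), that step-by-step generation looks like this in Python for the GF(8) example, storing a polynomial c2*µ^2 + c1*µ + c0 as the 3-bit integer (c2 c1 c0):

```python
# Generate the successive powers of u in GF(8) with p(x) = x^3 + x + 1.
# A polynomial c2*u^2 + c1*u + c0 is stored as the 3-bit integer (c2 c1 c0).
P = 0b1011   # bit pattern of x^3 + x + 1
powers = []
v = 1        # u^0
for i in range(7):
    v <<= 1              # multiply by u (shift the coefficients up)
    if v & 0b1000:       # a u^3 term appeared ...
        v ^= P           # ... replace it using u^3 = u + 1
    powers.append(v)     # powers[i] holds u^(i+1)

print(powers)  # [2, 4, 3, 6, 7, 5, 1]
```

The printed list is exactly the table from the quote: µ, µ^2, µ+1, µ^2+µ, µ^2+µ+1, µ^2+1, 1.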

Now if you need to multiply 2 elements (say, (µ^2+µ+1) and (µ^2+µ)), you start by looking up to which power of µ they correspond (in this case, µ^5 and µ^4), and then you note that µ^5*µ^4 == µ^9; now since µ^7 == 1, we have µ^9 = µ^2 * µ^7 = µ^2 * 1 = µ^2.

Basically, you're taking the "logarithm" of both factors and then doing µ^(sum of the logarithms).
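For the GF(8) example, the whole log/antilog trick fits in a few lines of Python (a sketch with names of my own choosing, not laser2wav's):

```python
# Log/antilog multiplication in GF(8), p(x) = x^3 + x + 1, generator u.
P, L = 0b1011, 7                      # modulus polynomial, cycle length
exp = [0] * L                         # exp[i] = u^i as a bit pattern
v = 1
for i in range(L):
    exp[i] = v
    v <<= 1
    if v & 0b1000:
        v ^= P
log = {exp[i]: i for i in range(L)}   # inverse table: element -> exponent

def gf8_mul(a, b):
    """Multiply two GF(8) elements via 'logarithms'."""
    if a == 0 or b == 0:
        return 0
    return exp[(log[a] + log[b]) % L]

# (u^2+u+1) * (u^2+u)  ->  u^5 * u^4 = u^9 = u^2:
print(gf8_mul(0b111, 0b110))  # 4, i.e. u^2
```

This reproduces the worked example above: looking up the two "logarithms" (5 and 4), adding them modulo 7, and reading the result back out of the power table.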

This is similar to what you can do in normal reals - consider log/exp (base e=2.71828 taking the place of µ):

123 * 456

log(123) = 4.81218435537241749526
log(456) = 6.12249280951438607965

4.81218435537241749526 + 6.12249280951438607965 == 10.93467716488680357491

e^10.93467716488680357491 = 56088 (bingo!)

So this is what the GF(256) functions in laser2wav do - by using “logarithms” you can calculate products (and also integer powers) very very quickly (O(1)) within a GF.
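A hedged sketch of how such GF(256) tables can be built, using the CD polynomial x^8 + x^4 + x^3 + x^2 + 1 (bit pattern 0x11D); the table and function names here are mine, not necessarily laser2wav's:

```python
# Build exp/log tables for GF(256) with p(x) = x^8+x^4+x^3+x^2+1 (0x11D).
POLY, L = 0x11D, 255

exp = [0] * L          # exp[i] = alpha^i
v = 1
for i in range(L):
    exp[i] = v
    v <<= 1            # multiply by alpha
    if v & 0x100:      # an x^8 term appeared: reduce modulo p(x)
        v ^= POLY
log = [0] * 256        # log[exp[i]] = i for every nonzero element
for i in range(L):
    log[exp[i]] = i

def gf256_mul(a, b):
    """O(1) multiplication via the log/exp tables."""
    if a == 0 or b == 0:
        return 0
    return exp[(log[a] + log[b]) % L]

print(hex(gf256_mul(0x80, 2)))  # 0x1d: x^7 * x = x^8 = x^4+x^3+x^2+1
```

Because the polynomial is primitive, the 255 powers of alpha hit every nonzero byte exactly once, which is what makes the log table well defined.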

Hope this helps, otherwise you're probably better off dropping by the library. I'm a bit rusty on algebra so I may be missing some things,

Regards, Sidney

A kind, friendly explanation! Thank You^infinite. Roger everything, and it feels damn good. Seems my big problem is the definition of the powers of µ when I exercise.

Now you can define µ^0==1, µ^1==µ, µ^2, µ^3, and so on, and by reducing modulo p(x) you get new elements at each step; at a certain point you end up with the element '1'.
Let's see this step:
Letting µ be a root of the first polynomial, we have µ^3 + µ + 1 = 0, or µ^3 = µ + 1, so the powers of µ are:

µ^1 = µ
µ^2 = µ^2
µ^3 = µ + 1
µ^4 = µ^2 + µ
µ^5 = µ^2 + µ + 1
µ^6 = µ^2 + 1
µ^7 = 1
Seems they made P(x) = 0 and µ + 1 is the residual when dividing P(x) by µ^3, am I right? What have they done to compute from µ^5 on? Isn't there a well-known procedure for this?

Thanks again.

It's the other way around: µ+1 is the residual when dividing µ^3 by P(x).

To get µ^5 is not difficult once you see the trick - since you have defined µ to be the root of x^3 + x + 1, i.e., the solution to x^3 + x + 1 == 0, you know that µ^3 + µ + 1 = 0, and you need to use that information:

µ^5 = µ*µ^4 = µ(µ^2+µ) = µ^3 + µ^2.

Since we know that µ^3 + µ + 1 = 0, we can reduce this further:

µ^5 = µ^3 + µ^2 = µ^3 + µ^2 - 0 = µ^3 + µ^2 - (µ^3 + µ + 1) =

µ^3 + µ^2 - µ^3 - µ - 1 = µ^2 - µ - 1 = µ^2 + µ + 1

(Note, in the last step, you use (-1) == (+1), which is true modulo 2).

Note that while all this polynomial manipulation may look awkward, this can be done really fast on a binary computer - just take a "regular" number and associate a coefficient of µ^i with the i'th bit. The number 0 becomes the polynomial 0, the number 1 becomes the polynomial 1, the number 2 becomes the polynomial µ, the number 4 becomes the polynomial µ^2, the number 3 is the polynomial µ+1, etc.

Now adding two polynomials is just taking the bitwise XOR of the two numbers, and multiplying by µ is just a shift-left. From this, you can construct simple multiplication and division algorithms (although the latter are /slightly/ more involved).
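A sketch of those two primitives, plus the shift-and-XOR multiplication you can build from them (my toy code, for GF(256) with the CD polynomial; no lookup tables needed):

```python
def gf_add(a, b):
    """Addition of GF(2^n) elements: coefficient-wise mod 2, i.e. XOR."""
    return a ^ b

def gf256_mul(a, b, poly=0x11D):
    """Schoolbook shift-and-XOR multiplication in GF(256)."""
    result = 0
    while b:
        if b & 1:            # this bit of b contributes a shifted copy of a
            result ^= a
        a <<= 1              # multiply a by u ...
        if a & 0x100:        # ... and reduce when a u^8 term appears
            a ^= poly
        b >>= 1
    return result

# (u+1)^2 = u^2 + 1, i.e. 3*3 = 5 in the bit representation:
print(gf256_mul(3, 3))  # 5
```

This is slower than the log/exp-table method but needs no precomputation, which makes it handy for building the tables in the first place.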

If you get all this, you really should look into CRCs as well (as, e.g., used in the Q subchannel, but also by Ethernet, and by tools like ZIP). It is essentially an exercise in a ring that is defined by another generator polynomial, similar to what we're doing here. Understand two nasty algorithms for the price of one - it's a great deal.

Sidenote: CRCs are defined over a "ring", which is essentially a "field that lacks a proper division", i.e., you can do multiplication, addition, and subtraction, but not (for all choices of non-zero elements) division. Vice versa, a "field" is precisely a "ring" where each non-zero element has a multiplicative inverse (ForAll x: (x<>0) => Exists y: x*y==1). This is a direct consequence of the fact that a CRC polynomial need not be (and often isn't) irreducible. E.g., the CRC-16 and CRC-CCITT polynomials are divisible by (x+1) [so x==1 solves P(x)=0], but CRC-32 as used in Ethernet and ZIP is irreducible, so it yields (almost incidentally, because it is not very important) a field.
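That (x+1) divisibility claim is easy to check by machine: over GF(2), P(1) is just the parity of the number of terms, i.e. of the set bits of the generator. The polynomial bit patterns below are the commonly published ones:

```python
# Over GF(2), P is divisible by (x+1) exactly when P(1) = 0,
# i.e. when P has an even number of terms (set bits).
def divisible_by_x_plus_1(poly):
    return bin(poly).count("1") % 2 == 0

print(divisible_by_x_plus_1(0x18005))      # CRC-16:    x^16+x^15+x^2+1 -> True
print(divisible_by_x_plus_1(0x11021))      # CRC-CCITT: x^16+x^12+x^5+1 -> True
print(divisible_by_x_plus_1(0x104C11DB7))  # CRC-32 (Ethernet/ZIP)      -> False
```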

Hope this helps, (skip the CRC stuff at your convenience - it’s a pet subject for me),

Sidney

… To make it crystal clear:

With normal numbers, when you want to determine the “residual of R when dividing by P”, the calculation amounts to subtracting multiples of P from R, until you can no longer do so without making the number negative. The number left is your residual.

The same goes when R and P are polynomials, except your stopping criterion is different. Here, you want to minimize the exponent of the largest term of the polynomial.

For example, if your generator poly contains µ^5 as its highest polynomial term, it is always possible to obtain a polynomial that doesn't have any terms of µ^5 or higher by repeatedly subtracting multiples of P from R. Note that e.g. µ^200 * P(x) also counts as a "multiple of P(x)" here!
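In code, that remainder procedure on bit-pattern polynomials is a short loop (my sketch, not from laser2wav):

```python
def poly_mod(r, p):
    """Remainder of r modulo p, both GF(2) polynomials as bit patterns."""
    dp = p.bit_length() - 1          # degree of p
    while r.bit_length() - 1 >= dp:  # while deg(r) >= deg(p)
        shift = (r.bit_length() - 1) - dp
        r ^= p << shift              # subtract (= XOR) a shifted multiple of p
    return r

# u^3 mod (u^3 + u + 1) = u + 1:
print(poly_mod(0b1000, 0b1011))  # 3, i.e. u + 1
```

The shifted multiples of p are exactly the "µ^200 * P(x) also counts" remark above: each iteration knocks out the current highest term of r.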

Regards, Sidney

It's the other way around: µ+1 is the residual when dividing µ^3 by P(x).
True - I myself started calling P(x) the modulo when I saw Kuhn do the same thing. :o
With normal numbers, when you want to determine the “residual of R when dividing by P”, the calculation amounts to subtracting multiples of P from R, until you can no longer do so without making the number negative. The number left is your residual.

The same goes when R and P are polynomials, except your stopping criterion is different. Here, you want to minimize the exponent of the largest term of the polynomial.
Geez, the 2nd part is new for me (would you… ); seems I urgently need algebra lessons. This thread is turning into an unforgettable class.

Geez, the 2nd part is new for me (would you… ); seems I urgently need algebra lessons.

I'm not exactly sure what there is to explain - why a higher-order polynomial is considered "bigger" than a lower-order one? I only have a hand-waving explanation for that, I'm afraid. Would you be content with a "that's just how it is"?

This thread is turning into an unforgettable class.

Hehe. It's sad that only 1 student turns up for class. Seriously, my background is really software engineering, but I did some algebra back in college - we are rapidly approaching the point where I can no longer answer your questions. If you would like to understand this stuff well, and have it properly explained, I would suggest you pick up an introductory text on algebra.

Regards, Sidney

Answering a PM from IpseDixit. For the benefit of posterity (and boring people to hell), let me answer this in the public forum.

Do you mean the division, in a Galois Field, of the element (µ^3) by the element (µ^3+µ+1)? (Your use of the letter "x" here is a bit confusing - it suggests you may be talking about a polynomial in a general field.)

If yes, you should realize that every nonzero element in a field has a “multiplicative inverse”.

First, let’s define the “neutral element for multiplication”. In a field, it is the unique element for which the following holds:

x * n = n * x = x

(you can think of this element n simply as “1”). The “1” element is unique and always exists in a field.

Now in a field, for every element x except zero, there exists an element y such that:

x * y = y * x = n

The element y is called the “multiplicative inverse” or “reciprocal” of x.

Now, when you and I are asked to do a "division":

A = z/x

We are really asked to calculate A such that

x*A == z

Now let y (again) be the inverse of x; we can multiply both sides of the equation by y:

y*x*A == y*z

… and since y*x == n:

n*A == y*z

… and since n*A == A:

A == y*z

The bottom line is this: dividing by x is the same as multiplying with the multiplicative inverse of x.

We already know how to multiply; the question that remains is how to determine the multiplicative inverse of a field element such as (µ^2+µ+1). (Note that (µ^3+µ+1) itself is the zero element here - P(µ)=0 - and zero has no inverse.)

Remember that µ^i generates the entire set of nonzero field elements (µ being a generator)? And remember that both µ^0 and µ^(some number) are "1".

The "some number" is equal to the number of elements in the field, minus one (this follows from the fact that it generates all nonzero elements). Let's call this cycle length L.

Now imagine a "cycle" of all nonzero elements: 1 -> µ -> µ^2 -> µ^3 -> … -> 1

Each arrow in the cycle represents multiplication by µ.

A very central insight is this: we can UNDO the operation of "multiplication by µ" by making one step in the 'wrong' direction - but this is exactly equivalent to making (L-1) steps in the right direction, since we're on an L-step cycle!

Similarly, multiplication of a given element by µ^i is exactly making i steps on the cycle; UNDOing the multiplication by µ^i is equal to multiplying by µ^(L-i).

Now the crucial last step: asking to find the multiplicative inverse of µ^i in a GF is exactly equal to asking: which element will take me to µ^0 (==1!) upon multiplication? And when you look at it in this simple way, you will see it has to be µ^(L-i).

So, in essence, dividing in a GF is multiplying with an inverse, and finding the inverse is raising µ to a known integer exponent (L-i). So to summarize:

Let a and b be field elements.

To calculate a/b:

(1) make sure b is not zero (in case b == 0, the equation b*A == a we're trying to solve has either zero or many solutions - no good, most of the time!)

(2) Find the multiplicative inverse of b, as follows:

(2.1) Look up b as a power-of-µ in a lookup table; say, b = µ^i

(2.2) Calculate the multiplicative inverse of µ^i, which is easy: it is simply µ^(L-i), where L is the number of field elements minus one. Let's call this µ^j.

(3) Multiply a by µ^j. This is easily done if a = 0 (the answer is 0). Otherwise, use the logarithm table once again: a * µ^j = µ^(log(a)) * µ^j = µ^(log(a)+(L-log(b))), or:

a/b = µ^(L + log(a) - log(b)) (reducing the exponent mod L where needed)

(Print that in a big font and hang it on your wall!)

… So now you may see why we're keeping the "log" and "power" tables for doing GF(256) calculations in laser2wav: it is essential to hop between the "normal" polynomial (aµ^7 + bµ^6 + … + z) and power (µ^i) representations; they are EQUAL (in a mathematical sense), but the one is nice for adding and subtracting, and the other is nice for multiplying, dividing, and calculating powers.
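The wall formula, coded up for the small GF(8) example (my sketch; the table names are mine):

```python
# Division in GF(8) via a/b = u^(L + log(a) - log(b)), with L = 7.
P, L = 0b1011, 7
exp = []                              # exp[i] = u^i
v = 1
for _ in range(L):
    exp.append(v)
    v <<= 1
    if v & 0b1000:
        v ^= P
log = {exp[i]: i for i in range(L)}

def gf8_div(a, b):
    if b == 0:
        raise ZeroDivisionError("zero has no multiplicative inverse")
    if a == 0:
        return 0
    return exp[(L + log[a] - log[b]) % L]

# u^2 / (u^2+u) = u^(7+2-4) = u^5 = u^2+u+1:
print(gf8_div(0b100, 0b110))  # 7
```

As a sanity check, x/x comes out as 1 for every nonzero x, since the exponent (L + i - i) mod L is 0.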

Regards, Sidney

Answering a PM from IpseDixit. For the benefit of posterity (and boring people to hell), let me answer this in the public forum.
You know, you're right - let's discuss every detail in public so nothing has to be explained again (ppl. no pain, ppl. no gain). Seems the detail I haven't managed to ask properly from the beginning is this: once a rationally irreducible modulo polynomial is given or determined (like P(x)=x^3+x+1 for GF(8)), how do we find a very first primitive element (not µ^0=1, µ^1=µ or µ^("L")=1) as our first practical generator (the example did such mysterious step(s) to state µ^3=µ+1), to start computing the rest of the cyclic group - maybe using substitution and the properties you've been describing - so we finally get a handy representation (of the multiplicative group) for our "L" elements? I'd better not be missing a thing, and hopefully I'm being clear enough now.

I just can't see a clear way to apply 'your' properties when we only know P(x), µ as the "postulated root", and that µ^0=1, µ^1=µ and µ^("L")=1. Maybe there's just no well-known or established methodology and we have to use 'brute force' or trial and error according to the case (not every root need be primitive, and some polynomials work easier or better than others), fingers crossed not to need too many tries (I saw that for several examples). Maybe the so-called binary operation is what's behind the friendly computer method and I should be comfortable with that.

Do you mean the division, in a Galois Field, of the element (µ^3) by the element (µ^3+µ+1)? (Your use of the letter "x" here is a bit confusing - it suggests you may be talking about a polynomial in a general field.)

Thanks for correcting me.

Ok, well, any element in a finite field except the "0" and "1" elements is a generator for the multiplicative subgroup that consists of the nonzero elements of the field and its multiplicative operation - at least whenever that subgroup has prime order, as here (order 7 for GF(8)); in general the subgroup is still cyclic, but not every element need generate it.

Since we have defined µ to be a solution for P(x)==0, and a field element, and we know that µ<>0 and µ<>1 (in the first case, P(x) would be divisible by x; in the second case, P(x) would be divisible by (x-1), contradicting the irreducibility), we know µ must be a generator.

There is then nothing mysterious about stating µ^3=µ+1: µ is defined as a root of P(x), so we have

P(µ) == µ^3 + µ + 1 == 0

==> µ^3 = -µ - 1 = µ + 1 (remember -1 == +1 modulo 2)

I just can't see a clear way to apply 'your' properties when we only know P(x), µ as the "postulated root", and that µ^0=1, µ^1=µ and µ^("L")=1. Maybe there's just no well-known or established methodology and we have to use 'brute force' or trial and error according to the case (not every root need be primitive, and some polynomials work easier or better than others), fingers crossed not to need too many tries (I saw that for several examples).

No, just take µ - it's guaranteed to be a primitive root here. The same goes for (µ+1), for example, but that would only be a complication compared to just picking µ.

Maybe the so called binary operation is what’s behind friendly computer method and I should be comfortable with that.

There is a rather deep connection between base-n numbers as we know them and polynomials. In fact you can think of normal numbers such as "123" as a polynomial 1*x^2 + 2*x + 3, by substituting the base (e.g. 10) for x.

[[ Aside: This way of looking at number representations has yielded very deep insights, such as how to quickly multiply very large numbers using the fast Fourier transform. A very nice algorithm; unfortunately it has nothing to do with CD optics, so let's not go there ]]

Binary computers do everything in base-2, so there is a very natural way to express polynomials modulo 2 (such as used in GF(2^n)) as well. It’s really quite nice how it all fits together.

Regards, Sidney

~reddish
When you posted that code, it was my lucky day too. Thank you very much for writing this CD channel bit decoder. I must say I am extremely impressed with your knowledge of the encoding algorithms and math behind them.
How are you with DVDs? I may be part of a start-up that wants to write DVDs in a raster fashion (for speed reasons). I have been given the job of turning a DVD image file (main data) into a channel bit file (0010000100000100100001).
I figure I am not the first person to need this code for DVDs. Can you tell me if there is any place I can get the source code for a DVD encoder/decoder that does everything the hardware does?
So far, I have only looked at ECMA-267. I do not have Book A or anything else at this point. Can you recommend any books or links that may help me understand how to add extra data (like Frame ID, IED and CPR_MAI), scrambling (I don’t understand the Feedback Shift Register) and adding of the ECC bytes? I do not understand how to implement RS(208,192,17). Right now, I think I understand how to do the 8 to 16 bit modulation (with the 4 states to keep the DC component down), but something tells me, even that might be harder than I think.
For now, I do not have to worry about the Burst Cutting Area (the human visible writing and zebra codes near the middle).
If you or any one else has any idea where I could get this code, it would save me a boat load of work.

My knowledge of DVDs is limited. I have had a brief look at ECMA-267, and there are a lot of similarities between CD(ROM) and DVD of course - and some differences: the two most important ones being that everything is scaled up quite a bit for DVD (more of everything) and the abandonment of the “subchannel” nonsense.

So far, I have only looked at ECMA-267. I do not have Book A or anything else at this point. Can you recommend any books or links that may help me understand how to add extra data (like Frame ID, IED and CPR_MAI),

No, sorry - I don't know about these, and I don't know of books that cover this (I could point you to some algebra books for the RS and Feedback Shift Register stuff, though).

scrambling (I don’t understand the Feedback Shift Register) and adding of the ECC bytes? I do not understand how to implement RS(208,192,17). Right now, I think I understand how to do the 8 to 16 bit modulation (with the 4 states to keep the DC component down), but something tells me, even that might be harder than I think.

…But I do know about these. The Feedback Shift Register scrambler is really easy (although even there a bit of nice algebra is lurking). To understand that kind of stuff, and RS(208,192,17), I think you have no choice but to delve into the mathematics a bit, or to find someone who can. You might want to take this question to a local university; perhaps the math department can supply you with a student to do this as a project (look for an "algebraic coding theory" group or something similar). As for the 8-to-16 modulation, I'm sure you can figure that out; it's basically EFM and merge bits rolled into one.

One word of advice: it will be very very difficult to produce a working encoder in one go, you are bound to get some little things wrong - and you will have little recourse to find out what is wrong. If I were in your position, I’d make sure to get a low-level bitstream of a known DVD and make a decoder first. This is much simpler (since you will know the stream is good), and it will give you 90% of the code to make the encoder.

Really, the task you have been given seems too difficult to do properly in one go.

For example, to make a "wav2laser" from "laser2wav" would only require implementing the merge-bitpattern decision code, and some code shuffling - perhaps a day's worth of work. The fun thing is that, for encoding, you don't have to know how to actually correct errors (which is by far the most difficult thing about RS codes; I haven't tackled this for laser2wav as of yet).

For now, I do not have to worry about the Burst Cutting Area (the human visible writing and zebra codes near the middle).
If you or any one else has any idea where I could get this code, it would save me a boat load of work.

I should imagine so. I don’t know of any such code.

The RS-decoder in laser2wav can probably put you on the right track, but that’s about it. I’m sorry.

Regards, Sidney

~Sidney
Thank you very much for your reply. I was worried you were going to say “there’s no easy way”. I like the idea of asking a math student to help me. Also your idea of getting a real channel bit stream is a good one. The hardware guy is looking into capturing the channel bits from the head of a running DVD reader. So we will be able to check my code.
If this project gets funded, I will keep you informed of what I learn.
Thanks again,
Anthony

Let me get this straight: your company wants to build its own machines that will burn DVDs in a revolutionary way, and you don't have a single guy on your team experienced in the ODD field? I think you really have no idea how complex a burner is.

Yes, but from a laser2wav to an actual burner ?