C2 accuracy vs efficiency: are they exclusive?

The accuracy of the C2 information returned by the drive, used to detect read errors when ripping audio, is much debated in audio CD ripping forums.

Following Andre Wiethoff’s EAC and DAEquality programs, the beta Nero CDSpeed used by CDRinfo with ABEX test CDs for their latest tests confirms that most drives don’t always set the C2 flag when a byte is misread.

Some drives (1, 2) passed the test with 100% accuracy, but those tests never went over 3500 errors per second, and other tests show that the accuracy can vary a great deal at higher error rates.

On hydrogenaudio, I wondered whether the C2 inaccuracy comes from drives switching to more efficient, but less accurate, error correction algorithms when they encounter damaged CDs, much as Bobhere suggests happens at the C1 level in EAC, but here at the C2 level.

I have no documentation about this, and had rather assumed, based on the CDRinfo writing quality article, that the efficiency of the C2 error correction depends on knowing the positions of the errors, which are supplied by the previous error correction stage.

Can there be different C2 error correction mechanisms with different accuracy/efficiency trade-offs?

It’s a natural property of RS codes that the more bytes you try to correct, the more likely you are to get miscorrections, so the (x,4) strategies should get lower scores in C2 accuracy tests. However, these tests should also take into account bytes that are correct but are flagged as wrong, as there are several strategies there too, and of course the more bytes you flag as wrong, the easier it is to reach a high score.
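
For reference, the bound in question: CIRC’s C2 code is RS(28,24), so each codeword carries 4 parity bytes, and e errors plus f erasures are correctable only while 2e + f <= 4. A minimal Python sketch of that arithmetic (the names are mine):

```python
# Minimal sketch (not from the thread) of the standard Reed-Solomon bound
# applied to CIRC's C2 code, RS(28,24), which carries 4 parity bytes per
# codeword: e unknown errors plus f erasures (bytes whose positions C1
# already flagged) are correctable only while 2*e + f <= 4.

C2_PARITY = 4  # n - k for RS(28,24)

def c2_can_correct(errors: int, erasures: int, parity: int = C2_PARITY) -> bool:
    """True if a C2 codeword with this mix of errors/erasures is correctable."""
    return 2 * errors + erasures <= parity

# Errors-only decoding fixes at most 2 bytes per codeword; an erasure-based
# "(x,4)" strategy can fix up to 4, but only when all positions were flagged.
for e, f in [(2, 0), (3, 0), (1, 2), (0, 4)]:
    status = "correctable" if c2_can_correct(e, f) else "uncorrectable"
    print(f"{e} errors + {f} erasures: {status}")
```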

Thanks,
But wouldn’t the correct bytes flagged as wrong be interpolated? If the decoder thinks they are wrong, it will interpolate them, and the testing software will believe they really were wrong, since they have become different from the reference.
So we can’t tell real errors that were properly detected apart from miscorrections, unless we have test media with perfectly localized errors, and software that knows the exact positions of those errors on the media.

> But wouldn’t the correct bytes flagged as wrong be interpolated? If the decoder
> thinks they are wrong, it will interpolate them, and the testing software will
> believe they really were wrong, since they have become different from the reference.

Yes, this can happen, but these would be ‘misinterpolations’ that corrupt good bytes while the error pointers still match. Miscorrections, on the other hand, are not flagged, since the decoder thinks it has corrected the errors.

Wouldn’t the most secure way for a drive to handle this be to report any C2 error (i.e. E12+E22+E32+E42) with the C2 error pointers?
This way it wouldn’t matter which strategy the drive chooses, it should always get a 100% score.
(Besides, that’s at least my definition of a C2 error.)
Sure, it would report C2 errors even for sectors it corrected, but for critical applications such as EAC this wouldn’t matter that much. Even if the drive miscorrects, say, an E32 error, EAC would simply re-read, and it is unlikely the drive would read the exact same pattern and miscorrect again.
Of course that C2 data would not be used by the drive for interpolation; only sectors which the C2 decoder flags as uncorrectable would get interpolated.
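
To make the two reporting policies concrete, here is a small hypothetical sketch (names and structure are illustrative, not any real drive’s behaviour):

```python
# Hypothetical sketch of the two pointer-reporting policies discussed here.
# C2Result, pointers_strict and pointers_lenient are illustrative names,
# not any real drive firmware or command-set behaviour.

from dataclasses import dataclass

@dataclass
class C2Result:
    detected: int        # bytes found in error at the C2 stage (E12+E22+E32+E42)
    uncorrectable: bool  # True if the C2 decoder gave up on the codeword

def pointers_strict(result: C2Result) -> bool:
    """Proposed policy: raise C2 pointers whenever anything was detected
    at the C2 stage, even if it was corrected."""
    return result.detected > 0

def pointers_lenient(result: C2Result) -> bool:
    """Laxer policy: raise pointers only for codewords the C2 decoder
    could not correct (these are the ones that get interpolated)."""
    return result.uncorrectable
```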

More secure for what purpose? With the current measurement tools, a drive which reported all bytes as wrong, independently of the contents of the disc, would get a 100% accuracy score, and that’s just silly. And it certainly would be useless for any software post-processing, which (I remind you) is the final goal of this pointer reporting feature.

Of course a drive reporting everything as wrong would be silly. But a drive reporting any error detected at the C2 stage (correctable or not) should be the most secure way for post-processing by tools such as EAC, shouldn’t it?
I mean, when EAC is not using C2 pointers it just reads every sector at least twice, so the whole goal of C2 wrt EAC is to speed up the extraction process. If no C2 error at all is detected by the drive, EAC can read normally; as soon as there is any error, it falls back to the old way of handling audio extraction. This should give you the speed-up you want for clean discs, while staying as secure (wrt audio extraction) as you can get with damaged discs.
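
A rough sketch of that fallback, with the reader callbacks as hypothetical placeholders rather than EAC’s or any drive’s actual interface:

```python
# Rough sketch of the extraction strategy described above. The reader
# callbacks are hypothetical stand-ins, not EAC's or any drive's real API.

from typing import Callable, List, Tuple

ReadWithC2 = Callable[[int], Tuple[bytes, List[bool]]]  # (audio data, per-byte C2 pointers)
Read = Callable[[int], bytes]

def rip_sector(lba: int, read_with_c2: ReadWithC2, read: Read,
               max_rereads: int = 16) -> bytes:
    data, c2 = read_with_c2(lba)      # single fast read with C2 information
    if not any(c2):
        return data                   # no C2 error reported: accept this read

    # A C2 error was reported: fall back to the old secure behaviour and
    # re-read until two consecutive reads agree (or give up after a while).
    previous = read(lba)
    for _ in range(max_rereads):
        current = read(lba)
        if current == previous:
            return current
        previous = current
    return previous                   # still unstable after max_rereads reads
```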

> Of course a drive reporting everything as wrong would be silly. But a drive reporting
> any error detected at the C2 stage (correctable or not) should be the most secure way
> for post-processing by tools such as EAC, shouldn’t it?

It is more secure in the sense that you should avoid most miscorrections, so you will miss fewer real errors. On the other hand, your software will also spend time trying to fix imaginary errors on sectors that are correct; whether or not this then degrades the quality of your rip depends on what the software actually does there.

If the improvement in error correction (from the standard behaviour described in digital audio textbooks, where E32 is uncorrectable, to the E42-correctable strategies actually used in some drives) is indeed responsible for C2 scores below 100% (the number of wrong values in the final data that are flagged as erroneous, divided by the total number of wrong values), then a solution would be to build chipsets that allow two reading modes:

- An efficient reading mode with poor C2 reporting (enabled by default)
- Or a less robust reading mode (sensitive to scratches, picky about bad CD-R media) but with 100% accurate C2 reporting (selectable through a special read command)
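
For illustration, a small sketch of how such a score could be computed against a reference image, also counting the flagged-but-correct bytes a fair test should report (names are mine):

```python
# Sketch of the accuracy score defined above, computed against a reference
# image whose contents are known exactly. It also counts correct bytes that
# were flagged as wrong, which (as pointed out earlier) a fair test should
# report alongside the score. Names are illustrative only.

from typing import List, Tuple

def c2_accuracy(reference: bytes, ripped: bytes, c2_flags: List[bool]) -> Tuple[float, int]:
    wrong = flagged_wrong = false_flags = 0
    for ref_b, rip_b, flagged in zip(reference, ripped, c2_flags):
        if ref_b != rip_b:
            wrong += 1
            if flagged:
                flagged_wrong += 1    # real error, correctly pointed at
        elif flagged:
            false_flags += 1          # correct byte reported as wrong
    score = flagged_wrong / wrong if wrong else 1.0
    return score, false_flags
```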