Originally posted by millenium
It's not only a measurement of drive quality, but also of media.
Actually it's a measure of readability of a disc (only in terms of error frequency) on a particular drive.
It does not prove that a particular media is bad on all drives.
It does not prove that a particular burn (using a certain drive) is bad on all drives.
It shows that a particular burn, on a particular media, at a particular speed, using a certain burner, produces a statistically fairly steady distribution of read errors on a certain reader, at a certain read speed, with a certain test utility.
It could be that these results can be generalised to some degree to a larger population of drives, meaning that a certain brand of disc burned on a certain burner is so-and-so error prone on various readers.
Or it could be that there is very little correlation, or something in between.
Nobody has actually tried to find this out yet.
3.5 avg is crappy media and 0.8 is acceptable media.
No. Even assuming that the results were universally valid for all drives, I still wouldn't agree with the above assessment.
An average C1 count of 3.5 is not necessarily bad, as rdgrimes has previously corrected me on this very issue.
And 0.8 can be considered much more than just "acceptable".
Of course, everything is relative and there are better media, but even those C1 counts can be considered very acceptable as long as there are no C2 errors.
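To make it concrete what an "average C1" figure like 3.5 or 0.8 summarises, here is a rough Python sketch. The per-second counts and the way the summary is computed are only my assumptions for illustration, not the actual internals of any particular test utility:

[code]
# Rough sketch: how an "average C1" figure can be derived from a scan.
# The per-second sample counts below are made up purely for illustration.

def summarise_scan(c1_per_second, c2_per_second):
    """Summarise a surface scan from per-second C1/C2 error counts."""
    seconds = len(c1_per_second)
    avg_c1 = sum(c1_per_second) / seconds   # the "avg C1" figure, e.g. 3.5 or 0.8
    max_c1 = max(c1_per_second)
    total_c2 = sum(c2_per_second)           # any C2 at all is the real warning sign
    return avg_c1, max_c1, total_c2

# Two hypothetical (very short) scans, no C2 on either:
good_burn_c1 = [0, 1, 0, 2, 1, 0, 1, 1]     # averages 0.75
worse_burn_c1 = [2, 4, 3, 5, 3, 4, 2, 5]    # averages 3.5
no_c2 = [0] * 8

for label, c1 in (("good burn", good_burn_c1), ("worse burn", worse_burn_c1)):
    avg_c1, max_c1, total_c2 = summarise_scan(c1, no_c2)
    print(f"{label}: avg C1 = {avg_c1:.2f}, max C1 = {max_c1}, total C2 = {total_c2}")
[/code]

The average alone hides a lot, which is why the maximum per-second count and the presence of any C2 matter just as much as the headline figure.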
With 7S you can't really say if media is good or not.
You can't do that with any drive. I mean, you cannot universally say whether a media is good or bad unless you test that disc on every possible drive and find that it truly is bad on all of them.
I have some media that measures really well on a LiteOn 48246S when burned on an LG4480b, but measures quite badly when burned on a Plextor Premium.
Now, is this media good or bad?
We can only say that, for the reading drive (LiteOn 48246S), the amount of errors on that media seems to vary depending on which drive did the burning.
Further, if a drive reports a C1 count that is lower than with any other drive, it doesn't mean that particular drive is more accurate or the best for measuring error rates.
It implies that the drive with the lowest C1 counts on a particular media probably has the least trouble reading that disc. However, even that is not proven; it could also be due to other reasons (inaccuracy in counting/reporting errors, for example).
Also, how can media quality be compared when with one drive you get 0.3, and using another drive of the same model with the same media you get 1.2?
If you mean that the exact same burned disc produces different error rates when measured on two instances of a certain drive model (two LiteOn 52246S drives for example), then it's probably down to unit-to-unit variation.
However, if the same brand of media (two different burns on the same brand of media) measures differently, then it could also be down to media- or burn-specific issues. Not all discs of the same media brand measure exactly the same, even on the same test drive (one unit). Media quality varies as well.
It's important to remember that the whole media, burn and read quality is a statistical equation. Things are not always exactly the same from media to media, from burn to burn, from burner to burner or from reader to reader. However, if we take a big enough sample, most results usually converge (or at least, this is the assumption in general) towards some useful statistical averages which we can use as a rule of thumb.
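As a toy illustration of that last point, here is a small Python sketch showing how individual scans can bounce around while the mean over a large enough sample settles down. All the numbers and spread ranges below are invented assumptions, not measurements:

[code]
# Toy illustration with invented numbers: single scans vary, but the mean over a
# large enough sample of scans settles towards a stable figure.
import random

random.seed(1)

def scan_avg_c1():
    """One hypothetical scan result: burn-to-burn and reader-to-reader spread combined."""
    burn_quality = random.uniform(0.5, 1.5)   # disc/burn variation (assumed range)
    reader_bias = random.uniform(0.8, 1.2)    # unit-to-unit reader variation (assumed range)
    return burn_quality * reader_bias

for sample_size in (3, 10, 100, 1000):
    results = [scan_avg_c1() for _ in range(sample_size)]
    print(f"{sample_size:4d} scans: mean avg C1 = {sum(results) / sample_size:.2f}")
[/code]

The point is only the trend: any single scan can mislead, but a large enough sample converges towards a figure you can use as a rule of thumb.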
I hope that made things clearer and not muddier. My apologies for excessive wordiness. English is not my native tongue, and explaining these things as accurately as I can is not my strongest skill.