1.1 ECC vs 1.4 ECC

hey

I see that most people get a 1.1-1.2 average ECC interval when scanning (at 8X), but I'm getting around 1.45 because mine is an external drive (Lite-On 165P6S @ MS0R) connected through USB.

  1. Is there any method that could lower the ECC interval (besides reducing the scan speed)?

  2. Given the longer interval, how should I interpret the scan results so that I can compare them with normal scans (~1.2 ECC)?

thx

Getting a faster system, a better USB adapter, or a better external enclosure (chipset) are all I can think of. Only the USB enclosure is really worth considering, IMO. Enclosures based on the Genesys GL811E chipset are best for minimizing dropped samples, IIRC.

  2. Given the longer interval, how should I interpret the scan results so that I can compare them with normal scans (~1.2 ECC)?
    You can compare PIE/PIF maximum values directly, and jitter values as well.

PIE and PIF totals and averages need to be normalized by multiplying them by the reported scanning interval.

Example:
Original values:
Maximum PIE = 37
Total PIE = 6345
Average PIE = 0.35
Maximum PIF = 2
Total PIF = 859
Average PIF = 0.01
Scanning interval = 1.21 ECC

Normalized values:
Maximum PIE = 37
Total PIE = 6345 * 1.21 = 7677
Average PIE = 0.35 * 1.21 = 0.42
Maximum PIF = 2
Total PIF = 859 * 1.21 = 1039
Average PIF = 0.01 * 1.21 = 0.01
Normalized Scanning interval = 1 ECC
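
If you'd rather script this than do it by hand, here's a minimal Python sketch of the same normalization; the function name is mine, and the numbers are simply the example values above:

```python
def normalize(total, average, reported_interval):
    """Scale the total and average of a 1 ECC scan by the reported interval.
    Maximum values are left as-is and compared directly."""
    return total * reported_interval, average * reported_interval

interval = 1.21  # reported scanning interval (ECC)
pie_total, pie_avg = normalize(6345, 0.35, interval)
pif_total, pif_avg = normalize(859, 0.01, interval)

print(f"PIE: total {pie_total:.0f}, average {pie_avg:.2f}")  # total 7677, average 0.42
print(f"PIF: total {pif_total:.0f}, average {pif_avg:.2f}")  # total 1039, average 0.01
```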

Scanning is not an exact science, so don’t take reported or calculated PIE/PIF at face value. There can easily be 30% variance between two consecutive scans of the same disc in the same drive at the same speed.

EDIT:
Note: The normalization example above is for 1 ECC scanning only!

The normalizing factor is ([actual scanning interval] / [requested scanning interval]), so please don't divide by 8 for an 8 ECC scanner such as a BenQ drive.
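
To make that rule concrete, here's a small sketch of the factor; the 8.3 ECC reading for the 8 ECC scan is a made-up number, purely for illustration:

```python
def normalization_factor(actual_interval, requested_interval):
    """Normalizing factor = [actual scanning interval] / [requested scanning interval]."""
    return actual_interval / requested_interval

# 1 ECC scan that reports 1.21 ECC (the example above)
print(normalization_factor(1.21, 1))  # 1.21

# 8 ECC scanner (e.g. a BenQ) that hypothetically reports 8.3 ECC
print(normalization_factor(8.3, 8))   # ~1.04, not 8.3, and certainly not divided by 8
```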

Thanks, DrageMester. Quite a calculation exercise you've got there, down to the last digit. :slight_smile:

BTW, what formula should we use to account for removed glitches, like those reported in this pic?

32591 removed glitches?! :eek:

I would simply discard the scan as meaningless with that many “glitches” removed.

In general, I don’t see how you could calculate “normalized values” from glitches, because glitches, unlike dropped samples, cannot be considered to follow the same random distribution as the successful samples.

Heh, then I'm happy, because it's not my scan. :wink: But your boss won't be that happy about your statement. :bigsmile:

I'm sure many here, me included, would like to hear in more detail what glitches really represent.
Thanks.

Scanning is not an exact science

is the thing you must remember, and it should be in a big font across the top of this forum :slight_smile:

The thing that helped me was a lot of reading (this forum) and a lot of scanning; once I started to understand what [U]my[/U] drives were telling me, I got a better handle on things. I think it's also a good idea to run the discs a few times and see just how much variation there is between each run until you learn it. I do more scans of each disc now, at least a 4x and a 12x or 16x, to see if the numbers change by a large amount, and often on the better discs they don't. This is the problem with my NEC: you can run the same disc over and over and you would not be able to tell that it was the same disc. But it has proved its worth in the TR test area, where none of my other drives come close.

I don't know just how many samples are being dropped, but you should be able to get a good understanding even so, once you learn the drive. But the glitches thing? I have never (as far as I remember) had a scan where it has had to remove glitches.

I also 'think' that my Lite-On might be a better scanner with the MS0P firmware; I will be testing it more at a later date, as I could be wrong.

:iagree: :clap:

Just curious: why are you modifying the [I]averages[/I] as well? If the samples are representative, the average number of PI/PIF per sample shouldn't depend on the coverage. The totals will increase, of course: they're the product of rate × number of samples. Increasing the effective sample count (by applying the factor for the scanning interval) takes that into account.

G

The average PIE value calculated by CD Speed (v4.7.5.0) is the average number of reported PIE per 8 ECC blocks.

You can test this yourself with this formula:

Avg PIE per 8 ECC blocks = ( [total PIE] / [size in MB] / [32 ECC blocks per MB] ) * 8 ECC blocks

The formula ignores the reported scanning interval, so the average value also needs to be normalized when comparing scans.
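
As a sanity check of that formula, here's a short sketch with made-up numbers (the disc size and total PIE below are hypothetical, just to show where each term goes):

```python
def avg_pie_per_8_ecc(total_pie, size_mb):
    """CD Speed-style average: reported PIE per 8 ECC blocks.
    One ECC block covers 32 KB, so there are 32 ECC blocks per MB."""
    ecc_blocks = size_mb * 32
    return total_pie / ecc_blocks * 8

# Hypothetical scan: 4300 MB of data with 6200 total PIE
print(round(avg_pie_per_8_ecc(6200, 4300), 2))  # 0.36

# This average ignores the reported scanning interval, so multiply it by that
# interval before comparing scans made at different intervals.
```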

OK, that's fair: CD Speed is keeping the sample count constant.

I’m more familiar with how KProbe does things (the availability of raw data sure helps :smiley: ) and there the averages are based on the actual sample counts (with the known bug for the PIF averages).

G

nice inputs :iagree:

Yeah, I know that scanning is not the best method to check data integrity, but I simply like to do it. It gives me a general idea of which media are better and worse, a plausible estimate of the lifetime and degradation of a DVD as time passes by, and the burn quality of each drive as well.

I also like to do lots of trial and error :slight_smile:

As for variance, I find the Lite-On drives to be very consistent in that respect; that's why I picked a Lite-On.

Another question: how can I see the chipset model in Windows without disassembling the external enclosure?

Also, is 1.45 ECC really bad for USB? What's the average ECC interval for internal Lite-On drives at 8X? (It's impossible to see it in the screenshots.)

You might get an idea from Device Manager; you'll find it in the USB section. Here's what my good Cypress chipset looks like.



nope…

This is strange, because the drive was bought as an external (165P6SU)…
so I thought the case & chipset were appropriate for this burner.


I never buy external drives, neither hard drives nor opticals. I buy a case and put a regular drive in it.

I bought my first (and only) external about 4 years ago: an expensive Maxtor hard drive with USB and FireWire. You'd think Maxtor would know how to build an external drive, but you'd be wrong.

It had no fan. It overheated and the entire file system became corrupted. I cut a hole in the top, installed a small fan on top, and am still using it today. It looks strange, it’s noisy, but it works.

Oh by the way, it had no power switch either. I have to unplug the power cable to shut it off.

[I]Off topic[/I]

For the average Joe this advice can be risky, because you never know if "the case", i.e. the enclosure, is compatible with your optical drive. (HDDs have far fewer compatibility issues.)
Check out this thread.

BTW, I’m very satisfied with my external Samsung SE-S184M. :smiley:

You are correct, pinto2. I checked out that thread fairly thoroughly and it helped me buy a good enclosure.

Another approach is to eliminate the chipset entirely: a SATA burner in a SATA enclosure plugged into a SATA port on the computer would do that.

Of course, then the problem is getting a good SATA port on the computer, with a driver and firmware that can handle SATA optical drives.

I have a PCI-to-SATA card that uses the Silicon Image 3114 chip, and that does the job.