Important Defects in PlexTools 2.17 - Part I

There seem to be significant errors in the calculations that result from P1P2 measurements if the High Accuracy option is not chosen.

Look at the attached image. The graph represents P1P2 for the 1-2GB region of a typical Ritek DVD recorded on my 712A, FW 1.05. The top of the graph is 50. I re-read the same region five times but used different accuracy settings. I took screen snapshots of the results after each scan, including the log showing Avg, Max, and Total. They are positioned side-by-side for comparison purposes.

First, it appears the different accuracy settings work like this: PlexTools scans 4096 sectors (blocks?) and then, depending on the Accuracy setting, it does one of the following (a rough sketch of the pattern follows the list):

—High - goes on to scan every sector, a 1X scan.
—Good - jumps to 8192 and scans another 4096, repeating this pattern, a 2X scan scanning 1/2 of the sectors.
—Middle - jumps to 12288 for the next 4096, repeating, a 3X scan of 1/3 of the sectors.
—Lower - jumps to 16384 for the next 4096 sectors, a 4X scan of 1/4 of the sectors.
—Lowest - jumps to 24576 for the next 4096 sectors, a 6X scan of 1/6 of the sectors.
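If that reading is right, the sampling can be sketched in a few lines of Python. This is only my reconstruction of the pattern above; the segment size, stride multipliers, and the sampled_segments helper are my own names, not anything documented by Plextor:

```python
SEGMENT = 4096  # sectors read per contiguous segment (inferred from the jumps above)

# Accuracy setting -> stride multiplier (1 = every segment, 6 = every sixth segment)
STRIDE = {"High": 1, "Good": 2, "Middle": 3, "Lower": 4, "Lowest": 6}

def sampled_segments(total_sectors, accuracy):
    """Yield the (start, end) sector ranges a scan at this setting would read."""
    step = SEGMENT * STRIDE[accuracy]
    for start in range(0, total_sectors, step):
        yield start, min(start + SEGMENT, total_sectors)

# Fraction of a 1GB region (524288 sectors of 2048 bytes) actually read:
for name in STRIDE:
    covered = sum(end - start for start, end in sampled_segments(524288, name))
    print(f"{name:>7}: {covered / 524288:.0%} of sectors read")
```

That just makes the coverage explicit: roughly all, one half, one third, one quarter, and one sixth of the sectors, matching the 1X through 6X figures above.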

As you can see, anything but the highest accuracy uses a sampling scheme of 4096-sector segments variously spaced apart, for ratios up to 1 in 6 at the lowest accuracy. Obviously, since not every sector is scanned in those cases, the accuracy is proportionately lower. But the overall performance can still be reasonably inferred, at a considerable time savings, provided there are no radical spikes or precipitous slopes in the graph. Fine. We can live with that if the poster declares which accuracy setting was chosen.

Trouble is, the calculations are then all off! While the graphs are very similar (I presume what is skipped is interpolated), the “Average” varies widely depending on that accuracy setting. So what exactly is Average? What parameter is divided by what parameter to get it? Total erroneous sectors divided by the total? No, because that doesn’t work out. Can anyone explain what it means?
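For anyone who wants to test a candidate definition against their own logs, a quick check like the one below is enough; the numbers in it are placeholders, not values from my scans:

```python
# Placeholder values -- substitute the Avg and Total from a real PlexTools log,
# plus the number of samples actually read at the accuracy setting that was used.
logged_avg = 1.85      # "Avg" from the log (placeholder)
logged_total = 112000  # "Total" from the log (placeholder)
samples_read = 60000   # samples actually read at the chosen setting (placeholder)

candidate_avg = logged_total / samples_read
print(f"Total / samples read = {candidate_avg:.2f} vs. logged Avg = {logged_avg}")
# If no sensible choice of divisor reproduces the logged Avg, then "Average" is
# not simply the logged Total divided by the number of samples read.
```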

In any case, the values get radically lower as the accuracy is decreased, and not by the ratio of the sampling. Clearly there is a stupid and obvious error in these calculated values, stupid because it shows considerable negligence in the programming. Who writes this stuff? Do they throw in a “new feature” half-baked and wait to see the fallout? Did someone actually think this through before releasing the software? This is not a respectable way to create software, and I wonder what other errors there are in PlexTools.

(Well, there is something else, with jitter measurements. See part II in another message thread.)

I am distressed to see that the otherwise excellent DVD-writer reviews over at dvdrinfo.com now use the “Middle” accuracy setting, meaning that those Log calculations can’t be trusted or compared with values obtained at other accuracy settings. Begging here: Please go back to High Accuracy!

Very interesting findings and nice presentation… how about emailing Plextor and asking how they calculate each of the accuracy settings?

Good info kentech, would be nice to get some feedback on this from people like Jamos and OC-Freak (they know a lot more about this error scanning stuff than me ;)). Would also be interesting to e-mail Plextor with this although I doubt they will give you any information on it. Worth a shot though…

I looked at your numbers (thanks, btw) and it seems that:

  1. If the High-accuracy figures are correct, then (sum of PIE error values) / 60680 error samples = 1.85 PIE per unit of measure.

  2. Let’s assume that the distribution of PIE error values does not change at ‘Good’ scanning accuracy. This means that the average is the same, but the number of samples is halved. Thus the calculation should be:

30739 * 1.85 / 30739 = 1.85

However, since the reported average is 0.47, they must be dividing the sum by:

30739 * 1.85 / X = 0.47
<=>
X = 30739 * 1.85 / 0.47 ≈ 120993 samples (~ 2 x 60680)

So they have in fact DOUBLED the number of samples used in the calculation, when they should have HALVED it (they are scanning half the number of samples!).

  3. The story remains the same for ‘Middle’, ‘Low’ and ‘Lowest’ accuracies: the divisor (if we assume that the average should stay roughly the same*) is again c. 120 000, when it should be the number of samples for that particular scan speed.
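To make that arithmetic concrete, here is a minimal sketch using the sample counts and averages quoted above; the implied divisor is only my reading of what PlexTools appears to be doing, not its actual code:

```python
# Figures quoted above for the High-accuracy (1X) and 'Good' (2X) scans.
full_scan_samples = 60680   # error samples read by the High-accuracy scan
full_scan_avg = 1.85        # Avg reported for the High-accuracy scan
good_scan_samples = 30739   # samples actually read at 'Good' (about half of full)
good_scan_avg = 0.47        # Avg reported by PlexTools at 'Good'

# If the error distribution is roughly uniform, the PIE sum at 'Good' is about:
good_scan_sum = good_scan_samples * full_scan_avg     # ~56867

correct_avg = good_scan_sum / good_scan_samples       # = 1.85, as expected
implied_divisor = good_scan_sum / good_scan_avg       # what PlexTools seems to divide by

print(f"correct average : {correct_avg:.2f}")
print(f"reported average: {good_scan_avg:.2f}")
print(f"implied divisor : {implied_divisor:.0f} (2 x full-scan samples = {2 * full_scan_samples})")
```

The implied divisor comes out near 121 000 rather than the 30739 samples actually read, which is exactly the doubling described above.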

I think you are correct and there is a clear miscalculation in the algorithm.

Good find!

regards,
halcyon

* I say roughly because, due to the statistical nature of the reading, the exact number of errors is never the same, and speed has a slight effect on these values as well (with the scan speeds used by PlexTools/PX712).

I appreciate the information, kentech. I am not that comfortable with the sampling aspect of it and am really glad you brought this to my attention. I wonder if it would be possible to run at the faster scan rate, such as 4X, while still scanning all the sectors. Not as fast as the sampled scans, but I would find that more useful.