There seem to be significant errors in the calculations that result from P1P2 measurements if the High Accuracy option is not chosen.
Look at the attached image. The graph represents P1P2 for the 1-2GB region of a typical Ritek DVD recorded on my 712A, FW 1.05; the top of the graph's scale is 50. I re-read the same region five times, using a different accuracy setting each time. I took screen snapshots of the results after each scan, including the log showing Avg, Max, and Total, and positioned them side by side for comparison.
First, it appears the different accuracy settings work like this: PlexTools scans 4096 sectors (blocks?). Then, depending on the Accuracy setting, it does the following (a small sketch of the pattern follows the list):
—High - goes on to scan every sector, a 1X scan.
—Good - jumps to 8192 and scans another 4096, repeating this pattern, a 2X scan scanning 1/2 of the sectors.
—Middle - jumps to 12288 for the next 4096, repeating, a 3X scan of 1/3 of the sectors.
—Lower - jumps to 16384 for the next 4096 sectors, a 4X scan of 1/4 of the sectors.
—Lowest - jumps to 24576 for the next 4096 sectors, a 6X scan of 1/6 of the sectors.
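To make that concrete, here is a minimal sketch of the sampling pattern as I infer it from my five scans. The SEGMENT and STRIDES names and the scanned_segments function are mine for illustration, not anything from PlexTools' actual code:

```python
# Sketch of the apparent sampling scheme (my inference, not PlexTools'
# actual implementation). At stride N the drive scans a 4096-sector
# segment, then skips (N - 1) * 4096 sectors before the next one.
SEGMENT = 4096

# Stride per Accuracy setting, as inferred from the five scans above.
STRIDES = {"High": 1, "Good": 2, "Middle": 3, "Lower": 4, "Lowest": 6}

def scanned_segments(total_sectors, accuracy):
    """Yield the (start, end) sector ranges actually scanned at a setting."""
    stride = STRIDES[accuracy]
    start = 0
    while start < total_sectors:
        yield (start, min(start + SEGMENT, total_sectors))
        start += stride * SEGMENT  # jump ahead by stride * 4096 sectors

# At "Middle" the first segments are (0, 4096), (12288, 16384), (24576, 28672):
print(list(scanned_segments(30000, "Middle")))
```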
As you can see, anything but High accuracy uses a sampling scheme of 4096-sector segments variously spaced apart, at ratios up to 1 in 6 at Lowest. Obviously, since not every sector is scanned in those cases, the accuracy is proportionately lower, because some sectors were skipped. But the overall performance can be reasonably inferred, at considerable time savings, provided there are no radical spikes or precipitous slopes in the graph. Fine. We can live with that, so long as the poster declares which accuracy setting was chosen.
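If the sampled fraction is known, a whole-disc total can at least be estimated by scaling up, on the assumption that the skipped sectors have roughly the same error density as the scanned ones. A back-of-the-envelope sketch of that extrapolation (my own reasoning, not something PlexTools claims to do):

```python
# Sampling stride per setting: Lowest scans only 1 segment in 6 (my inference).
SCAN_STRIDE = {"High": 1, "Good": 2, "Middle": 3, "Lower": 4, "Lowest": 6}

def estimate_total_errors(sampled_error_count, accuracy):
    """Extrapolate a whole-disc error total from a partial scan, assuming
    the skipped sectors behave like their scanned neighbors."""
    return sampled_error_count * SCAN_STRIDE[accuracy]

# e.g. 5000 PI errors counted at "Lowest" suggest roughly 30000 disc-wide:
print(estimate_total_errors(5000, "Lowest"))  # 30000
```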
Trouble is, the calculations are then all off! While the graphs are very similar (I presume what is skipped is interpolated), the "Average" varies widely depending on that accuracy setting. So what exactly is Average?? What parameter is divided by what parameter to get it? Total erroneous sectors divided by the total? No, because that doesn't work out. Can anyone explain what it means?
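I can only guess at the formula, but here are the two most plausible candidates I can think of; both are my assumptions, since PlexTools documents neither. If the divisor were every block on the disc rather than just the blocks actually scanned, Avg would shrink by exactly the sampling ratio, which still doesn't quite match what I observe:

```python
def average_as_expected(total_errors, blocks_scanned):
    """What "Avg" presumably ought to be: errors per block actually scanned."""
    return total_errors / blocks_scanned

def average_suspected(total_errors, blocks_on_disc):
    """One plausible bug: dividing by every block on the disc, scanned or
    not, which would deflate the figure by the sampling ratio."""
    return total_errors / blocks_on_disc

# At "Middle" only 1/3 of the blocks are scanned, so the two differ by 3x:
errors, scanned, on_disc = 9000, 50000, 150000
print(average_as_expected(errors, scanned))  # 0.18
print(average_suspected(errors, on_disc))    # 0.06
```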
In any case, the values get radically lower as the accuracy is decreased, and not by the ratio of the sampling. Clearly there is a stupid and obvious error in these calculated values: stupid because it shows considerable negligence in the programming. Who writes this stuff? Do they throw in a "new feature" half-baked and wait to see the fallout? Has someone actually thought this through before releasing the software? This is not a respectable way to create software, and I wonder what other errors there are in PlexTools.
(Well, there is something else, with jitter measurements. See part II in another message thread.)
I am distressed to see that the otherwise excellent DVD-writer reviews over at dvdrinfo.com now use the "Middle" accuracy setting, meaning that those Log calculations can neither be trusted nor compared with values obtained at other accuracy settings. Begging here: please go back to High Accuracy!