Tests on weak sectors

vbimport

#1

This post is rather long, but it should be very interesting to those seeking to know more about weak sectors - it will also raise quite a few questions :confused: Here's my lab work....

In my previous tests on max.iso (for those who don't know, it's a file with weak sectors - see the max.iso thread), one of my test results was wrong. I noticed that my result from trying to record the max.iso file as an audio CD was inaccurate and inconclusive.

Let me tell you what my test involved:

Preparations

I took max.iso and made it a multiple of 2352 bytes by appending zero bytes to the end (a bit of simple maths to calculate how much to add). Then I made a cue sheet:

FILE "Z:\MAX.ISO" BINARY
TRACK 01 AUDIO
INDEX 01 00:00:00
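The padding step can be sketched like this (a minimal Python sketch, assuming the image path is passed in; the function name is mine, not from any tool mentioned here):

```python
import os

SECTOR_SIZE = 2352  # bytes per raw/audio sector


def pad_to_sector_multiple(path):
    """Append zero bytes so the file size becomes a multiple of 2352."""
    remainder = os.path.getsize(path) % SECTOR_SIZE
    if remainder:
        with open(path, "ab") as f:
            f.write(b"\x00" * (SECTOR_SIZE - remainder))
```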

Okay, I can't use this with CloneCD to burn it, because CloneCD doesn't accept cue sheets. So I used Daemon Tools to mount the cue sheet and CD image, used CloneCD to make a CloneCD image of it (I also verified the main binary files, just to be careful), and finally burnt it to a CDRW at 4x writing with CloneCD.

Hardware/Software used

For writing: Plextor IDE 8432T, firmware 1.09 (latest)
For reading: Toshiba DVD SD-1502, firmware 1816 (latest)

CloneCD v4.0.1.3
Daemon Tools v3.10
WinHex (for padding of zeroes and comparison of files)

What was wrong with the test?

There were 3 problems:

  1. My Plextor IDE 8432T drive (or the software - I'm not sure exactly which is the cause) doesn't write the first few sectors of the image to the CD, so I always got a copy that was truncated at the start.

  2. Strangely, my Toshiba DVD drive could not read the last few sectors of the copy.

  3. Also, CloneCD v4.0.1.3 somehow produced a modified copy - I noticed this when I tested with a CDROM image containing no weak sectors, using the same technique. Settings I used for writing: AWS off, Don't Repair Subs off, 8x writing.

Please, I don't wish to get into trouble with Olli, so let's not jump to any conclusions about CloneCD until we speak to him - anyway, it's not the issue here. The newest CloneCD probably has this fixed already - I'm not sure.

The problems I described are reproducible - I tried it 5 times over just to make sure.

A change of test strategy

Anyway, I decided to change my test strategy...

I used the same technique of padding the image, but this time I added a further 2 seconds of sectors (150 sectors) to the front of the padded max.iso image and another 2 seconds at the end.
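The extra front/back padding amounts to 150 sectors each side, since a CD plays 75 sectors per second. A minimal Python sketch (function and file names are mine):

```python
SECTOR_SIZE = 2352
PAD_SECTORS = 150  # 2 seconds at 75 sectors per second


def add_edge_padding(src, dst):
    """Write dst = 150 zero sectors + src + 150 zero sectors."""
    pad = b"\x00" * (SECTOR_SIZE * PAD_SECTORS)
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        fout.write(pad)
        fout.write(fin.read())
        fout.write(pad)
```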

This time, I used Golden Hawk's CDRWIN 3.8G to burn the cue sheet and image to the CDRW again at 4x, CDRWIN's raw mode: off.

Results

I read the CDRW back with CDRWIN and the Toshiba DVD drive, then cut out the zero padding so that I could compare it with the original.

Finally, the result: the comparison gave 3 differences. But then I thought about it and suspected those might be due to jitter. So I re-read it - this time the comparison gave 0 differences!!! I tried 4 times and all gave 0 differences!!!

Now I'm getting somewhere (eyebrow raised - this is strange). I would have expected differences. This suggests the following:

Weak sectors are perfectly writable, and the data is all intact and readable, when you write them as an audio CD.

I took my tests further....

I did the same tests on the posted SD2 images (see the 'EFM Testing Images Mirror' thread) that I found in this forum, and also on the famous generated non-writable weak.iso image posted by blackcheck (see the 'Almost correct EFM recording' thread).

All of them gave 0 differences when compared with the originals I started with.

So what is going on here?

It looks like the problem is not an EFM encoding problem. And don't tell me interpolation can help here!! Do the maths - it won't give you those bytes!!

To me, it looks like the writer, when fed those weak sectors, is probably truncating some bytes, resulting in data CDs whose sectors differ from the original - in most cases unreadable sectors, because of a missing sync or header.


#2

I took the max.iso and made it into a multiple of 2352 bytes by appending bytes of zeroes to the end (did a bit of simple maths to calculate how much to add). Then I made a cue sheet:

this way you do not get weak sectors at all. the user data begins at offset 0x10 in the raw sector. also you have to append your bytes to each sector.
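The raw sector layout blackcheck mentions can be illustrated like this (a sketch only - the 288 EDC/ECC bytes at the end are left zeroed here, whereas a real converter such as ISOBuster computes them; the function names are mine):

```python
SYNC = b"\x00" + b"\xff" * 10 + b"\x00"  # 12-byte sync pattern


def to_bcd(n):
    """Encode a small integer as binary-coded decimal."""
    return ((n // 10) << 4) | (n % 10)


def raw_mode1_sector(user_data, lba):
    """Wrap 2048 user bytes into a 2352-byte raw MODE1 sector.
    User data starts at offset 0x10, after sync (12) + header (4)."""
    assert len(user_data) == 2048
    minutes, rest = divmod(lba + 150, 75 * 60)  # MSF address includes 2-second offset
    seconds, frames = divmod(rest, 75)
    header = bytes([to_bcd(minutes), to_bcd(seconds), to_bcd(frames), 0x01])
    return SYNC + header + user_data + b"\x00" * 288  # EDC/ECC omitted in this sketch
```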

just use isobuster to convert your user iso to raw.


#3

Are you sure about that?

Okay, I am going to test this using ISO Buster for the conversion as you mentioned.


#4

Just tested it and there were 0 differences.

I tested max.iso: used ISOBuster to extract it as a raw 2352-byte-per-sector file, burnt it as audio with the Plextor 8432T and CDRWin, then read it back with the Toshiba DVD drive.

Mind you, I added 2 seconds of zeroes (150 x 2352 bytes) to the beginning and end of the file so that I could get all the bytes back from the copy - this does not affect the weak sectors, because they're just extra 2352-byte sectors of zeroes.

This time there are weak sectors!!!

Blackcheck, I noticed in one thread you said that you tried this and couldn't even read some of the sectors - I think you should try the test again on a different drive. Remember to add the extra pad sectors to the beginning and end - otherwise you'll miss out some data. This is a flaw in most writers when writing audio - some sound bytes get cut off at the start, and some cannot be read at the end.

Also, when comparing you can’t just compare the copy with the original directly - you have to cut out the zeroes first.

If you don’t believe me then just try it out for yourself. You’ll get the same results.


#5

audio data doesn’t get scrambled. thus no weak sectors.
try again as data and you will fail.
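For context, the scrambling step that applies to data sectors (bytes 12-2351 of each raw sector) but not to audio is, as far as I know, the ECMA-130 Annex B scrambler: a 15-bit LFSR with polynomial x^15 + x + 1, preset to 1 at each sector. A sketch of it in Python (the function name is mine):

```python
def scramble(payload):
    """XOR payload with the ECMA-130 scrambling sequence.
    The 15-bit LFSR (x^15 + x + 1) is preset to 1; because this is a plain
    XOR with a fixed stream, applying scramble() twice returns the original."""
    lfsr = 1
    out = bytearray()
    for byte in payload:
        stream_byte = 0
        for i in range(8):
            stream_byte |= (lfsr & 1) << i          # collect output bits LSB first
            carry = (lfsr ^ (lfsr >> 1)) & 1        # feedback from taps 1 and 2
            lfsr = (lfsr >> 1) | (carry << 14)
        out.append(byte ^ stream_byte)
    return bytes(out)
```

This is why the same byte pattern can come out "weak" as data but not as audio: a regular payload XORed with this sequence can land on repetitive EFM codewords, while audio bytes go to the EFM encoder untouched.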


#6

Does this suggest that weak sectors are causing the writer to do incorrect scrambling of data sectors?

If so, then this is the problem and has nothing to do with incorrect EFM encoding at all.

Or are you saying that I must use a different sample of sectors - one that produces weak sectors for audio? Because otherwise this doesn't make sense - it would mean that recording as audio cannot produce weak sectors.


#7

Does this suggest that weak sectors are causing the writer to do incorrect scrambling of data sectors?

why not read the readme of efmgame?

If so, then this is the problem and has nothing to do with incorrect EFM encoding at all.

how often do you people need to hear it?
efm is nothing but a table lookup function.
nothing can go wrong!

Or, are you saying that I must have a different sample of sectors which has weak sectors for audio. Because this doesn’t make sense - this would mean that recording as audio cannot have weak sectors.

yes, i think it should be possible to generate weak audio sectors, too. just check how audio cds are encoded and make sure the efm encoder is fed with a regular pattern.


#8

Thanks, this cleared up things for me - now I understand what is happening. :slight_smile:

I did read the readme.txt, but didn't understand it before - now, after some practical tests, I understand it much more clearly.

You guys are ahead of us, and I've only just got to grips with the truth now. Things have been clouded for me because different views (some of which are inaccurate) are expressed in this forum.

Great - just being curious: if someone has some spare time, could they make a raw audio image file containing weak sectors?


#9

@Truman
If you correct the read/write offsets of your drives, you don’t need to add zero bytes to your file.
As far as I know, EAC is the only program able to correct offsets on reading/writing.
But keep in mind, that Toshibas always will miss some bytes, because they can’t overread into the 1st pregap/lead-out.
So you’ll have to use your PlexWriter for re-reading …


#10

@blackcheck
If audio data doesn’t get scrambled, why not recording the weak-sectors-containing image as audio and then changing the TOC to report it as MODE1/2352?
As far as I know, BlindWrite did exactly this, to support recorders which weren’t able to write bad sectors in regulary RAW mode (e.g. SafeDisc v1). I think, this was the “birth” of the SAO-Mode. At least it was the first time, a recording software supported this mode. CloneCD later adapted this mode (early versions of CCD supported only DAO).


#11

Thanks for the tip little-endian!

Oh, I see - part of the audio is recorded in the pregaps and postgaps. So that's why I get cut-off sound bytes.


#12

@Truman
Yeah, it’s exactly like this.
Unfortunately all drives available have an offset error, more or less.
See “The truth about offsets” for details.


#13

@blackcheck
If audio data doesn’t get scrambled, why not recording the weak-sectors-containing image as audio and then changing the TOC to report it as MODE1/2352?

think about it again. what do you think happens if you treat audio data (unscrambled) as MODE1/2352?

if that writing mode really works the way you describe it, the software has to prepare (e.g. scramble) the data before writing.