3550/4550/4551 re-linking spikes

Why are they created?

Are speed changes during the write phase causing them? Perhaps it's the increase in power consumption as the drive speeds up.

Does the internal 2 MB buffer run out and cause buffer-underrun re-links?
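For scale, here's a back-of-the-envelope sketch of how quickly a 2 MB buffer would drain if the host stopped feeding data entirely. The 1x DVD rate (~1.385 MB/s) is the usual approximation, not a measured figure for these drives:

```python
# Rough buffer-drain estimate: how long a 2 MB drive buffer lasts
# if the host stalls completely. The 1x DVD data rate below is an
# assumed round number (~1.385 MB/s), not a measured value.
BUFFER_MB = 2.0
DVD_1X_MBS = 1.385  # approx. 1x DVD data rate in MB/s

for speed in (4, 8, 12, 16):
    drain_s = BUFFER_MB / (speed * DVD_1X_MBS)
    print(f"{speed:2d}x: buffer empties in ~{drain_s * 1000:.0f} ms")
```

At 16x that's under 100 ms of slack, so even a brief host stall could plausibly force an underrun re-link.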

They don’t happen all the time: some discs don’t have any, whereas the next disc off the spindle may get one. Does the firmware try to adapt to disc quality on the fly?

My 4551 burns are better than my 3540 ones (overall PIE/PIF count) with the same media, but these spikes worry me.


Ok, 100+ views and no comments…

Let’s put it another way: does anyone with these drives not get re-linking spikes?

Maybe we can work out what’s causing the problem this way around :slight_smile:


I think it’s due to AOPC, which basically optimizes laser power on the fly. I’d say it’s normal, but it gives my LiteOn 1635S fits during a transfer rate test.

Hi :slight_smile:
I guess folks aren’t sure what you’re saying. Maybe some scans would be useful.


I thought that these re-linking spikes could be safely ignored, but now I’m not so sure.

Have a look in this thread about bad read transfer benchmarks of discs burned in a NEC 4551:

Crazy LiteOn 1635S Read Transfer benchmarks

Same problem I reported in the thread I just linked to.

I don’t know of any way to get rid of these PIF spikes at the re-linking points, unless NEC can solve it with a firmware upgrade, but I don’t expect that to happen.

If it weren’t for these re-linking points causing problems when the discs are read in LiteOn 1635 drives (and possibly other drives?), I would say that the NEC 4551 creates the highest quality burns of my current burners, but I really don’t like seeing this behaviour in read transfer tests! :frowning:

Is AOPC new to these drives? If so, that might explain it.

My 3540 doesn’t seem to get these link spikes (and if it does, they’re nowhere near as noticeable).


On a couple of discs I tested, the 3540 gets the same re-linking spikes as the 4551… i.e.
MCC 004 and Sony D11.

This issue has been observed for as long as we have had error scans, and while it’s not exclusive to NEC burners, they do seem to do it more than others. What, exactly, it represents in terms of the actual burn process is pure speculation. We do know that it is speed-related, and lowering burn speed can eliminate it. When a drive pauses for re-cal or speed shift, it leaves a little “blank” spot in the groove, the length of which needs to be within spec. My hunch is that this is where the issue lies.

As to how it may or may not affect playback, the general idea is that it does not. That’s not to say that some of the picky drives might not stumble at this spot. Errors are errors, but the fact that one scanning drive reports errors at that spot does not mean that all drives will also have errors there. Even if they do, it may not affect playback. So, the bottom line is that it really depends on the playback drive, but the errors are real (in the scanning drive). Another factor, and completely random, is where this little blip occurs in the data stream.

If the burning drive uses fixed OPC re-linking points, wouldn’t the location of the “blip” then be completely predictable, occurring at the same place in the data stream every time?

I think the NEC drives use fixed OPC points, so for a given model NEC drive and a given burnspeed, I think that the “blips” would be in the same location every time, or am I missing something here?

Re-linking due to buffer-underrun prevention would of course be randomly distributed, however.
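To illustrate the predictability argument: if the re-link points really are fixed and evenly spaced, the expected spike positions follow directly, and observed spike LBAs from a scan could be checked against them. The 256 MB interval below is purely a hypothetical placeholder, not anything documented for NEC firmware:

```python
# Hypothetical check: do observed PIF spikes line up with fixed,
# evenly spaced re-link points? The 256 MB interval is an assumed
# placeholder value, not a documented NEC figure.
SECTOR_BYTES = 2048       # DVD user-data sector size
INTERVAL_MB = 256         # assumed fixed re-link interval

def predicted_relink_lbas(disc_mb=4489, interval_mb=INTERVAL_MB):
    """Return the LBAs where fixed-interval re-links would fall."""
    step = interval_mb * 1024 * 1024 // SECTOR_BYTES
    last = disc_mb * 1024 * 1024 // SECTOR_BYTES
    return list(range(step, last, step))

def near_predicted(lba, predicted, tolerance=1024):
    """True if a scanned spike LBA sits close to a predicted point."""
    return any(abs(lba - p) <= tolerance for p in predicted)

predicted = predicted_relink_lbas()
# A spike at LBA 131072 (= 256 MB) would match; one at LBA 200000 would not.
print(near_predicted(131072, predicted), near_predicted(200000, predicted))
```

If spikes from repeated burns of the same media at the same speed clustered at the same LBAs, that would support the fixed-point theory; scattered positions would point at underrun prevention or on-the-fly OPC instead.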

Is it a fixed re-link, or does it adjust dynamically? I think my BenQ and Plextor both adjust on the fly… then again, they don’t show re-linking spikes. I thought the older NECs were the ones that had fixed points and the newer ones don’t… oh, what do I know anyway. Maybe L&D would know about this.

Only the speed shifts and associated pauses are predictable. With true WOPC, a re-adjustment can occur at any time. As to where they fall in the data, I was referring to the data being burned, which would be random. If an error falls in a non-critical part of a video datastream, for example, it may not cause a playback issue, whereas if it falls in a critical part, it might. It depends on the reader and how it’s programmed to handle errors. As far as testing and scanning go, it wouldn’t matter.