LG drives read Nintendo Wii disc?


#1

I was once told by somebody that LG drives can read Wii discs. Well, my BE14NU40 certainly cannot.

Were some computer drives actually able to read Wii discs?


#2

Are you referring to the Wii Optical Disc (RVL-006) or the Wii U Optical Disc (WUP-006)?

The RVL-006 discs can be read by some drives. I’m unaware of any that read the WUP-006 discs, but I’ve never really looked.

For Wii Optical Discs (RVL-006), refer to the Afterdawn link below for more information.

References:

Nintendo optical discs: https://en.wikipedia.org/wiki/Nintendo_optical_discs
DVD Drives for Wii & Gamecube Backups: https://forums.afterdawn.com/threads/dvd-drives-for-wii-gamecube-backups.585348/


#3

I meant RVL-006.
Does the GDR-8162B support them?

I have many working GDR-8162Bs at home, salvaged from discarded school PCs. But why do old drives support this and not new ones?


#4

According to numerous sources, including the link I posted above, yes, the GDR-8162B can.

The GC-Forever wiki does a good job of explaining RawDump (https://www.gc-forever.com/wiki/index.php?title=RawDump) without getting overly technical.

Personally, and for better or worse, I have never owned a Nintendo product, so I have zero experience with this.


#5

If the GDR-8162B (2003) can, why can’t the BE14NU40 (2014)?


#6

I’d assume it comes down to vendor-specific SCSI commands. They worked with some older drives, but not all of them, and not with newer ones.


#7

If you read through the friidump source code (Google for it), the Hitachi-specific code appears to use an undocumented read command. More Googling will reveal that some older Xbox consoles used a Hitachi/LG DVD drive, and some folks reverse engineered the underlying processes that interacted with a Panasonic MN* chip on that Hitachi/LG drive.

If this is indeed the real technical history, then this undocumented read command is highly specific to those particular older LG drives, possibly only the ones with a Panasonic MN* chipset.

Current LG drives use a MediaTek chipset. (LiteOn drives have used MediaTek chipsets for a long time.)
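
For anyone curious what that looks like at the code level: below is a rough sketch of how a raw (possibly vendor-specific) SCSI command gets sent to a drive on Linux through the SG_IO ioctl. The CDB contents here are placeholders, not the actual undocumented Hitachi/LG opcode; for that you’d have to read the friidump source.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/ioctl.h>
#include <scsi/sg.h>

/* Send an arbitrary CDB to the drive and read buf_len bytes back. */
static int send_cdb(int fd, const uint8_t *cdb, int cdb_len,
                    uint8_t *buf, unsigned buf_len)
{
    sg_io_hdr_t io;
    uint8_t sense[32];

    memset(&io, 0, sizeof(io));
    io.interface_id    = 'S';
    io.cmdp            = (unsigned char *)cdb;
    io.cmd_len         = cdb_len;
    io.dxferp          = buf;
    io.dxfer_len       = buf_len;
    io.dxfer_direction = SG_DXFER_FROM_DEV;
    io.sbp             = sense;
    io.mx_sb_len       = sizeof(sense);
    io.timeout         = 10000;   /* milliseconds */

    if (ioctl(fd, SG_IO, &io) < 0)
        return -1;
    return ((io.info & SG_INFO_OK_MASK) == SG_INFO_OK) ? 0 : -1;
}

int main(void)
{
    /* Placeholder 12-byte CDB: fill in the vendor opcode + parameters. */
    uint8_t cdb[12] = { 0 };
    uint8_t buf[2064];
    int fd = open("/dev/sr0", O_RDONLY | O_NONBLOCK);

    if (fd < 0) { perror("open"); return 1; }
    if (send_cdb(fd, cdb, sizeof(cdb), buf, sizeof(buf)) == 0)
        printf("command completed\n");
    return 0;
}
```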


#8

Last night I decided to revisit friidump.

On current DVD writer drives (e.g. the LG GH24NSC0 or LiteOn iHAS124F), none of the reading methods worked straight out of the box, even when the program was deliberately forced to use the standard DVD reading method.

I ran through some of my old code which adapted the friidump methods, and found that the dumped data is arranged differently in the DVD drive’s cache. I’ll have to spend more time deciphering exactly what the arrangement is.

Nevertheless, the cache dumps for the same set of consecutive sectors from a particular DVD appeared, on the surface, to be identical from both the GH24NSC0 and iHAS124F drives. IIRC, both drives use the same MediaTek chip, so most likely this particular cache arrangement is specific to that chip.


#9

Interesting, thanks.
@Ceym6464, come discuss with us!


#10

For this type of stuff, you should probably write your own code and do your own investigations.

This is not the sort of thing that has easy off-the-shelf tools. The only obvious “off the shelf” thing that would be useful is the friidump source code.

If you don’t know how to write C code that handles the SCSI interface (i.e. reading sectors, etc.), take a look at the code of open-source projects like libdvdcss. There are other pages online (found via Googling) that explain how to do this.

In general, for stuff this low-level, the only way to figure out what’s going on is to write your own code, with a lot of trial and error in deciphering the details.
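
To make the “reading sectors” part concrete, here is a sketch of a plain standard read built on the send_cdb() helper from my SG_IO sketch above. Opcode 0xA8 is the standard MMC READ (12) command; the helper name and everything else is my own sketch, not friidump’s API.

```c
#include <stdint.h>

/* From the SG_IO sketch earlier in the thread. */
int send_cdb(int fd, const uint8_t *cdb, int cdb_len,
             uint8_t *buf, unsigned buf_len);

/* Read one 2048-byte user-data sector with a standard READ (12). */
int read_sector(int fd, uint32_t lba, uint8_t buf[2048])
{
    uint8_t cdb[12] = { 0 };

    cdb[0] = 0xA8;                  /* READ (12) opcode */
    cdb[2] = (uint8_t)(lba >> 24);  /* LBA, big-endian  */
    cdb[3] = (uint8_t)(lba >> 16);
    cdb[4] = (uint8_t)(lba >> 8);
    cdb[5] = (uint8_t)lba;
    cdb[9] = 1;                     /* transfer length: 1 sector */

    return send_cdb(fd, cdb, sizeof(cdb), buf, 2048);
}
```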


#11

If you’re not willing to do this (ie. writing code, etc …), then there’s other easier hobbies to pursue than examining low level processes in optical disc playback.

This is not a “hobby” for the faint of heart. :wink:


#12

I looked more closely at the cache data dumps from the current LG GH24NSC0 and LiteOn iHAS124F drives. These MediaTek-based drives seem to dump 2236 bytes per sector.

Friidump assumes the cache dumps are either 2064 or 2384 bytes per sector. (My old 2009-era LiteOn rebadge drive dumped its cache data at 2384 bytes per sector.)


#13

Doing some more analysis on the cache dumps from the LG GH24NSC0 and LiteOn iHAS124F, these 2236-byte sector dumps appear to be taken after the PI error correction has been performed, but before any PO error correction or descrambling is done.

(If you don’t know what PI, PO, or the scrambling are for DVDs, look up the ECMA-267 and ECMA-337 documents.)

It appears the PI error correction bytes were discarded, while the last 172 bytes of each 2236-byte sector are possibly a portion of the PO error correction bytes.

I haven’t yet verified whether these possible “PO” bytes match the mathematical specification in the ECMA-267 or ECMA-337 documents.
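
Putting the arithmetic together (2236 = 2064 + 172), my current guess at the layout of one of these 2236-byte cached sectors is below. The first 2064 bytes follow the ECMA-267 data frame layout; the trailing 172 bytes being a PO row is only my inference, not something from a datasheet.

```
offset  size  contents
0       4     ID (sector number etc.)
4       2     IED (error check bytes over the ID)
6       6     CPR_MAI (copyright management bytes)
12      2048  main data, still scrambled
2060    4     EDC over the preceding 2060 bytes
2064    172   possibly one row of PO bytes (PI bytes discarded)
```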


#14

To see whether the cache dumps look correct, I wrote some more code to do the unscrambling. The scrambling algorithm is a simple linear feedback shift register (LFSR), whose initial seeds are listed in the ECMA-267/337 documents.
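
My unscrambling code boils down to something like this sketch: a 15-bit LFSR (polynomial x^15 + x^4 + 1, i.e. feedback from bits 14 and 10) whose output bytes are XORed over the 2048 main data bytes. The 16 initial presets are tabulated in ECMA-267; I’ve only put a placeholder here, and the exact bit ordering is my reading of the spec’s figure, so check it against the document.

```c
#include <stdint.h>

uint16_t lfsr;   /* 15-bit shift register state */

/* Clock the LFSR 8 times; output bits are taken from the top. */
uint8_t lfsr_byte(void)
{
    uint8_t out = 0;
    for (int i = 0; i < 8; i++) {
        int fb = ((lfsr >> 14) ^ (lfsr >> 10)) & 1;   /* x^15 + x^4 + 1 */
        out  = (uint8_t)((out << 1) | (lfsr >> 14));
        lfsr = (uint16_t)(((lfsr << 1) | fb) & 0x7FFF);
    }
    return out;
}

/* XOR the keystream over the 2048 main data bytes of one frame
 * (offsets 12..2059). seed_group picks one of the 16 presets,
 * normally derived from the sector number. */
void unscramble(uint8_t *frame, int seed_group)
{
    static const uint16_t presets[16] = {
        0x0001   /* placeholder: copy the real presets from ECMA-267 */
    };

    lfsr = presets[seed_group & 15];
    for (int i = 12; i < 12 + 2048; i++)
        frame[i] ^= lfsr_byte();
}
```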

Doing this unscrambling for the first several hundred sectors (without any PO error correction), the data matched the corresponding sector views shown by IsoBuster for the original DVD.

For the ID and ID error detection (IED) bytes (i.e. the first six bytes of a 2236-byte sector dump), the values I calculated were consistent with the mathematical specification in ECMA-267/337.


#15

In each 2236-byte sector dump (from the cache), ECMA-267/337 asserts that bytes 2061 to 2064 are error check bytes over the previous 2060 bytes (without any scrambling from the linear feedback shift register).

So I ran the 2060 bytes, after the scrambling was removed, through a subroutine from friidump which does this error check calculation, to see whether it replicates these 2061–2064 error check bytes. It turns out that for the first few hundred sectors, each calculation matched its corresponding 2061–2064 error check bytes.
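
That error check is a 32-bit CRC. Here is a bitwise sketch of it, assuming the generator g(x) = x^32 + x^31 + x^4 + 1 given in ECMA-267 (a table-driven version would be faster, but this form is easier to compare against the spec):

```c
#include <stdint.h>
#include <stddef.h>

/* CRC with g(x) = x^32 + x^31 + x^4 + 1 (constant 0x80000011). */
uint32_t dvd_edc(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0;

    for (size_t i = 0; i < len; i++) {
        crc ^= (uint32_t)buf[i] << 24;
        for (int b = 0; b < 8; b++)
            crc = (crc & 0x80000000u) ? (crc << 1) ^ 0x80000011u
                                      : (crc << 1);
    }
    return crc;
}

/* Check: dvd_edc(frame, 2064) should be 0 on an intact, unscrambled
 * frame (2060 bytes of header + data, then the 4 stored EDC bytes). */
```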

Reading further through the friidump source code, it appears that for Nintendo Wii discs, the schedule of initial seeds for the scrambling algorithm is not officially known. So friidump attempts to guess the seed schedule, doing an error check calculation on each guess to see whether it matches the 2061–2064 error check bytes. friidump performs this “seed guessing” when it is first invoked.
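
In other words, the seed guessing might look roughly like the sketch below. This is my own code tying together the unscrambler and EDC sketches above, not friidump’s actual implementation; and if I read the source right, the candidate space for Wii discs is every 15-bit preset rather than just the 16 standard ones.

```c
#include <stdint.h>
#include <string.h>

extern uint16_t lfsr;                /* from the unscrambler sketch */
uint8_t  lfsr_byte(void);
uint32_t dvd_edc(const uint8_t *buf, size_t len);

/* Try every 15-bit preset on one frame; accept the first one whose
 * unscrambled frame passes the EDC check. Returns -1 on failure. */
int guess_seed(const uint8_t frame[2064], uint16_t *found)
{
    uint8_t tmp[2064];

    for (uint32_t seed = 0; seed < 0x8000; seed++) {
        memcpy(tmp, frame, sizeof(tmp));
        lfsr = (uint16_t)seed;
        for (int i = 12; i < 12 + 2048; i++)
            tmp[i] ^= lfsr_byte();
        if (dvd_edc(tmp, 2064) == 0) {   /* EDC matched: likely seed */
            *found = (uint16_t)seed;
            return 0;
        }
    }
    return -1;
}
```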

More generally, I haven’t been able to figure out exactly what the purpose of this scrambling (with the linear feedback shift register) is. In the case of a generic DVD, for example, it doesn’t appear to serve any obvious cryptographic purpose.


#16

On and off over the past few weeks, during evenings, I’ve been writing some code that implements the mathematical specification for the PO error correction bytes. After multiple false starts, I finally figured out that the specification is really just polynomial long division, where the error correction bytes are the “remainder” of the division. (I remember first seeing polynomial long division in high-school math. I don’t know if they still cover it these days.)

Treating this as an exercise in polynomial division with various shortcuts (i.e. lookup tables), I was able to write code that does this calculation for 16 consecutive 2236-byte sectors starting at the correct LBAs.

It turns out these calculations correctly matched the PO bytes given by the mathematical specification in the ECMA-267 document.
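
Here is roughly what my calculation reduces to, written as a self-contained sketch. The assumptions baked in, all per my reading of ECMA-267: GF(256) built from the field polynomial x^8 + x^4 + x^3 + x^2 + 1 (0x11D), and the PO code treating each 192-byte column of the ECC block as an RS(208,192) message with generator g(x) = (x + α^0)(x + α^1)…(x + α^15). The parity is literally the remainder of the polynomial long division:

```c
#include <stdint.h>
#include <string.h>

uint8_t gf_exp[512], gf_log[256];

/* Build log/antilog tables for GF(256) with field polynomial 0x11D. */
void gf_init(void)
{
    int x = 1;
    for (int i = 0; i < 255; i++) {
        gf_exp[i] = (uint8_t)x;
        gf_log[x] = (uint8_t)i;
        x <<= 1;
        if (x & 0x100)
            x ^= 0x11D;
    }
    for (int i = 255; i < 512; i++)  /* duplicated to avoid mod 255 */
        gf_exp[i] = gf_exp[i - 255];
}

uint8_t gf_mul(uint8_t a, uint8_t b)
{
    if (a == 0 || b == 0) return 0;
    return gf_exp[gf_log[a] + gf_log[b]];
}

/* PO parity for one 192-byte column: the remainder after dividing the
 * column (as a polynomial) by g(x) = prod_{i=0}^{15} (x + alpha^i). */
void po_parity(const uint8_t col[192], uint8_t par[16])
{
    enum { NPAR = 16 };
    uint8_t g[NPAR + 1] = { 1 };

    /* Build the generator polynomial, one root at a time. */
    for (int i = 0; i < NPAR; i++) {
        for (int j = i + 1; j > 0; j--)
            g[j] = g[j - 1] ^ gf_mul(g[j], gf_exp[i]);
        g[0] = gf_mul(g[0], gf_exp[i]);
    }

    /* Polynomial long division, keeping only the remainder. */
    memset(par, 0, NPAR);
    for (int i = 0; i < 192; i++) {
        uint8_t coef = col[i] ^ par[0];
        memmove(par, par + 1, NPAR - 1);
        par[NPAR - 1] = 0;
        if (coef)
            for (int j = 0; j < NPAR; j++)
                par[j] ^= gf_mul(g[NPAR - 1 - j], coef);
    }
}
```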


#17

I looked at another way of checking the PO bytes.

The friidump code has a section that appears to do Reed-Solomon error correction. Some of this Reed-Solomon code looks like it was adapted from widely known sources such as:

http://www.eccpage.com/rs.c
http://www.eccpage.com/new_rs_erasures.c

http://www.eccpage.com/

I recently purchased a university-level textbook covering error correction codes, and read that the calculated “syndromes” should all be zero if there are no errors in the data.

So I adapted some Reed-Solomon code to do this syndrome calculation, and found that it did indeed produce zeros for the syndromes on the corresponding PO data columns from 16 consecutive 2236-byte sectors (an ECC block starting at an LBA which is a multiple of 16) which didn’t have any errors.

I’m guessing that when a DVD drive/player first reads an ECC block of 16 sectors, it does this syndrome calculation to check whether there are any errors. All it has to do is look for syndrome calculations that produce nonzero values, which imply an error.
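
The syndrome check itself is short. Reusing the GF(256) tables from my parity sketch above, it is just evaluating the received column at each generator root (α^0 through α^15 for PO), with all-zero results meaning no detectable errors:

```c
#include <stdint.h>

extern uint8_t gf_exp[512];
uint8_t gf_mul(uint8_t a, uint8_t b);   /* from the parity sketch */

/* code[0] is the highest-order coefficient (first byte of the column).
 * Returns 1 if all npar syndromes are zero, else 0. */
int syndromes_zero(const uint8_t *code, int len, int npar)
{
    int clean = 1;

    for (int j = 0; j < npar; j++) {
        uint8_t s = 0;
        for (int i = 0; i < len; i++)   /* Horner evaluation at a^j */
            s = gf_mul(s, gf_exp[j]) ^ code[i];
        if (s != 0)
            clean = 0;
    }
    return clean;
}

/* e.g. syndromes_zero(column, 208, 16) for one PO column of an ECC block */
```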


#18

(More generally).

What’s unknown is exactly what a particular DVD drive/player does when it comes across these nonzero syndromes implying an error.

I strongly suspect that the perception of a particular DVD drive being “better at error correction” than another drive may well come down to what the drive does after getting a nonzero syndrome.

For example, does it:

  • do an immediate re-read without doing any error correction?
  • re-read only if the PO errors cannot be corrected?
  • check whether particular sectors within a nonzero-syndrome block of 16 sectors pass an unscrambled EDC check (i.e. bytes 2061–2064) with a correct header ID (i.e. the first six bytes)?
  • re-read only the sector(s) that failed the EDC check or had an incorrect header ID, or discard all the previous data and re-read all 16 sectors?
  • etc …


#19

Simplifying the syndrome calculation code, I also ran it on the ID + ID error check (IED) bytes, which are the first 6 bytes of a 2236-byte sector. It also produced zero syndromes when there were no errors.

On random garbage for the 6 bytes, it usually produced non-zero syndromes.
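
In terms of the syndrome sketch above, the ID + IED check is the same routine with a 6-byte codeword and 2 check symbols (assuming, as my results suggest, that the two IED bytes behave as RS parity with roots α^0 and α^1):

```c
/* frame[0..3] = ID, frame[4..5] = IED */
int id_ok = syndromes_zero(frame, 6, 2);
```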


#20

(I was bored earlier this morning).

I decided to see, for a particular 4-byte ID from a DVD sector, how many different 2-byte values for the ID error check bytes would give back zero syndromes. Basically, I was looking for the possibility of false positives. (In principle, there shouldn’t be any false positives.)

So I wrote some code that does this as a brute-force search: go through all possible 2-byte values for the ID error check bytes, and find which ones give back zero syndromes with a particular 4-byte ID. I ran this code for the IDs of the first several hundred sector LBAs of a particular DVD disc.

Lo and behold (but not too surprisingly), for each particular 4-byte ID, the brute-force search found exactly one corresponding 2-byte value that gave back zero syndromes. Compared with the previous results from actual DVDs, they were a respective match for the first several hundred sectors I looked at.

I wouldn’t be surprised if, back in the mid-1990s when the DVD spec was first being proposed, a company like Toshiba or Sony gave an engineering/tech intern this exact brute-force calculation as a small project: basically, to explicitly show that there are no false positives for the particular range of sector IDs that DVD would be using.
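
The brute-force search itself is tiny. Sketch below, reusing the earlier routines; the 4-byte ID shown is an arbitrary example, not one taken from a real disc:

```c
#include <stdint.h>
#include <stdio.h>

void gf_init(void);                                          /* earlier sketches */
int  syndromes_zero(const uint8_t *code, int len, int npar);

int main(void)
{
    uint8_t cw[6] = { 0x30, 0x00, 0x00, 0x10, 0, 0 };  /* example 4-byte ID */
    int hits = 0;

    gf_init();
    for (int v = 0; v < 0x10000; v++) {        /* all possible IED values */
        cw[4] = (uint8_t)(v >> 8);
        cw[5] = (uint8_t)(v & 0xFF);
        if (syndromes_zero(cw, 6, 2)) {
            printf("zero syndromes at IED = %04X\n", v);
            hits++;
        }
    }
    printf("%d matching IED value(s)\n", hits);
    return 0;
}
```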