This would be my suggestion using Windows, although it is not as realistic as pulling the plug on the PC:
Go into Device Manager and disable write caching on the SSD (under Disk drives, the drive's Properties, Policies tab), then:
[li]Prepare a folder with a large number of small files (e.g. 5000 x 1MB files) on a RAM drive or another fast read source (a quick way to generate such a set is sketched after this list).
[/li][li]Create an empty folder on the SSD as the target.
[/li][li]Set up the SSD so it can be quickly unplugged, for example in a hot-swappable bay if possible.
[/li][li]Start the file copy of this folder using a method that logs what is copied.
[/li][li]Disconnect the SSD before the copy completes.[/li]
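For the first step, one way to generate the file set is with fsutil from an elevated command prompt (the folder and file names here are just placeholders). Note that fsutil creates zero-filled files, so lost data that happens to read back as zeros would slip past a byte compare; filling the files with distinctive data is more conclusive, but zeros are still enough to catch missing or truncated files:
md w:\fileset
for /l %i in (1,1,5000) do fsutil file createnew w:\fileset\file%i.bin 1048576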
The following is a command line example that logs while copying:
xcopy w:\fileset\* x:\fileset\ /j >w:\log.txt
This assumes ‘w:’ is the drive with the source file set and ‘x:’ is the SSD being tested. ‘/j’ copies without buffering, so each file is read and written directly instead of passing through the Windows file cache.
xcopy prints the name of each file as it is copied, so the last file listed in log.txt marks the point where the SSD was disconnected. In theory, every file before that one should have been copied successfully.
As a test, I would suggest using a file compare utility to check every copied file against the original. This can be done from the command line as follows, assuming the same drive letters:
for %f in (x:\fileset\*) do fc /b "w:\fileset\%~nxf" "x:\fileset\%~nxf" >>w:\complog.txt
Once this completes, complog.txt will contain a list of all the files compared. The last file will almost certainly show a lengthy series of hex byte mismatches, or a length mismatch, since it was cut off mid-write. However, if any files before the last one also show mismatches, then those files were not written successfully. And if any files near the end of log.txt are missing from the SSD entirely, that is also a clear sign that cached data was lost, in this case the directory entries in the file system.
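To list files that never made it onto the SSD at all, a loop like this works (misslog.txt is just an arbitrary name):
for %f in (w:\fileset\*) do if not exist "x:\fileset\%~nxf" echo %~nxf >>w:\misslog.txt
Any names that show up in misslog.txt but also appear in log.txt point to lost file-system metadata rather than lost file contents.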
[B]Edit:[/B] I didn’t see Dee’s reply until after I posted, and I hadn’t considered the complexity involved.