Next generation Sandforce controllers able to triple capacity of SSD


#1

We’ve just posted the following news: Next generation Sandforce controllers able to triple capacity of SSD ![LSI logo](http://static.myce.com//images_posts/2011/10/lsi-logo.gif)

LSI announced next-generation technologies that will feature in upcoming SandForce-driven Solid State Drives (SSDs) during the Flash Memory Summit in Santa Clara.

Read the full article here: [http://www.myce.com/news/next-generation-sandforce-controllers-able-to-triple-capacity-of-ssd-68478/](http://www.myce.com/news/next-generation-sandforce-controllers-able-to-triple-capacity-of-ssd-68478/)

Please note that the reactions from the complete site will be synched below.

#2

Disk compression for SSDs? How innovative (25 years ago)!
Haven’t bothered with “DriveSpace” since Win 3.11, when 200 MB drives were considered monster drives.


#3

Sandforce’s current compression scheme gives the user a fixed-size drive: 128 GB of flash means a 128 GB drive, regardless of how much internal compression can actually reduce the data.
This new scheme takes us back 20 years, to when Stacker (and others) gave us drives of indeterminate size that varied as the compressibility of the data changed. Personally I never used that stuff and I have no intention to start now. I find the idea of a varying, non-guaranteed drive size horrible.
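
Just to make what I mean concrete, here is a quick toy sketch (the 128 GB figure and the compression ratios are completely made up, and this is not how SandForce actually reports capacity):

```python
# Illustrative only: a fixed logical capacity versus a hypothetical
# "expanded" capacity that grows and shrinks with average compressibility.
PHYSICAL_FLASH_GB = 128  # assumed raw flash size

def fixed_capacity_gb():
    """Current scheme: compression is internal, the host always sees 128 GB."""
    return PHYSICAL_FLASH_GB

def variable_capacity_gb(avg_compression_ratio):
    """Hypothetical scheme: usable size varies with how well the data compresses."""
    return PHYSICAL_FLASH_GB * avg_compression_ratio

for ratio in (1.0, 1.5, 2.0, 3.0):
    print(f"avg {ratio:.1f}x compression -> fixed: {fixed_capacity_gb()} GB, "
          f"variable: {variable_capacity_gb(ratio):.0f} GB")
```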


#4

olddancer and aiolos, your comments are not really seeing the big picture. Infrastructure for virtualisation now includes SANs and a whole lot of thin provisioning. Getting extra space out of SSDs at no extra cost is massive for providers. Providers provision SANs, and once they hit a certain percentage full (say 70%), they provision another one. With live storage migration now the norm as well, we can move virtual machines between SANs without downtime. The more space available to us the better, as SSD SANs are already very expensive.
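
Roughly the pattern I’m describing, sketched out (the names, sizes, and the 70% threshold are all invented for illustration):

```python
# Rough sketch of the thin-provisioning pattern: watch SAN utilisation and
# provision another array once every existing one crosses a threshold.
# Names, sizes and the 70% figure are illustrative only.
PROVISION_THRESHOLD = 0.70

def utilisation(san):
    return san["used_tb"] / san["total_tb"]

sans = [{"name": "ssd-san-01", "used_tb": 72.0, "total_tb": 100.0}]

if all(utilisation(s) >= PROVISION_THRESHOLD for s in sans):
    sans.append({"name": f"ssd-san-{len(sans) + 1:02d}",
                 "used_tb": 0.0, "total_tb": 100.0})

for s in sans:
    print(f"{s['name']}: {utilisation(s):.0%} full")
```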


#5

So you’re saying this will be great for companies that require lots of space and add new storage to their systems automatically? So you think we will only see this in enterprise drives, willk?


#6

My complaint about data-compression schemes has been that they were software-based. “Why does an OS offer a data-compression algorithm for one of the Base Functions of an operating system - the File System, the I/O aspects? If the compression algorithm is so great, why not make it the standard for all Read/Writes?”

But having the storage unit’s controller perform this is interesting, although the prospect of doing it “after the fact, as the need arises” is a bit disconcerting. “Why not do it for all time? From the first byte forward? Er, the first THREE bytes forward?”

And cynically, I see we’re once again treated to an “up to three times” claim when this one test only doubles it. Yawn… I can hardly wait until SOME marketing hype actually delivers on its promise. I’d love to see Marketing use “at least twice” instead and, for once, give consumers MORE than the marketing hype claimed.


#7

I have an image file used for pre-loading my RAMDisk. The loaded RAMDisk is 3 GigaBytes and the file only uses 12 KiloBytes as an NTFS-compressed file.

So they could have claimed “up to 250,000 times compression”! :smiley:
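
For what it’s worth, the arithmetic roughly checks out (assuming binary units):

```python
# Quick check of the "up to 250,000 times" figure above, assuming binary
# units: a 3 GiB RAM disk image stored as a 12 KiB NTFS-compressed file.
original_bytes = 3 * 1024**3      # 3 GiB
compressed_bytes = 12 * 1024      # 12 KiB
print(original_bytes // compressed_bytes)   # 262144, i.e. roughly 250,000x
```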


#8

I would say another difference is that the compression is done entirely by the SSD controller, so there’s no system overhead compared to Stacker or DriveSpace. Like willk mentioned, I can imagine that using this on compressible data, e.g. lots of text (I don’t know if databases compress data natively), can save a lot of space with no performance hit on the system. The OS doesn’t even know the data is compressed. So I’d say this isn’t without benefits, especially since the price per GB of SSDs is still relatively high…
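
A quick toy example of the kind of difference I mean, with zlib standing in for the controller’s proprietary compressor (which this obviously doesn’t model):

```python
# Plain text compresses a lot; data that is already compressed (video,
# archives, JPEGs) gains almost nothing. zlib is only a stand-in here.
import zlib

text = b"INSERT INTO log (ts, msg) VALUES (1, 'hello world');\n" * 2000
already_compressed = zlib.compress(text)  # stands in for video/JPEG/etc.

def ratio(data):
    return len(data) / len(zlib.compress(data))

print(f"plain text         : {ratio(text):5.1f}x")
print(f"already compressed : {ratio(already_compressed):5.1f}x")
```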


#9

The difference is that 22/7 (3.142857142857142857142857142857142857142857142857…) is compressible, while pi (3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348…) isn’t. Are you using this drive to edit text (highly compressible) or video (likely already very well compressed)?
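
A toy demonstration of that point, with random digits standing in for pi (an assumption just to keep the example self-contained) and zlib standing in for whatever compressor the controller actually uses:

```python
# The repeating digits of 22/7 compress dramatically; digits with no
# repeating pattern barely do.
import random
import zlib

N = 10_000
twenty_two_sevenths = ("142857" * (N // 6 + 1))[:N].encode()  # 22/7 = 3.(142857)
random.seed(0)
pi_like = "".join(random.choice("0123456789") for _ in range(N)).encode()

print("22/7-style digits:", N, "->", len(zlib.compress(twenty_two_sevenths)), "bytes")
print("pi-style digits  :", N, "->", len(zlib.compress(pi_like)), "bytes")
```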


#10

I suppose it’s fine, as long as we don’t hear about LSI taking lessons from Xerox on compression algorithms…


#11

@willk
Whether in enterprise or private use, having a drive with varying capacity is a huge problem, especially if you have a multitude of files with wildly varying compressibility. Just try to imagine what would happen in a large datacenter as its total capacity fluctuated randomly.
If you use them in a very specific environment where the compressibility of the stored data is more or less constant, then this scheme could be useful (if implemented without all the problems that software compression tools gave us 20 years ago).