Update: Random write performance of the drive we reviewed may change with future firmware updates.

Corsair’s late entry into the SSD market meant that it missed the JMicron mess of the early days, but it also meant that Corsair missed the bulk of the early Indilinx boat. Not interested in making the same mistake twice, Corsair took the same risk as many other SSD makers and got in bed with a little company called SandForce.

Widely believed to be the creator of the technology behind Seagate’s first SSD, SandForce has been popping up all over the place lately. We first encountered the company late last year with a preview of OCZ’s Vertex 2 Pro. SandForce's technology seemed promising.

The problem of maintaining SSD performance is a lot like keeping a room tidy and clean. If you make sure to put things in the right place the first time and don’t let dirt accumulate, you’ll end up with an organized, pristine-looking room. However, if you just throw your stuff around and let stains go untouched, you’ll spend a lot more time looking for things and probably end up ruining the place.

The same holds true for SSDs. If the controller doesn’t place data properly, it’ll take longer to place new data later on. And if the controller doesn’t wear level properly, you’ll end up reducing the life of the drive.

I’ve explained the how behind all of this countless times before, so I’ll spare you the details here. Needless to say, it’s a juggling act. One that requires a fast enough controller, a large amount of fast storage (whether it is on-die cache or off-chip DRAM) and a good algorithm for managing all the data that gets thrown at it.

At a high level, Crucial/Micron, Indilinx and Intel take a relatively similar approach to the problem. They do the best with the data they’re given. Some do better than others, but they ultimately take the data you write to the drive and try to make the best decisions as to where to put it.

SandForce takes a different approach. Instead of worrying about where to place a lot of data, it looks at ways to reduce the amount of data being written. Using a combination of techniques akin to lossless data compression and data deduplication, SandForce’s controllers attempt to write less to the NAND than their competitors. By writing less, the amount of management and juggling you have to do goes down tremendously. SandForce calls this its DuraWrite technology.
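To make the idea concrete, here is a toy sketch of how compressing and deduplicating 4KB blocks before they reach the NAND can shrink the amount of data actually written. This is purely illustrative and is not SandForce's actual DuraWrite implementation, which is proprietary; the block size, hashing and zlib compression are stand-ins of my own choosing.

```python
# Toy sketch only: compress and deduplicate 4KB blocks before "writing" them,
# then compare how many bytes the host wrote vs. how many the NAND would see.
import hashlib
import zlib

BLOCK_SIZE = 4096

def bytes_written_to_nand(blocks):
    """Return (host_bytes, nand_bytes) for a sequence of 4KB logical blocks."""
    seen = set()
    host_bytes = 0
    nand_bytes = 0
    for block in blocks:
        host_bytes += len(block)
        digest = hashlib.sha256(block).digest()
        if digest in seen:
            continue                                # duplicate block: nothing new to write
        seen.add(digest)
        nand_bytes += len(zlib.compress(block))     # stand-in for lossless compression
    return host_bytes, nand_bytes

# Ten identical, highly compressible blocks: the NAND sees only a tiny fraction of them.
host, nand = bytes_written_to_nand([b"A" * BLOCK_SIZE] * 10)
print(f"host wrote {host} bytes, NAND saw roughly {nand} bytes")
```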

DuraWrite isn’t perfect. If you write a lot of highly compressed or purely random data, the algorithms won’t be able to do much to reduce the amount of data you write. For most desktop uses, however, this shouldn’t be a problem.
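You can see the worst case for yourself with any lossless compressor: random (or already-compressed) data simply doesn't shrink. A quick zlib check, again purely as an illustration rather than a statement about SandForce's own algorithms:

```python
# Illustration only: lossless compression gets nowhere with random data,
# which is exactly the kind of workload that defeats write-reduction schemes.
import os
import zlib

random_block = os.urandom(4096)                 # stand-in for encrypted/compressed data
text_block = b"the quick brown fox " * 205      # ~4KB of highly repetitive data

print(len(zlib.compress(random_block)))         # ~4KB or slightly larger: no savings
print(len(zlib.compress(text_block)))           # a few dozen bytes: big savings
```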

Despite the obvious Achilles’ heel, SandForce’s technology was originally designed for use in the enterprise market. This lends credibility to the theory that SandForce was Seagate’s partner of choice for Pulsar. With enterprise roots, SandForce’s controllers and firmware are designed to support larger-than-normal amounts of spare area. As you may remember from our earlier articles, there’s a correlation between the amount of spare area you give a dynamic controller and overall performance. You obviously lose usable capacity, but it helps keep performance high. SandForce indicates that eventually we’ll see cheaper consumer drives with less NAND set aside as spare area, but for now a 128GB SandForce drive only gives you around 93GB of actual storage space.
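As a rough back-of-the-envelope check on how much capacity that extra spare area costs, the math is simple division (all figures in decimal GB as the drives are marketed; the 119.2GB figure is roughly what a typical consumer drive exposes from the same 128GB of NAND):

```python
# Back-of-the-envelope spare area math, illustrative only.
def spare_fraction(nand_gb, user_gb):
    return (nand_gb - user_gb) / nand_gb

print(f"SandForce drive:        {spare_fraction(128, 93.1):.1%} spare")   # ~27%
print(f"Typical consumer drive: {spare_fraction(128, 119.2):.1%} spare")  # ~7%
```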

Introducing the SF-1200

The long-winded recap brings us to our new friend. The Vertex 2 Pro I previewed last year used a full-fledged SF-1500 implementation, complete with a ridiculously expensive supercap on board. SandForce indicated that the SF-1200 would be more reasonably priced, at the expense of a performance hit. In between the two was what we got with OCZ’s Vertex Limited Edition: OCZ scored a limited supply of early controllers that didn’t have the full SF-1500 feature set, but were supposedly better than the SF-1200.

Today we have Corsair’s Force drive, its new performance flagship based on the SF-1200. Here’s what SandForce lists as the differences between the SF-1500 and SF-1200:

SandForce Controller Comparison
                                           SF-1200                                  SF-1500
Flash Memory Support                       MLC                                      MLC or SLC
Power Consumption                          550 mW (typical)                         950 mW (typical)
Sequential Read/Write Performance (128KB)  260 MB/s                                 260 MB/s
Random Read/Write Performance (4K)         30K/10K IOPS                             30K/30K IOPS
Security                                   128-bit AES Data Encryption,             128-bit AES Data Encryption,
                                           Optional Disk Password                   User Selectable Encryption Key
Unrecoverable Read Errors                  Less than 1 sector per 10^16 bits read   Less than 1 sector per 10^17 bits read
MTTF                                       2,000,000 operating hours                10,000,000 operating hours
Reliability                                5 year customer life cycle               5 year enterprise life cycle

The Mean Time To Failure numbers are absurd. We’re talking about the difference between 228 years and over 1,100 years. I’d say any number that outlasts the potential mean time to failure of our current society is pretty worthless. Both the SF-1200 and SF-1500 are rated for 5-year useful lifespans; the difference is that SandForce says the SF-1200 can last for 5 years under a "customer" workload vs. an enterprise workload for the SF-1500. Translation? The SF-1500 can handle workloads with more random writes for longer.
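For reference, those year figures come straight from converting the rated operating hours, nothing more:

```python
# Quick sanity check on the MTTF figures quoted above.
HOURS_PER_YEAR = 24 * 365
print(2_000_000 / HOURS_PER_YEAR)    # ~228 years   (SF-1200)
print(10_000_000 / HOURS_PER_YEAR)   # ~1,141 years (SF-1500)
```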

The SF-1500 also appears to be less error prone, but that’s difficult to quantify in terms of real-world reliability. The chip sizes are identical, although the SF-1500 draws considerably more power. If I had to guess, I’d say the two chips are probably the same, with the differences amounting mostly to firmware, binning and perhaps fusing off some internal blocks. Maintaining multiple die masks is an expensive task, not something a relative newcomer would want to do.


Note the lack of any external DRAM. Writing less means tracking less, which means no external DRAM is necessary.

Regardless of the difference, the SF-1200 is what Corsair settled on for the Force. Designed to be a high-end consumer drive, the Force carries a high-end price. Despite its 100GB capacity there’s actually 128GB of NAND on the drive; the extra is simply used as spare area for block recycling by the controller. If we look at cost per actual GB on the drive, the Force doesn’t look half bad:

SSD Pricing Comparison
Drive                   NAND Capacity   User Capacity   Drive Cost   Cost per GB of NAND   Cost per Usable GB
Corsair Force           128GB           93.1GB          $410         $3.203                $4.403
Corsair Nova            128GB           119.2GB         $369         $2.882                $3.096
Crucial RealSSD C300    256GB           238.4GB         $680         $2.656                $2.852
Intel X25-M G2          160GB           149.0GB         $489         $3.056                $3.282
OCZ Vertex LE           128GB           93.1GB          $394         $3.078                $4.232

But looking at cost per user-addressable GB isn’t quite as pretty. It’s a full $1.12 more per GB than Intel’s X25-M G2. It’s also a bit more expensive than OCZ’s Vertex LE, although things could change once Corsair starts shipping more of these drives.
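The per-GB figures in the table above are simply the listed street price divided by each capacity; a quick sketch for anyone who wants to rerun the numbers as prices move around:

```python
# Reproduces the cost-per-GB figures in the pricing table above.
# Prices are the ones quoted in this review and will obviously change over time.
drives = {
    "Corsair Force":        (128, 93.1, 410),
    "Corsair Nova":         (128, 119.2, 369),
    "Crucial RealSSD C300": (256, 238.4, 680),
    "Intel X25-M G2":       (160, 149.0, 489),
    "OCZ Vertex LE":        (128, 93.1, 394),
}
for name, (nand_gb, user_gb, price) in drives.items():
    print(f"{name:22s}  ${price / nand_gb:.3f}/GB of NAND   ${price / user_gb:.3f}/usable GB")
```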

Comments

  • Anand Lal Shimpi - Wednesday, April 14, 2010 - link

    That I'm not sure of; the 2008 Iometer build is supposed to use a fairly real-world-inspired data set (Intel helped develop the random algorithm apparently) and the performance appears to be reflected in our real world tests (both PCMark Vantage and our Storage Bench).

    That being said, SandForce is apparently working on their own build of Iometer that lets you select from all different types of source data to really stress the engine.

    Also keep in mind that the technology at work here is most likely more than just compression/data deduplication.

    Take care,
    Anand
  • keemik - Wednesday, April 14, 2010 - link

    Call me anal, but I am still not happy with the response ;)
    Maybe the first 4k block is filled with random data, but then that block is used over and over again.

    That random read/write performance is too good to be true.
  • Per Hansson - Wednesday, April 14, 2010 - link

    Just curious about the missing capacitor: will there not be a big risk of data loss in case of a power outage?

    Do you know what design changes were done to get rid of the capacitor? Were any additional components other than the capacitor removed?

    Because it can be bought in low quantities for a quite OK retail price of £16.50 here:
    http://www.tecategroup.com/ultracapacitors/product...
  • bluorb - Wednesday, April 14, 2010 - link

    A question: if the controller is using lossless compression in order to write less data, is it not possible to say that the drive work volume is determined by the type of information written to it?

    Example: if user x data can be routinely compressed at a 2 to 1 ratio then it can be said that for this user the drive work volume is 186GB and cost per GB is 2.2$.

    Am I on to something or completely off the track?
  • semo - Wednesday, April 14, 2010 - link

    this compression isn't detectable by the OS. As the name suggests (DuraWrite), it is there to reduce the wear on the drive, which can also give better performance but not extra capacity.
  • ptmixer - Wednesday, April 14, 2010 - link

    I'm also wondering about the capacity on these SandForce drives. It seems the actual capacity is variable depending on the type of data stored. If the drive has 128 GB of flash, 93.1 usable after spare area, then that must be the amount of compressed/thinned data you can store, so the amount of 'real' data should be much more, thereby helping the price/GB of the drive.

    For example, if the drive is partly used and your OS says it has 80 GB available, then you store 10 GB of compressible data on it, won't it then report that it perhaps still has 75 GB available (rather than 70 GB as on a normal drive)? Anand -- help us with our confusion!

    PS - thanks for all the great SSD articles! Could you also continue to speculate how well a drive will work on a non-TRIM-enabled system, like OS X, or as an ESXi datastore?
  • JarredWalton - Wednesday, April 14, 2010 - link

    I commented on this in the "This Just In" article, but to recap:

    In terms of pure area used, Corsair sets aside 27.3% of the available capacity. However, with DuraWrite (i.e. compression) they could actually have even more spare area than 35GiB. You're guaranteed 93GiB of storage capacity, and if the data happens to compress better than average you'll have more spare area left (and more performance) while with data that doesn't compress well (e.g. movies and JPG images) you'll get less spare area remaining.

    So even at 0% compression you'd still have at least 35GiB of spare and 93GiB of storage, but with an easily achievable 25% compression average you would have as much as ~58GiB of spare area (45% of the total capacity would be "spare"). If you get an even better 33% compression you'd have 66GiB of spare area (51% of total capacity), etc.
  • KaarlisK - Wednesday, April 14, 2010 - link

    Just resize the browser window.
    Margins won't help if you have a 1920x1080 screen anyway.
  • RaistlinZ - Wednesday, April 14, 2010 - link

    I don't see a reason to opt for this over the Crucial C300 drive, which performs better overall and is quite a bit cheaper per GB. Yes, these use less power, but I hardly see that as a determining factor for people running high-end CPUs and video cards anyway.

    If they can get the price down to $299 then I may give it a look. But $410 is just way too expensive considering the competition that's out there.
  • Chloiber - Wednesday, April 14, 2010 - link

    I did test it. If you create the test file, it's compressible to 0 percent of its original size.
    But if you write sequential or random data to the file, you can't compress it at all. So I think that Iometer uses random data for the tests. Of course this is a critical point when testing such drives and I am sure that Anand tested it too before doing the tests. I hope so at least ;)
