A week after I previewed OCZ’s IBIS and the HSDL interface, SandForce revealed the specs of its next-generation enterprise SSD controller. The specs for the SF-2000 series call for up to 500MB/s sequential reads and writes, nearly saturating the newly introduced 6Gbps SATA bus. It should be no surprise that OCZ is very interested in moving away from SATA.
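Those headline numbers are easy to sanity-check: 6Gbps SATA uses 8b/10b line coding, so only 80% of the raw bit rate carries data. A quick back-of-the-envelope calculation (illustrative Python, not from SandForce's spec sheet):

```python
# Effective throughput of a 6Gbps SATA link with 8b/10b encoding:
# every 10 bits on the wire carry 8 bits of payload, so 80% is usable.
line_rate_gbps = 6.0
encoding_efficiency = 8 / 10                                   # 8b/10b coding
usable_mbps = line_rate_gbps * 1000 * encoding_efficiency / 8  # bits -> bytes

print(f"Usable 6Gbps SATA bandwidth: {usable_mbps:.0f} MB/s")        # 600 MB/s
print(f"SF-2000 at 500 MB/s uses {500 / usable_mbps:.0%} of the link")
```

At 500MB/s the SF-2000 would occupy roughly 83% of what a 6Gbps link can actually deliver, which is why "nearly saturating" is a fair description.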

We met OCZ’s first PCIe-based SSD two years ago with the Indilinx-based Z-Drive. Take four Indilinx Barefoot controllers, RAID them together on a PCIe card and you’ve got a Z-Drive. The first SandForce-based PCIe solution actually took a step backwards: the OCZ RevoDrive only used two SandForce SF-1200 controllers.

Performance was of course much better than the old Z-Drive. SandForce has all but put Indilinx out of our minds (and systems). But the recently announced IBIS and the suspiciously unpopulated connector on the original RevoDrive made it clear that there was room for more performance.

Meet the RevoDrive x2:


Otherwise identical to the original RevoDrive, the x2 adds a second PCB complete with two more SF-1200 controllers. With a total of four SF-1200s on board, running in RAID-0, you should get IBIS-like performance without the HSDL interface.
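RAID-0 simply stripes logical blocks round-robin across the four controllers, which is where the roughly 4x sequential bandwidth comes from. A minimal sketch of the addressing (the 64KB stripe size and the function names are assumptions for illustration, not OCZ's actual firmware parameters):

```python
STRIPE_SIZE = 64 * 1024   # assumed 64KB stripe; the real value isn't disclosed
NUM_CONTROLLERS = 4       # four SF-1200s in RAID-0

def locate(byte_offset):
    """Map a logical byte offset to (controller index, offset on that member)."""
    stripe = byte_offset // STRIPE_SIZE
    controller = stripe % NUM_CONTROLLERS
    # Offset on the member drive: full passes completed plus position in stripe
    local = (stripe // NUM_CONTROLLERS) * STRIPE_SIZE + byte_offset % STRIPE_SIZE
    return controller, local

# A large sequential read walks all four controllers in turn,
# so each one only has to supply a quarter of the data.
for off in range(0, 4 * STRIPE_SIZE, STRIPE_SIZE):
    print(off, locate(off))
```

Random reads smaller than a stripe hit only one controller at a time, which is why striping helps sequential transfers far more than small random I/O.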


The architecture remains unchanged. To save on costs, OCZ uses a PCI-X based RAID controller: the Silicon Image 3124. The 3124 has four independent SATA ports, each of which connects to an SF-1200 controller.


Between the Sil3124 and the PCIe x4 interface is a Pericom PCI-X-to-PCIe bridge, which converts the parallel PCI-X signaling into serial PCIe. The Sil3124 can deliver 1GB/s of bandwidth to the Pericom bridge, as can the four PCIe lanes (1GB/s in each direction), so there are no interface bottlenecks here. A quartet of SF-1200 controllers can’t realistically push more than 1GB/s of data.

As with all RAID-enabled solutions, there’s no TRIM support, but you do get idle garbage collection.

There’s no performance advantage over running four of your own SF-1200 based SSDs in RAID-0. The RevoDrive x2 is pretty much a four-drive SF-1200 SSD on a stick for those who want simplicity.

The previous RevoDrive was supposed to be cost-competitive with a two-drive RAID array. Today, looking at pricing, a 240GB RevoDrive sells for $519 while a pair of 120GB Vertex 2s will set you back $480. You pay a premium for the simplicity, but performance is identical to rolling your own SSD RAID setup.
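Worked out per gigabyte, the premium is small. A quick comparison using the prices above (illustrative only; street prices moved around constantly in this era):

```python
# Cost-per-gigabyte comparison using the article's figures
revodrive_price, revodrive_gb = 519, 240          # 240GB RevoDrive
vertex2_pair_price, vertex2_pair_gb = 480, 240    # two 120GB Vertex 2 drives

revodrive_per_gb = revodrive_price / revodrive_gb
diy_per_gb = vertex2_pair_price / vertex2_pair_gb
premium = revodrive_price / vertex2_pair_price - 1

print(f"RevoDrive: ${revodrive_per_gb:.2f}/GB, DIY RAID: ${diy_per_gb:.2f}/GB, "
      f"premium: {premium:.1%}")   # about an 8% premium for the single-card setup
```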

The Test & Desktop Performance
46 Comments

  • Chloiber - Thursday, November 4, 2010 - link

    IMHO the RevoDrives are useless products. You gain nothing except high sequential bandwidth, which most users never need.
    In REAL world applications, the CPU limits anyway in high IOPS scenarios. You won't see a big gain (if any) if you move from 1 Vertex 2 to 4 Vertex 2 in typical situations.
    Reply
  • jonup - Thursday, November 4, 2010 - link

    Anand, this is not directly related to the article, but when do you expect SSD prices to take a big hit? With the next generation of drives around the corner and talk of increased flash manufacturing capacity, do you think it is reasonable to buy an SSD (regardless of the interface) now simply from a $/GB perspective?

    Thanks,
    J
    Reply
  • theagentsmith - Thursday, November 4, 2010 - link

    Hey Anand
    could you shed light on an annoying bug that's plaguing several but not all owners of SandForce based SSDs?
    It happens when there is not a lot of I/O activity, like when idle or under light usage. The drive disappears and you see all the open programs failing one at a time, until a couple of minutes later Windows gives up with a BSOD. As the drive disappeared, the kernel can't even write a memory dump, and if you press reset the drive isn't recognized by the BIOS; you have to cycle power to see it working again.
    There is also a resume-from-sleep bug, but it's tolerable as you can use hibernation instead of sleep.
    Here's a topic on the Corsair forums about this; they just released a 2.0 firmware but there is no changelog and of course no word from SandForce.
    http://forum.corsair.com/forums/showthread.php?t=8...
    Reply
  • mark53916 - Thursday, November 4, 2010 - link

    How do this and other SSDs handle the container files of encrypted and other virtual disks?

    Typically, for best performance the container files should be stored "densely" on the underlying device, but the space is always in use.
    Reply
  • Shadowmaster625 - Thursday, November 4, 2010 - link

    It is not on companies like OCZ to release a faster SSD controller. As I've been saying for ages now, it is up to AMD/Intel to release an SSD controller integrated directly into the CPU. It makes as much sense as having an integrated memory controller. It's actually pretty much the same exact thing, except the memory is nonvolatile. It should be in the same form factor to reduce costs (keyed differently of course), i.e. a 64GB SSD DIMM would cost half of a 64GB SSD. Perhaps even less.
    Reply
  • FunBunny2 - Thursday, November 4, 2010 - link

    The maths of SSD controllers aren't yet settled, which is why SandForce is different from Intel, which is different from Indilinx, and so forth. If the SSD controller is in the CPU, you're stuck with it unless you buy a new CPU. Hmmm. Frequent, planned obsolescence; maybe Intel will do it, then.
    Reply
  • larijoona - Thursday, November 4, 2010 - link

    Hello,

    I'm also interested in seeing some benchmarks of virtual PC performance run from an SSD!
    Reply
  • jhbodle - Thursday, November 4, 2010 - link

    I am not aware of any motherboard-integrated RAID controller that can handle the bandwidth of these four SandForce SSDs. I use 3 X25-E's in RAID-0 on the ICH10R, generally regarded as the best integrated RAID controller, and it maxes out at 660MB/sec.

    So I like this card and am pleased that companies such as OCZ are working on this kind of thing!
    Reply
  • Chloiber - Thursday, November 4, 2010 - link

    Yep. ICH10R gets to 500-650MB/s; never seen more.
    It may be that the ICH10R is connected via a 2GB/s bus or whatever, but that's theory. Or the controller itself cannot handle more (which is a reasonable explanation; if you look at controllers from Areca, they also max out at 1-2GB/s, and they are way more expensive).
    Reply
  • sonofgodfrey - Thursday, November 4, 2010 - link

    The LSI SAS controllers (which are on some server boards) can easily hit 1GB/s with 4 SSDs. Did this with the first generation Intel X25-M drives.
    Reply
