OCZ has been teasing the Vector 180 for quite some time now. The first hint of the drive came over nine months ago at Computex 2014, where OCZ displayed a Vector SSD with power loss protection, but the concept of 'full power loss protection for the enterprise segment' as it existed back then never made it to market. Instead, OCZ adapted part of the concept for its new flagship client drive, the Vector 180.

OCZ calls the power loss protection feature in the Vector 180 'Power Failure Management Plus', or PFM+ for short. For cost reasons, OCZ didn't go with the full power loss protection found in enterprise SSDs, so PFM+ is limited to protecting data at rest. In other words, PFM+ protects data that has already been written to the NAND, but any user data still sitting in the DRAM buffer waiting to be written will be lost in case of a sudden power loss.
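The same distinction exists at the application level: anything software considers durable has to be explicitly flushed out of volatile buffers before it can survive a power cut. A minimal illustration with ordinary Python file I/O (nothing OCZ-specific; the filename is made up):

```python
import os
import tempfile

# Data sitting in volatile buffers -- Python's buffer, the OS page cache,
# or (analogously) an SSD's DRAM write cache -- is not durable until it
# is explicitly flushed toward stable media.
path = os.path.join(tempfile.mkdtemp(), "journal.log")

with open(path, "wb") as f:
    f.write(b"record-1\n")   # may still live only in volatile buffers
    f.flush()                # pushes Python's buffer down to the OS
    os.fsync(f.fileno())     # asks the OS/drive to commit to stable media

# Only after fsync() returns can the application treat the record as
# surviving a sudden power loss (assuming the drive honors the flush).
print(os.path.getsize(path))  # → 9
```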

The purpose of PFM+ is to protect the mapping table and reduce the risk of bricking due to a sudden power loss. Since the mapping table is stored in DRAM for faster access, all SSDs without some form of power loss protection are inherently vulnerable to mapping table corruption if power is suddenly cut. In its other SSDs, OCZ protects the mapping table by frequently flushing it from DRAM to NAND, but at higher capacities (like 960GB) there is more metadata involved and thus more data at risk, which is why OCZ is introducing PFM+ with the Vector 180.
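To make the mechanism concrete, here is a deliberately toy sketch of the idea (hypothetical; not OCZ's actual firmware logic): a DRAM-resident logical-to-physical mapping table with periodic snapshots to NAND. Only the mappings updated since the last snapshot are at risk on power loss, so a larger drive with more in-flight metadata has a larger exposure window:

```python
# Hypothetical, highly simplified model of an SSD's mapping table kept
# in DRAM, with periodic snapshots persisted to NAND.
class SimpleFTL:
    def __init__(self, flush_interval=4):
        self.table = {}                    # DRAM copy: logical -> physical page
        self.snapshot = {}                 # last copy persisted to NAND
        self.dirty = 0
        self.flush_interval = flush_interval

    def write(self, logical, physical):
        self.table[logical] = physical
        self.dirty += 1
        if self.dirty >= self.flush_interval:
            self.snapshot = dict(self.table)   # flush table: DRAM -> NAND
            self.dirty = 0

    def lost_on_power_cut(self):
        # Mappings present in DRAM but absent from the NAND snapshot
        return {k: v for k, v in self.table.items()
                if self.snapshot.get(k) != v}

ftl = SimpleFTL()
for page in range(6):
    ftl.write(page, 100 + page)
print(sorted(ftl.lost_on_power_cut()))  # → [4, 5] (not yet flushed)
```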

That said, while drive bricking due to mapping table corruption has always been a concern, I don't think it has been significant enough to warrant physical power loss protection in all client SSDs. It makes sense for the Vector 180 given its high-end focus, as professional users are less tolerant of downtime, and it also grants OCZ some differentiation in the highly competitive client market.

Aside from PFM+, the other new thing OCZ is bringing to the market with the Vector 180 is a 960GB model. The higher capacity is enabled by the use of 128Gbit NAND, whereas in the past OCZ has only used 64Gbit dice in its products. Toshiba's switch to the 128Gbit die seems to have been rather slow, as I have not seen many products with 128Gbit Toshiba NAND. Perhaps there have been yield issues, or maybe Toshiba's partners are simply more willing to use the 64Gbit die for performance reasons (a higher capacity die always costs some performance due to reduced parallelism).
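The parallelism trade-off is easy to quantify with back-of-envelope arithmetic: doubling the die density halves the number of dice at a given capacity, which leaves the controller fewer targets to interleave program operations across. At the same time, a 64Gbit-only 960GB drive would need 120 dice, which helps explain why the big model moves to 128Gbit:

```python
# Back-of-envelope die counts at a given capacity for 64Gbit vs 128Gbit
# dice (decimal GB, 1GB = 8Gbit; rounding ignored for simplicity).
GBIT_PER_GB = 8

def die_count(capacity_gb, die_gbit):
    return capacity_gb * GBIT_PER_GB // die_gbit

for die_gbit in (64, 128):
    print(f"{die_gbit}Gbit die: "
          f"{die_count(240, die_gbit)} dice at 240GB, "
          f"{die_count(960, die_gbit)} dice at 960GB")
# 64Gbit die: 30 dice at 240GB, 120 dice at 960GB
# 128Gbit die: 15 dice at 240GB, 60 dice at 960GB
```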

OCZ Vector 180 Specifications

                               120GB      240GB      480GB      960GB
Controller                     OCZ Barefoot 3 M00
NAND                           Toshiba A19nm MLC
NAND Density                   64Gbit per die (128Gbit per die for the 960GB model)
DRAM Cache                     512MB (1GB for the 960GB model)
Sequential Read                550MB/s    550MB/s    550MB/s    550MB/s
Sequential Write               450MB/s    530MB/s    530MB/s    530MB/s
4KB Random Read                85K IOPS   95K IOPS   100K IOPS  100K IOPS
4KB Random Write               90K IOPS   90K IOPS   95K IOPS   95K IOPS
Steady-State 4KB Random Write  12K IOPS   20K IOPS   23K IOPS   20K IOPS
Idle Power                     0.85W
Max Power                      3.7W
Encryption                     AES-256
Endurance                      50GB/day for five years
Warranty                       Five years
MSRP                           $90        $150       $275       $500
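A couple of quick figures fall out of the table above: cost per gigabyte at MSRP, and the total writes implied by the 50GB/day endurance rating over the five-year warranty:

```python
# Derived figures from the spec table: $/GB at MSRP and total rated
# endurance implied by 50GB/day over the five-year warranty.
msrp = {120: 90, 240: 150, 480: 275, 960: 500}

for capacity_gb, price in msrp.items():
    print(f"{capacity_gb}GB: ${price / capacity_gb:.3f}/GB")
# 120GB: $0.750/GB ... 960GB: $0.521/GB

total_writes_tb = 50 * 365 * 5 / 1000   # GB/day * days, converted to TB
print(f"Rated endurance: ~{total_writes_tb:.0f}TB of total writes")
# Rated endurance: ~91TB of total writes
```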

The retail package includes a 3.5" desktop adapter and a license for Acronis True Image HD 2013 cloning software. Like some of OCZ's recent SSDs, the Vector 180 includes a 5-year ShieldPlus Warranty.

OCZ has two bins of the Barefoot 3 controller, and the Vector 180 naturally uses the faster M00 bin, which runs at 397MHz (whereas the M10, used in the ARC 100 and Vertex 460/460A, is clocked at 352MHz).

OCZ's other SSDs have already made the switch to Toshiba's latest A19nm MLC, and with the Vector 180 the Vector series is the last to make that jump. Given that the Vector lineup is OCZ's SATA 6Gbps flagship, the timing makes sense: NAND endurance and performance tend to improve as a process matures.

The Vector 180 review is the second based on our new 2015 SSD Suite, and I suggest reading the introduction article (i.e. the Samsung SM951 review) for the full details. Due to several NDAs and travel, I unfortunately don't have many comparison drives yet, but I'm running tests non-stop to add more for more accurate conclusions.

AnandTech 2015 SSD Test System
CPU Intel Core i7-4770K running at 3.5GHz (Turbo & EIST enabled, C-states disabled)
Motherboard ASUS Z97 Deluxe (BIOS 2205)
Chipset Intel Z97
Chipset Drivers Intel 10.0.24 + Intel RST 13.2.4.1000
Memory Corsair Vengeance DDR3-1866 2x8GB (9-10-9-27 2T)
Graphics Intel HD Graphics 4600
Graphics Drivers 15.33.8.64.3345
Desktop Resolution 1920 x 1080
OS Windows 8.1 x64

Comments

  • nathanddrews - Tuesday, March 24, 2015 - link

This exactly. LOL
  • Samus - Wednesday, March 25, 2015 - link

Isn't it a crime to put Samsung and support in the same sentence? That company's Achilles heel is its complete lack of support. Look at all the people with Galaxy S3s and smart TVs that were left out to dry the moment next-gen models came out. At the polar opposite end of the spectrum is Apple, which still supports the nearly four-year-old iPhone 4S. I'm no Apple fan, but that is commendable and something all companies should pay attention to. Customer support pays off.
  • Oxford Guy - Wednesday, March 25, 2015 - link

Apple did a shit job with the white Core Duo iMacs, which all develop bad pixel lines. We had fourteen in a lab and all of them developed the problem. Apple also dropped the ball on people with the 8600 GT and similar NVIDIA GPUs in their MacBook Pros by refusing to replace the defective GPUs with anything other than new defective GPUs. Both, as far as I know, led to class-action lawsuits.
  • Oxford Guy - Wednesday, March 25, 2015 - link

I forgot to mention that not only did Apple not actually fix the problem with those bad GPUs, customers also had to jump through a bunch of hoops, like bringing their machines to an Apple Store so someone there could decide whether they qualified for a replacement (defective) GPU.
  • matt.vanmater - Tuesday, March 24, 2015 - link

    I am curious, does the drive return a write IO as complete as soon as it is stored in the DRAM?

    If so, this drive could be fantastic to use as a ZFS ZIL.

    Think of it this way: you partition it so the size does not exceed the DRAM size (e.g. 512MB), and use that partition as ZIL. The small partition size guarantees that any writes to the drive fit in DRAM, and the PFM guarantees there is no loss. This is similar in concept to short-stroking hard drives with a spinning platter.

    For those of you that don't know, ZFS performance is significantly enhanced by the existence of a ZIL device with very low latency (and DRAM on board this drive should fit that bill). A fast ZIL is particularly important for people who use NFS as a datastore for VMWare. This is because VMWare forces NFS to Sync write IOs, even if your ZFS config is to not require sync. This device may or may not perform as well as a DDRDRIVE (ddrdrive.com) but it comes in at about 1/20th the price so it is a very promising idea!

    ocztosh -- has your team considered the use of this device as a ZFS array ZIL device like I describe above?
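[Editor's note: whether a drive acknowledges synchronous writes at DRAM speed, as asked above, can be probed empirically rather than assumed. A rough sketch, assuming Linux and a file placed on the device under test; the filename and iteration count are arbitrary:]

```python
import os
import statistics
import tempfile
import time

def sync_write_latency(path, iterations=100, block=b"\0" * 4096):
    """Median latency of O_DSYNC writes, each of which returns only
    once the OS considers the data stable on the device."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_DSYNC, 0o600)
    times = []
    try:
        for _ in range(iterations):
            t0 = time.perf_counter()
            os.write(fd, block)
            times.append(time.perf_counter() - t0)
    finally:
        os.close(fd)
        os.unlink(path)
    return statistics.median(times)

# Medians well below typical NAND program times (hundreds of
# microseconds) would hint at DRAM-speed acknowledgement.
probe = os.path.join(tempfile.gettempdir(), "probe.bin")
print(f"median sync write: {sync_write_latency(probe) * 1e6:.0f} us")
```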
  • Kristian Vättö - Tuesday, March 24, 2015 - link

    PFM+ is limited to protecting the NAND mapping table, so any user data will still be lost in case of a sudden power loss. Hence the Vector 180 isn't really suitable for the scenario you described.
  • matt.vanmater - Wednesday, March 25, 2015 - link

    OK, good to know. To be honest though, what matters more in this scenario (for me) is whether the device returns a write IO as successful as soon as it is stored in DRAM, or waits until it is stored in flash.

    As nils_ mentions below, a UPS is another way of partially mitigating a power failure. In my case, the battery backup is a nice-to-have rather than a must-have.
  • matt.vanmater - Tuesday, March 24, 2015 - link

    One minor addition... OCZ was clearly thinking about ZFS ZIL devices when they announced prototype devices called "Aeon" about 2 years ago. They even blogged about this use case:
    http://eblog.ocz.com/ssd-powered-clouds-times-chan...

    Unfortunately OCZ never brought these drives to market (I wish they had!), so we're stuck waiting for a consumer DRAM device that isn't 10+ year-old technology or $2k+ in price.
  • nils_ - Wednesday, March 25, 2015 - link

    Something like the PMC Flashtec devices? Those are boards with 4-16GiB of DRAM backed by the same amount of flash and capacitors, with an NVMe interface. If the system loses power, the DRAM is flushed to flash and restored when power returns. This is great for things like the ZIL, journals, the doublewrite buffer (as in MySQL/MariaDB), Ceph journals, etc.

    And before it comes up, a UPS can fail too (I've seen it happen more often than I'd like to count).
  • matt.vanmater - Wednesday, March 25, 2015 - link

    I saw those PMC Flashtec devices as well and they look promising, but I don't see any for sale yet. Hopefully they don't become vaporware like the OCZ Aeon drives.

    Also, I personally prefer a SATA III or SAS interface over PCIe, because (in theory) a SATA/SAS device will work in almost any motherboard on any operating system without special drivers, whereas PCIe devices need a device driver for each OS. Obviously, waiting for drivers to be written limits which systems a device can be used in.

    True, PCIe will definitely have greater throughput than SATA/SAS, but the ZFS ZIL use case needs very low latency, not necessarily high throughput. I haven't seen any data indicating that PCIe is any better or worse than SATA/SAS on IO latency.
