Performance Consistency

In our Intel SSD DC S3700 review I introduced a new method of characterizing performance: looking at the latency of individual operations over time. The S3700 promised a level of performance consistency that was unmatched in the industry, and as a result it needed some additional testing to show that. The reason we don't see consistent IO latency with SSDs is that all controllers inevitably have to do some amount of defragmentation or garbage collection in order to continue operating at high speeds. When and how an SSD decides to run its defrag and cleanup routines directly impacts the user experience. Frequent (borderline aggressive) cleanup generally results in more stable performance, while delaying cleanup can result in higher peak performance at the expense of much worse worst case performance. The graphs below tell us a lot about the architecture of these SSDs and how they handle internal defragmentation.

To generate the data below I took a freshly secure erased SSD and filled it with sequential data. This ensures that all user accessible LBAs have data associated with them. Next I kicked off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. I ran the test for just over half an hour, nowhere near as long as our steady state tests run, but enough to give me a good look at drive behavior once all spare area has been filled.
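For readers who want to reproduce something similar, both phases can be approximated with a tool like fio. The job file below is only a sketch of the workload described above, not the exact script used for the review; `/dev/sdX` is a placeholder for the drive under test, and fio with the Linux libaio engine is assumed:

```ini
; sketch of the consistency workload -- placeholder device, not the review's script
[global]
filename=/dev/sdX
direct=1
ioengine=libaio
iodepth=32

; phase 1: sequential fill so every user-accessible LBA holds data
[sequential-fill]
rw=write
bs=128k

; phase 2: 4KB random writes across all LBAs with incompressible data
[random-write]
stonewall
rw=randwrite
bs=4k
refill_buffers
time_based
runtime=2000
log_avg_msec=1000
write_iops_log=iops
```

The `refill_buffers` option keeps the write data incompressible, and `write_iops_log` with `log_avg_msec=1000` produces the one-sample-per-second IOPS log used for the scatter plots.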

I recorded instantaneous IOPS every second for the duration of the test. I then plotted IOPS vs. time and generated the scatter plots below. Each set of graphs features the same scale. The first two sets use a log scale for easy comparison, while the last set of graphs uses a linear scale that tops out at 50K IOPS for better visualization of differences between drives.
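Turning a per-second IOPS log into scatter plot points is mostly bookkeeping. A hypothetical sketch in Python, assuming records of the form fio's IOPS logs use (`time_ms, iops, direction, block_size`):

```python
def iops_series(log_lines):
    """Parse 'time_ms, iops, ...' records into (seconds, IOPS) points."""
    points = []
    for line in log_lines:
        fields = [f.strip() for f in line.split(",")]
        points.append((int(fields[0]) / 1000.0, int(fields[1])))
    return points

# Three one-second samples: high early performance, then the post-cliff dropoff
log = ["1000, 45210, 1, 4096",
       "2000, 44870, 1, 4096",
       "3000, 9120, 1, 4096"]
print(iops_series(log))  # [(1.0, 45210), (2.0, 44870), (3.0, 9120)]
```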

The high level testing methodology remains unchanged from our S3700 review. Unlike in previous reviews, however, I varied the percentage of the drive that I filled/tested depending on the amount of spare area I was trying to simulate. The buttons are labeled with the user capacity the SSD vendor would have advertised had it decided to use that specific amount of spare area. If you want to replicate this on your own, all you need to do is create a partition smaller than the total capacity of the drive and leave the remaining space unused to simulate a larger amount of spare area. The partitioning step isn't absolutely necessary in every case, but it's an easy way to make sure you never exceed your allocated spare area. It's a good idea to do this from the start (e.g. secure erase, partition, then install Windows), but if you are working backwards you can always create the spare area partition, format it to TRIM it, then delete the partition. Finally, this method of creating spare area works on the drives we've tested here, but not all controllers may behave the same way.
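The arithmetic behind the partition trick is trivial, but here it is as a concrete illustration. This is a hypothetical helper using one simple convention (reserving a fraction of the stock user capacity); vendors count spare area in different ways, so treat the numbers as illustrative:

```python
def partition_size_gb(user_capacity_gb, extra_spare_pct):
    """Partition size that leaves extra_spare_pct of the stock user
    capacity untouched as additional spare area."""
    return user_capacity_gb * (1 - extra_spare_pct / 100)

# e.g. reserving an extra 25% of a 240GB drive as spare area
print(partition_size_gb(240, 25))  # 180.0
```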

The first set of graphs shows the performance data over the entire 2000 second test period. In these charts you'll notice an early period of very high performance followed by a sharp dropoff. What you're seeing in that case is the drive allocating new blocks from its spare area, then eventually using up all free blocks and having to perform a read-modify-write for all subsequent writes (write amplification goes up, performance goes down).
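That read-modify-write penalty can be made concrete with a toy flash translation layer. The sketch below is purely illustrative: greedy victim selection, 64 pages per block and roughly 25% physical spare area are assumptions for the sake of the example, not how any of these controllers actually work. It shows write amplification climbing above 1x once the free block pool is exhausted:

```python
import random

PAGES_PER_BLOCK = 64
USER_BLOCKS, SPARE_BLOCKS = 100, 25          # ~25% physical spare area (assumption)

def write_amplification(host_writes=300_000, seed=0):
    """Toy FTL with greedy garbage collection. Returns NAND writes per host
    write for a sustained random overwrite workload after preconditioning."""
    rng = random.Random(seed)
    n_lpns = USER_BLOCKS * PAGES_PER_BLOCK   # logical 4KB pages exposed to the host
    where = {}                               # logical page -> block holding its data
    blocks = [set() for _ in range(USER_BLOCKS + SPARE_BLOCKS)]  # valid pages per block
    free = list(range(1, len(blocks)))       # erased blocks; block 0 starts open
    open_blk, used, nand = 0, 0, 0

    def place(lpn):
        nonlocal open_blk, used, nand
        if used == PAGES_PER_BLOCK:          # current write block is full
            if free:
                open_blk, used = free.pop(), 0
            else:
                # Garbage collect: erase the block with the fewest valid pages,
                # rewriting those survivors (the read-modify-write overhead).
                victim = min(range(len(blocks)), key=lambda b: len(blocks[b]))
                nand += len(blocks[victim])
                open_blk, used = victim, len(blocks[victim])
        if lpn in where:
            blocks[where[lpn]].discard(lpn)  # old copy becomes stale
        blocks[open_blk].add(lpn)
        where[lpn] = open_blk
        used += 1
        nand += 1

    for lpn in range(n_lpns):                # sequential fill: every LBA holds data
        place(lpn)
    nand = 0                                 # count only the random write phase
    for _ in range(host_writes):
        place(rng.randrange(n_lpns))
    return nand / host_writes

print(f"steady-state write amplification: {write_amplification():.2f}")
```

Shrinking the partition in this model (fewer `USER_BLOCKS` for the same total) leaves more spare blocks, lowers the number of valid pages each garbage collection has to relocate, and pulls write amplification back toward 1x, which is exactly the effect the spare area buttons below demonstrate.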

The second set of graphs zooms in to the beginning of steady state operation for the drive (t=1400s). The third set also looks at the beginning of steady state operation but on a linear performance scale. Click the buttons below each graph to switch source data.

[Interactive graph: 4KB random write IOPS vs. time, full 2000 second run (log scale). Drives: Corsair Neutron 240GB, Crucial M500 960GB, Samsung SSD 840 Pro 256GB, SanDisk Extreme II 480GB, Seagate 600 480GB. Spare area selectable; 25% shown.]

Um, hello, awesome? The SanDisk Extreme II is the first Marvell based consumer SSD to actually prioritize performance consistency. The Extreme II does significantly better than pretty much every other drive here, with the exception of Corsair's Neutron. Note that increasing the amount of spare area on the drive actually reduces IO consistency, at least during the short duration of this test, as SanDisk's firmware aggressively attempts to improve the overall performance of the drive. Either way, this is the first SSD from a big OEM supplier that actually delivers consistent performance in the worst case scenario.

[Interactive graph: 4KB random write IOPS vs. time from t=1400s (log scale). Drives: Corsair Neutron 240GB, Crucial M500 960GB, Samsung SSD 840 Pro 256GB, SanDisk Extreme II 480GB, Seagate 600 480GB. Spare area selectable; 25% shown.]


[Interactive graph: 4KB random write IOPS vs. time from t=1400s (linear scale, capped at 50K IOPS). Drives: Corsair Neutron 240GB, Crucial M500 960GB, Samsung SSD 840 Pro 256GB, SanDisk Extreme II 480GB, Seagate 600 480GB. Spare area selectable; 25% shown.]


Comments

  • klmccaughey - Wednesday, June 5, 2013 - link

    Hey, as one of these here "Coders" I can tell you my bread and butter is a ratio of 10:1 on thinking to coding ;) I suspect most programmers are similar.
  • tipoo - Monday, June 3, 2013 - link

    But in a sense Tukano is right: the SATA 3 standard can already be saturated by the fastest SSDs, so the connections between components are indeed the bottleneck. Most SSDs are still getting there, but the standard was saturated by the best almost as soon as it became widespread. They need a much bigger jump next time to leave some headroom.
  • A5 - Monday, June 3, 2013 - link

    The first round of SATA Express will give 16 Gbps for standard drives and up to 32 Gbps for mPCIe-style cards (used to be known as NGFF). I think we'll see a cool round of enthusiast drives once NGFF is finalized.
  • althaz - Tuesday, June 4, 2013 - link

    Storage is almost always the bottleneck. Faster storage = faster data moving around your PC's various subsystems. It's always better. You are certainly not likely to actually notice the incremental improvements from one drive to the next, but it's important that these improvements are made, because you sure as hell WILL notice upgrading from something 5-6 generations different.

    What causes your PC to boot in 30 seconds is a combination of a lot of things, but seeing as mine boots in much closer to 5 seconds, I suspect you must be running Windows 7 without a really fast SSD (I'm running 8 with an Intel 240GB 520 series drive).
  • sna1970 - Tuesday, June 4, 2013 - link

    Not really.

    Storage is never a bottleneck. If you have enough memory, everything loads into memory once and that's it.

    You just need to eliminate the need to read the same data again.

    Try maxing out your memory at 32GB or 64GB, make a 24GB RAM disk and install the applications you want there. You will have instantly-loading programs. There are no real bottlenecks.
  • kevith - Wednesday, June 5, 2013 - link

    "Closer to 5 seconds".... From what point do you start counting...?
  • seapeople - Wednesday, June 5, 2013 - link

    Probably after he logs in.
  • compvter - Friday, July 19, 2013 - link

    5 seconds would be very fast; I get to the Windows desktop in W8 in 11 seconds, counted from pressing the power button on my laptop to reaching the real desktop (not Metro). I have an older Samsung 830, a first generation i7 CPU and 16GB of memory.
  • ickibar1234 - Friday, December 20, 2013 - link

    After adding an SSD to a SATA 3 computer, it's most likely driver initialization, timers and things like that that are the bottleneck during bootup.
  • Occas - Tuesday, June 4, 2013 - link

    Regarding PC boot time, for me it was easily my motherboard's POST time.

    My old Asus took a minimum of 20 seconds to POST! When I bought my new system I researched POST times and ended up with an ASRock which POSTs in about 5 seconds. Boom, now I can barely sit down before I'm ready to log in. :)
