Performance Consistency

Performance consistency tells us a lot about the architecture of these SSDs and how they handle internal fragmentation. SSDs cannot deliver perfectly consistent IO latency because every controller must eventually perform some amount of garbage collection or defragmentation in order to keep operating at high speed. When and how an SSD decides to run these cleanup routines directly impacts the user experience, since inconsistent performance translates into application slowdowns.

To test IO consistency, we fill a secure erased SSD with sequential data to ensure that all user accessible LBAs (Logical Block Addresses) have data associated with them. Next we kick off a 4KB random write workload across all LBAs at a queue depth of 32 using incompressible data. The test is run for just over half an hour and we record instantaneous IOPS every second.
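The workload above can be approximated with an fio job file. This is a sketch only: the device name, runtime, and log file name below are placeholder assumptions, not the actual test script used for the review.

```ini
; Illustrative fio job approximating the consistency test described above
; /dev/sdX, runtime, and log names are placeholder assumptions
[global]
filename=/dev/sdX
ioengine=libaio
direct=1
refill_buffers=1          ; keep write buffers incompressible

[precondition]
rw=write                  ; sequential fill so every user LBA holds data
bs=128k
iodepth=32

[consistency]
stonewall                 ; start only after the fill completes
rw=randwrite              ; 4KB random writes across the full LBA range
bs=4k
iodepth=32
time_based=1
runtime=2000              ; just over half an hour
log_avg_msec=1000         ; record one IOPS sample per second
write_iops_log=consistency
```

The `write_iops_log` output can then be plotted directly to produce consistency graphs like the ones in this review.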

We also test drives with added over-provisioning by limiting the LBA range. This gives us a look at the drive's behavior with varying amounts of empty space, which is frankly a more realistic approach for client workloads.
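The arithmetic behind this can be sketched in a few lines of Python (the function name and capacities are hypothetical, chosen only for illustration): any capacity hidden from the host by restricting the addressable LBA range becomes spare area the controller can use for garbage collection.

```python
def effective_op(total_capacity_gb, usable_lba_gb, factory_op_gb=0):
    """Spare area expressed as a fraction of the usable (addressable) capacity.

    total_capacity_gb: raw NAND capacity exposed as user LBAs by default
    usable_lba_gb:     capacity left addressable after limiting the LBA range
    factory_op_gb:     any spare area the vendor already reserves
    """
    spare = total_capacity_gb + factory_op_gb - usable_lba_gb
    return spare / usable_lba_gb

# Restricting a hypothetical 1000 GB drive's LBA range to 800 GB
# leaves 200 GB of spare area, i.e. 25% over-provisioning.
print(f"{effective_op(1000, 800):.0%}")
```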

Each of the three graphs has its own purpose. The first covers the full duration of the test on a logarithmic scale. The second and third zoom into the beginning of steady-state operation (t=1400s) at different scales: the second uses a log scale for easy comparison, whereas the third uses a linear scale to better visualize the differences between drives. Click the dropdown selections below each graph to switch the source data.

For a more detailed description of the test and why performance consistency matters, read our original Intel SSD DC S3700 article.

Mushkin Reactor 1TB
Default
25% Over-Provisioning

Despite the use of newer and slightly lower-performance 16nm NAND, the Reactor's performance consistency is actually marginally better than that of the other SM2246EN based SSDs we have tested. It's still worse than most of the other drives, but at least the increase in capacity didn't negatively impact the consistency, as happens with some drives.

Transcend SSD370 256GB
Default
25% Over-Provisioning

 

TRIM Validation

To test TRIM, I filled the drive with sequential 128KB data and proceeded with a 30-minute random 4KB write (QD32) workload to put the drive into steady-state. After that I TRIM'ed the drive by issuing a quick format in Windows and ran HD Tach to produce the graph below.

And TRIM works as expected.

69 Comments

  • prime2515103 - Monday, February 9, 2015 - link

    Is it just me or are SSD reviews getting really boring? Every time I see a new one I think, "Maybe something new and exciting this time..." but it never happens. I think SATA needs to be put to rest.
  • piroroadkill - Monday, February 9, 2015 - link

    Yeah, SATA3 is making everything boring as hell now.
  • ddriver - Monday, February 9, 2015 - link

    That's a limiting factor only on sequential access. There is still huge potential to be harnessed for random access, but nobody seems to be in a hurry to boost IOPS.
  • Kristian Vättö - Monday, February 9, 2015 - link

    SATA, or more accurately AHCI, is the limit when it comes to IOPS/latency.
  • cm2187 - Friday, February 13, 2015 - link

    I can only speak for myself, but personally I could use more capacity than speed. There is very little of what I do that would give me a different experience at twice the speed of the current SSD specs. But give me a 4TB SSD as cheap as 6TB HDDs are today and now I can replace all these spinning disks.
  • 0ldman79 - Wednesday, March 4, 2015 - link

    Agreed.

    I might keep a couple of mechanical drives, but I'd love for the price to be closer to the mechanical drives for the capacity.

    Too bad that's not the way our market works in much of anything these days.
  • Solandri - Monday, February 9, 2015 - link

    PCIe actually doesn't make that big a difference. Your perception of how fast/slow things are is in terms of seconds you have to wait. These benchmarks are in MB/s which is the inverse of your perception. If you plot these benchmarks correctly in sec/MB, all these SSDs are pretty much the same, and the PCIe SSDs only give you a small fraction of the speedup you got going from SATA2 to SATA3. e.g. Imagine you need to read 1000 MB.

    10 sec = 100 MB/s HDD
    4 sec = 250 MB/s SATA2 SSD (6 sec improvement)
    2 sec = 500 MB/s SATA 3 SSD (2 sec improvement)
    1.25 sec = 800 MB/s PCIe SSD (0.75 sec improvement)
  • nathanddrews - Monday, February 9, 2015 - link

    This is very true, but doesn't make me want it less. :-D

    What kills me is the lack of "affordable" 2TB+ drives. How is it that we go from $400 for 1TB in a 2.5" drive to $1,500-$4,000 for 2TB? I expected that all these die shrinks and 3D technologies would have made 2TB+ SSDs possible in the ~$700-$900 space, but there's nothing to buy! FFS, what gives?
  • DanNeely - Monday, February 9, 2015 - link

    It's a giant game of chicken, and no one wants to be the first to kick over the enterprisey pricing gravy train. We saw the same thing a few years ago when 512GB drives started at $350 but the cheapest 1TB ones were well north of $1k.

    At the risk of sounding overly cynical, I suspect the first vendor to blink will be whoever is first to get either the higher NAND density or the 32-chip controller needed to make a 4TB flash drive in a 2.5" form factor.
  • Cogman - Monday, February 9, 2015 - link

    Mostly it comes down to demand. Nobody is really demanding 2TB SSDs. As a result, there is little competition and little incentive to make an $800 drive (even though it is totally feasible).
