The SSD Relapse: Understanding and Choosing the Best SSD
by Anand Lal Shimpi on August 30, 2009 12:00 AM EST - Posted in Storage
The Cleaning Lady and Write Amplification
Imagine you’re running a cafeteria. This is the real world, so your cafeteria has a finite number of plates, say 200 in total. Your cafeteria is open for dinner, and over the course of the night you may serve a total of 1000 people. The guests outnumber the plates 5-to-1; thankfully, they don’t all eat at once.
You’ve got a dishwasher who cleans the dirty dishes as the tables are bussed and then puts them in a pile of clean dishes for the servers to use as new diners arrive.
Pretty basic, right? That’s how an SSD works.
Remember the rules: you can read from and write to pages, but you must erase entire blocks at a time. If a block is full of invalid pages (files that have been overwritten at the file system level for example), it must be erased before it can be written to.
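To make those rules concrete, here's a minimal sketch in Python of how a block tracks its pages. The 128-pages-per-block geometry and the state names are assumptions for illustration only, not a model of any vendor's firmware; the point is the read/write/erase asymmetry.

```python
PAGES_PER_BLOCK = 128  # assumed geometry, purely for illustration

class Block:
    """A toy NAND block: pages are written one at a time, erased all at once."""

    def __init__(self):
        # each page is 'empty' (erased), 'valid' (live data) or 'invalid' (stale data)
        self.pages = ['empty'] * PAGES_PER_BLOCK

    def write_page(self, index):
        # a page can only be programmed if it hasn't been written since the last erase
        if self.pages[index] != 'empty':
            raise RuntimeError("page in use: the whole block must be erased first")
        self.pages[index] = 'valid'

    def invalidate_page(self, index):
        # the file system overwrote or deleted this data, so the copy here is now stale
        self.pages[index] = 'invalid'

    def erase(self):
        # erasure is all-or-nothing: the entire block is wiped in one operation
        self.pages = ['empty'] * PAGES_PER_BLOCK
```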
All SSDs have a dishwasher of sorts, except instead of cleaning dishes, its job is to clean NAND blocks and prep them for use. The cleaning algorithms don’t really kick in when the drive is new, but put a few days, weeks or months of use on the drive and cleaning will become a regular part of its routine.
Remember this picture?
It (roughly) describes what happens when you go to write a page of data to a block that’s full of both valid and invalid pages.
In actuality the write happens more like this: a new block is allocated, the valid data is copied to the new block (including the data you wish to write), and the old block is sent for cleaning, from which it emerges completely wiped. The wiped block is added to the pool of empty blocks. As the controller needs them, blocks are pulled from this pool, used, and eventually recycled back into it.
IBM's Zurich Research Laboratory actually made a wonderful diagram of how this works, but it's a bit more complicated than I need it to be for my example here today so I've remade the diagram and simplified it a bit:
The diagram explains what I just outlined above. A write request comes in; a new block is allocated, used, and then added to the list of used blocks. The blocks with the least amount of valid data (or the most invalid data) are scheduled for garbage collection, cleaned, and added to the free block pool.
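Here is a rough sketch of that lifecycle, building on the toy Block class above: the controller pulls a destination block from the free pool, picks the used block with the most invalid pages as the victim, relocates the victim's valid pages, then erases it and returns it to the pool. The function name and selection policy are my assumptions for illustration, not IBM's or any controller's actual algorithm.

```python
def garbage_collect(free_blocks, used_blocks):
    """Reclaim one used block, as in the simplified diagram above."""
    # choose the victim with the most invalid pages (least valid data to copy)
    victim = max(used_blocks, key=lambda b: b.pages.count('invalid'))
    used_blocks.remove(victim)

    # relocate the victim's still-valid pages into a fresh block from the free pool
    destination = free_blocks.pop()
    next_page = 0
    for state in victim.pages:
        if state == 'valid':
            destination.write_page(next_page)
            next_page += 1
    used_blocks.append(destination)

    # wipe the victim and recycle it back into the free block pool
    victim.erase()
    free_blocks.append(victim)
```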
We can actually see this in action if we look at write latencies:
Average write latencies for writing to an SSD, even with random data, are extremely low. But take a look at the max latencies:
While average latencies are very low, the max latencies are around 350x higher. They are still low compared to a mechanical hard disk, but what's going on to make the max latency so high? All of the cleaning and reorganization I've been talking about. It rarely makes a noticeable impact on performance (hence the ultra low average latencies), but this is an example of it happening.
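A toy illustration of why the average hides those spikes; the 0.1 ms and 35 ms figures below are made up for the example, not measurements from any drive.

```python
fast_write_ms = 0.1      # a write that lands in an already-erased page
cleanup_write_ms = 35.0  # a rare write that stalls behind a block erase and copy

# 999 ordinary writes plus 1 that gets caught behind cleaning
latencies = [fast_write_ms] * 999 + [cleanup_write_ms]

average = sum(latencies) / len(latencies)
print(f"average: {average:.3f} ms, max: {max(latencies):.1f} ms")
# the average stays well under a millisecond while the max is ~350x the typical write
```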
And this is where write amplification comes in.
In the diagram above we see another angle on what happens when a write comes in. A free block is used (when available) for the incoming write. That's not the only write that happens, however; eventually you have to perform some garbage collection so you don't run out of free blocks. The block with the most invalid data is selected for cleaning; its valid data is copied to another block, after which the old block is erased and added to the free block pool. On the left of the diagram you'll see the size of the write request, but on the very right you'll see how much data was actually written once you take garbage collection into account. This inequality is called write amplification.
Intel claims very low write amplification on its drives, although over the lifespan of your drive a factor below 1.1 seems highly unlikely.
The write amplification factor is the amount of data the SSD controller has to write relative to the amount of data the host controller wants to write. A write amplification factor of 1 is perfect: it means you wanted to write 1MB and the SSD’s controller wrote 1MB. A write amplification factor greater than 1 isn't desirable, but it's an unfortunate fact of life. The higher your write amplification, the quicker your drive will die and the lower its performance will be. Write amplification, bad.
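As a back-of-the-envelope example, the factor is simply NAND bytes written divided by host bytes requested. The 0.5MB of relocated data below is an assumption chosen just to show the arithmetic.

```python
def write_amplification(host_bytes, nand_bytes):
    """NAND data actually written divided by data the host asked to write."""
    return nand_bytes / host_bytes

host_write = 1 * 1024 * 1024        # the host wants to write 1MB
gc_copies = 512 * 1024              # but cleaning also relocated 0.5MB of valid data
nand_write = host_write + gc_copies # so the controller wrote 1.5MB of NAND

print(write_amplification(host_write, nand_write))  # 1.5 -- and higher is worse
```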
295 Comments
Mr Perfect - Tuesday, September 1, 2009
Probably demand. When I saw that price, I shopped around to see what was going on. Answer? Everyone else seems to be out of stock.
Naccah - Tuesday, September 1, 2009
I've been waiting to get an SSD till Win 7 is released, hoping that the prices would have stabilized somewhat by that time. The recent price fluctuation is disturbing, as is the availability of the X25 G2. When the G2 first hit Newegg I was surfing the site and could have grabbed one for $230, but like I said I was content to wait. Now I'm having second thoughts and wondering if I should grab one if the price goes down again.
gfody - Tuesday, September 1, 2009
That doesn't explain the 160GB - it's not even in stock yet. I have been waiting a month for this drive to be in stock and here they more than double the price one day before the ETA date! It's an outrage... if I'd known the drive was $1000 I would have bought something else. Way to screw your customers, Newegg.
araczynski - Tuesday, September 1, 2009
A) Your intro has the familiar smell of tomshardware; you'd do well to be without that, it's unbecoming.
B) Your final words smell of the typical big corp establishment mentality: bigger, faster, more expensive, consumers want! If the market is any indication, the complete opposite is the truth: people want 'good enough' for cheap, as the recent Wired magazine article more or less said. Granted, Wired isn't the source for in-depth technical reading, but it is sometimes a good source for getting the pulse of things... sometimes, still, more often than anything coming out of the mouths of the big corps.
C) everything in between A and B is great though :) Please leave the opinions/spins to the PR machines.
Personally, the cost of these things is still more than I'm willing to pay for any speed increase. The idiotic shenanigans of firmwares and features only present after special downloads/phases of the moon make me just blow off the whole technology for a few more years. I'll revisit this in say 2 or 3 years; perhaps the MLCs will finally die off and the SLCs (unless I have the 2 backwards) or something better rolls out with a longer lifespan.
Anand Lal Shimpi - Tuesday, September 1, 2009
A) My intention with the intro was to convey how difficult it was for me to even get to the point where I felt remotely comfortable publishing this article. I don't like posting something that I don't feel is worthy of the readership's reception. My sincere apologies if it came off as arrogant or anything other than an honest expression of how difficult it was to complete. I was simply trying to bring you all behind the scenes and take you into the crazy place that's my mind for a bit :)
B) I agree that good enough for cheap is important, hence my Indilinx recommendation at the end. But we can't stifle innovation. We need bigger, better, faster (but not necessarily more expensive, thank you Moore's Law) to keep improving. I remember when the P3 hit 1GHz and everyone said we don't need faster CPUs. If we stopped back then we wouldn't have the apps/web we have today since developers can count on a large install base of very fast processors.
Imagine what happens in another decade when everyone has many-core CPUs in their notebooks...
Take care,
Anand
DynacomDave - Tuesday, September 29, 2009
First - Anand, thanks for the good work and the great article.
I too have an older laptop that has a PATA interface that I'd like to upgrade with an SSD. I contacted Super Talent about their MasterDrive EX2 - IDE/PATA. Their response was: "We only use the Indilinx controller for SATA drives, like the UltraDrive series. We use a Phison controller for EX2/IDE drives."
I want to improve performance not degrade it. I don't know if this will perform like the Indilinx or like the old SSDs. Can anyone help me with this?
bji - Tuesday, September 1, 2009
There are a few more smaller players in the SSD controller game that don't ever show up in these reviews. They are Silicon Motion and Mtron. The reason I am interested in them is because I have a laptop that is PATA only (it's old I know but I love it and I want to extend its life with an SSD), and I am trying to get an SSD that works in it.
Turns out the Mtron MOBI SSDs are not compatible with this laptop. I have no idea why. So I have put an order into eBay for an SSDFactory SSD and am crossing my fingers that it will work.
Mtron makes SATA SSD drives so they could be included in these reviews, and I don't know why they are excluded. It would be interesting to see how their controllers stack up. I personally own two Mtron SSD drives (both 32 GB SLC drives) that I tried to get to work in my laptop and failed to - so one is now the system disk in my desktop and it is very fast (at least compared to platter drives, maybe not compared to newer SSDs). The other one I am still trying to find a use for.
The only Silicon Motion controller drives I have seen are PATA drives so they clearly are a different beast than the SATA drives typically reviewed in these articles. But I would still be interested in seeing the numbers for the Silicon Motion controller just to get an idea of how well they stack up against the other controllers, especially for the 4K random writes tests. The PATA interface ought not to be the limiting factor for that test at least.
paesan - Tuesday, September 1, 2009
I see NewEgg has a Patriot Torqx and a Patriot Torqx M28. What is the difference in the 2 drives?
paesan - Tuesday, September 1, 2009
After reading through the Patriot forum I found the differences. The M28 has 128MB of cache compared to 64MB on the non-M28. The biggest difference is that the M28 uses a Samsung controller instead of the Indilinx controller on the non-M28. I wonder why they switched controllers.
valnar - Tuesday, September 1, 2009
It seems to me that using TRIM would make a "used" SSD faster, no doubt, but is it required? Would it be okay to buy an SSD for a Windows XP box and just set it and forget it? Even used and fragmented, it appears to be faster than any hard drive. My second question is longevity: how long would one last compared to a hard drive?