The SSD Relapse: Understanding and Choosing the Best SSD
by Anand Lal Shimpi on August 30, 2009 12:00 AM EST - Posted in Storage
The Cleaning Lady and Write Amplification
Imagine you're running a cafeteria. This is the real world, and your cafeteria has a finite number of plates, say 200 for the entire cafeteria. Your cafeteria is open for dinner, and over the course of the night you may serve a total of 1000 people. The guests outnumber the plates 5-to-1; thankfully, they don't all eat at once.
You’ve got a dishwasher who cleans the dirty dishes as the tables are bussed and then puts them in a pile of clean dishes for the servers to use as new diners arrive.
Pretty basic, right? That’s how an SSD works.
Remember the rules: you can read from and write to pages, but you must erase entire blocks at a time. If a block is full of invalid pages (files that have been overwritten at the file system level for example), it must be erased before it can be written to.
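To make those rules concrete, here is a minimal sketch in Python; the page and block sizes are purely illustrative and none of this reflects any particular drive's firmware:

PAGE_SIZE_KB = 4                 # illustrative page size
PAGES_PER_BLOCK = 128            # illustrative block size (512KB per block)

class Block:
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK      # None = erased, writable
        self.invalid = [False] * PAGES_PER_BLOCK   # True = data superseded at the file system level

    def write_page(self, index, data):
        # Pages are the write unit, but a programmed page can't simply be rewritten in place.
        if self.pages[index] is not None:
            raise ValueError("page already programmed; the whole block must be erased first")
        self.pages[index] = data

    def invalidate_page(self, index):
        # An overwrite at the file system level just marks the old copy invalid; nothing is erased yet.
        self.invalid[index] = True

    def erase(self):
        # Erases only work on the entire block.
        self.pages = [None] * PAGES_PER_BLOCK
        self.invalid = [False] * PAGES_PER_BLOCK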
All SSDs have a dishwasher of sorts, except instead of cleaning dishes, its job is to clean NAND blocks and prep them for use. The cleaning algorithms don’t really kick in when the drive is new, but put a few days, weeks or months of use on the drive and cleaning will become a regular part of its routine.
Remember this picture?
It (roughly) describes what happens when you go to write a page of data to a block that’s full of both valid and invalid pages.
In actuality the write happens more like this: a new block is allocated, valid data is copied to the new block (including the data you wish to write), and the old block is sent for cleaning, emerging completely wiped. The old block is then added to the pool of empty blocks. As the controller needs them, blocks are pulled from this pool, used, and recycled back into it.
IBM's Zurich Research Laboratory actually made a wonderful diagram of how this works, but it's a bit more complicated than I need it to be for my example here today, so I've remade the diagram and simplified it a bit:
The diagram explains what I just outlined above. A write request comes in; a new block is allocated, used, and then added to the list of used blocks. The blocks with the least amount of valid data (or the most invalid data) are scheduled for garbage collection, cleaned, and added to the free block pool.
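Building on the toy Block sketch above, the policy in the diagram might look something like this; real controllers also maintain a logical-to-physical mapping table and factor in wear leveling, so treat this purely as an illustration:

def garbage_collect(used_blocks, free_blocks):
    # Pick the used block with the most invalid pages (i.e. the least valid data) as the victim.
    victim = max(used_blocks, key=lambda block: sum(block.invalid))
    target = free_blocks.pop()

    # Copy the still-valid pages into a fresh block.
    destination = 0
    for i, data in enumerate(victim.pages):
        if data is not None and not victim.invalid[i]:
            target.write_page(destination, data)
            destination += 1

    used_blocks.remove(victim)
    used_blocks.append(target)

    # Wipe the victim and recycle it into the free block pool.
    victim.erase()
    free_blocks.append(victim)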
We can actually see this in action if we look at write latencies:
Average write latencies for writing to an SSD, even with random data, are extremely low. But take a look at the max latencies:
While average latencies are very low, the max latencies are around 350x higher. They are still low compared to a mechanical hard disk, but what's going on to make the max latency so high? All of the cleaning and reorganization I've been talking about. It rarely makes a noticeable impact on performance (hence the ultra low average latencies), but this is an example of it happening.
And this is where write amplification comes in.
In the diagram above we see another angle on what happens when a write comes in. A free block is used (when available) for the incoming write. That's not the only write that happens, however; eventually you have to perform some garbage collection so you don't run out of free blocks. The block with the most invalid data is selected for cleaning; its valid data is copied to another block, after which the previous block is erased and added to the free block pool. In the diagram above you'll see the size of our write request on the left, and on the very right you'll see how much data was actually written once you take garbage collection into account. This inequality is called write amplification.
Intel claims very low write amplification on its drives, although over the lifespan of your drive a factor below 1.1 seems highly unlikely.
The write amplification factor is the amount of data the SSD controller has to write in relation to the amount of data the host controller wants to write. A write amplification factor of 1 is perfect: it means you wanted to write 1MB and the SSD's controller wrote 1MB. A write amplification factor greater than 1 isn't desirable, but it's an unfortunate fact of life. The higher your write amplification, the quicker your drive will die and the lower its performance will be. Write amplification, bad.
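For a rough sense of the arithmetic, here is a toy bookkeeping sketch; the sizes below are invented for illustration, not measured from any drive:

host_bytes = 0   # what the host asked to write
nand_bytes = 0   # what the controller actually programmed to NAND

def host_write(size_bytes, gc_copy_bytes=0):
    # Record a host write plus any valid data the controller had to relocate during garbage collection.
    global host_bytes, nand_bytes
    host_bytes += size_bytes
    nand_bytes += size_bytes + gc_copy_bytes

host_write(4096)                          # lands in a free page, no extra copying
host_write(4096, gc_copy_bytes=393216)    # triggers GC that relocates 384KB of valid data first

print(f"write amplification factor: {nand_bytes / host_bytes:.1f}x")   # ~49x for this toy case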
295 Comments
Jedi2155 - Monday, August 31, 2009
Anandtech has always been known for its in-depth analysis; you're just looking for a simple review list. I much prefer these detailed articles to just a list of performance numbers and simple recommendations that most people could write if provided the proper hardware. I love how Anand always writes excellent, very well detailed articles that are still SIMPLE to understand. A number of other sites may offer similar levels of detail but are sometimes a bit too difficult to comprehend without a background in the same field.
KommisMar - Sunday, April 4, 2010
Anand, I read your long series of articles on SSDs today, and just wanted to say thanks for writing the most informative and interesting series of tech articles I've read in years. I've been avoiding SSDs because my first experience with one was horrible. The sustained transfer rates were no better than a traditional hard drive, and the system halting for several seconds on each random write operation was too much for me to stand.
I was so sick of the SSD coverage that I was reading on other websites because none of them seemed to answer my biggest question, which was "Which SSD won't bring my system to a screeching halt every time it needs to write a little data?"
Thanks for answering that question and explaining what to look for and what to avoid. It sounds like it's a good time for me to give SSDs another shot.
jamesy - Thursday, April 22, 2010
That about sums it up: disappointment. Although this was a top-caliber SSD article, like I have come to love and expect out of Anand, this article didn't make my buying decision any easier at all. In fact, it might have made it more complicated. I understand Intel, Indilinx, and SandForce are good, but there are so many drives out there, and most suck. This article was amazing by most standards, but the headline should be changed: remove the "Choosing the Best SSD" part.
Maybe "Choosing the right controller before sorting through a hundred drives" would be an appropriate replacement.
Do I still go with the Intel 160GB X25-M G2?
Do I get the add-on SATA 6Gbps card and get the C300?
Do I save the money and get an Indilinx drive? Is the extra money for the Intel/C300 drive worth it?
These are the main questions enthusiasts have, and while this article contained a great overview of the market in Q3 2009, SSD tech has progressed dramatically. Only now, I think, are we getting to the point where we could publish a buying guide and have it last a few months.
I trust Anandtech; I just wish they would flat-out make a buying guide and assign points in different categories (points for sequential read/write, points for random read/write, points for real-life or perceived performance, points for reliability, and points for price). Take all of these points, add them up, and make a table please.
A few graphs can help, but the 200 included in each article are overwhelming and do nothing to simplify things or make me confident in my purchase.
It's great to know how drives score and how they perform. But it's even more important to know that you bought the right drive.
mudslinger - Monday, June 28, 2010
This article is dated 8/30/2009!!!! It's ancient history.
Since then newer, faster SSDs have been introduced to the market.
And their firmware has all been updated to address known past issues.
This article is completely irrelevant and should be taken down or updated.
I’m constantly amazed at how old trash info is left lingering about the web for search engines like Google to find. Just because Google lists an article doesn’t make it legit.
cklein - Monday, July 12, 2010
Actually I am trying to find a reason to use an SSD.
1. Server Environment
Whether it's a web server or a SQL server, I don't see a way we can use an SSD. My server comes with plenty of RAM, 32GB or 64GB. The OS starts a little bit slow, but that's OK, since it never stops after it's started. And everything is loaded into RAM, so no page file usage is needed. So, really, why do we need an SSD here to boost the OS start time or application start time?
For a SQL server database, that's even worse. Let's say I have a 10GB SQL server database, and it grows to 50GB after a year. Can you imagine how many random writes and updates happen in between? I am not quite sure, but this could wear out the SSD really quickly.
2. For a desktop/laptop, I can probably say: install the OS and applications on the SSD, and leave everything else on other drives? And even create the page file on another drive? I feel an SSD is only good for read-only access; with frequent writes it may wear out pretty quickly. I do development, and I am not even sure I should save source code on the SSD, since compiling and building surely write a lot to it.
So overall, I don't see how it fits in a server environment, but for a desktop/laptop, maybe? Even so, it's limited?
Someone correct me if I am wrong?
TCQU - Thursday, July 29, 2010
Hi people, I'm looking at getting a new MacBook Pro with an SSD.
BUT I heard something about the 128GB SSD for Apple's machines being made by Samsung. I was ready to buy it, but then I heard that Apple's SSDs are much slower than the others on the market. Then I read this. So now I'm really confused.
What should I do?
Buy Apple's MacBook Pro with the 128GB SSD,
or should I buy it without and replace it with another SSD? Thoughts? Please help me out.
thanks
Thomas
marraco - Friday, August 13, 2010
Why are SandForce controllers ignored? I'm extremely disappointed with the compiler benchmark. Please test .NET (with lots of class source files and dependencies). It seems like nothing speeds up compilation: not the CPU, not memory, not the SSD. It makes no sense.
sylvm - Thursday, October 7, 2010
I found this article to be of very good quality. I was looking for a similar article about ExpressCard SSDs using the PCIe port, but found nothing about their rewrite performance.
The best I found is this review http://www.pro-clockers.com/storage/192-wintec-fil... which says nothing about it.
ExpressCard SSDs would allow a good speed/price compromise: buying a relatively small and cheap one for the OS and software, while keeping the HDD for data.
Does anyone have some info about it?
Best regards,
Sylvain
paulgj - Saturday, October 9, 2010
Well I was curious about the flash in my Agility 60GB so I opened it up and noted a different Intel part number - mine consisted of 8 x 29F64G08CAMDB chips whereas the pic above shows the 29F64G08FAMCI. I wonder what the difference is?
-Paul