The SSD Relapse: Understanding and Choosing the Best SSD
by Anand Lal Shimpi on August 30, 2009 12:00 AM EST
Used vs. New Performance: Revisited
Nearly all good SSDs perform at their best when brand new. None of the blocks hold any data, every write happens at full speed, all is well. Over time your drive gets written to, all of its blocks end up occupied with data (both valid and invalid), and now every time you write to the SSD its controller has to go through that painful read-modify-write and cleaning process.
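To make the penalty concrete, here's a toy model (purely illustrative, with made-up page and block sizes; no real controller works exactly like this): once a block's pages hold data, even a single-page write forces the controller to read the surviving pages, erase the whole block, and write everything back.

```python
# Toy model of an SSD erase block, to illustrate the read-modify-write penalty.
# Sizes and structure are illustrative only, not any real controller's design.

PAGES_PER_BLOCK = 128          # e.g. 128 x 4KB pages per 512KB erase block

class Block:
    def __init__(self):
        self.pages = [None] * PAGES_PER_BLOCK   # None = empty (erased)

    def write_page(self, index, data):
        if self.pages[index] is None:
            # Fast path: the page is erased, the write goes straight in.
            self.pages[index] = data
            return "direct write (fast)"
        # Slow path: the page already holds data. The controller must read
        # the valid pages, erase the entire block, then rewrite everything --
        # the read-modify-write described above.
        survivors = list(self.pages)
        self.pages = [None] * PAGES_PER_BLOCK   # block erase
        for i, d in enumerate(survivors):
            if d is not None and i != index:
                self.pages[i] = d               # rewrite the untouched pages
        self.pages[index] = data
        return "read-modify-write (slow)"

blk = Block()
print(blk.write_page(0, b"new"))    # direct write (fast)
print(blk.write_page(0, b"newer"))  # read-modify-write (slow)
```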
In the Anthology I simulated this worst-case used state by first filling the drive with data, deleting the partition, then installing the OS and running my benchmarks. This worked very well because it filled every single flash block with data. The OS installation and actual testing added a few sprinkles of randomness that made the scenario even more strenuous, which I liked.
The problem here is that if a drive properly supports TRIM, the act of formatting the drive will erase all of the wonderful used data I purposefully filled the drive with. My “used” case on a drive supporting TRIM will now just be like testing a drive in a brand new state.
To prove this point I provide you with an example of what happens when you take a drive supporting TRIM, fill it with data and then format the drive:
SuperTalent UltraDrive GX 1711 | 4KB Random Write Speed
Clean Drive | 13.1 MB/s
Used Drive | 6.93 MB/s
Used Drive After TRIM | 12.9 MB/s
Oh look, performance doesn’t really change. The cleaning process takes longer now but other than that, the performance is the same.
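The numbers in that table come from a dedicated I/O benchmark run against the drive; the sketch below is only a rough, file-based stand-in (hypothetical file size and duration, no control over queue depth) to show what a 4KB random-write test actually does.

```python
import os, random, time

# Rough stand-in for a 4KB random-write test against a scratch file.
# Real benchmarks hit the drive at controlled queue depths; this only
# illustrates the access pattern, it won't reproduce the table above.

FILE_SIZE = 256 * 1024 * 1024      # 256MB scratch file (arbitrary size)
BLOCK = 4096                       # 4KB writes
DURATION = 10                      # seconds to run

path = "scratch.bin"
with open(path, "wb") as f:
    f.truncate(FILE_SIZE)          # sparse file to write into

buf = os.urandom(BLOCK)
fd = os.open(path, os.O_WRONLY)
writes = 0
end = time.time() + DURATION
while time.time() < end:
    offset = random.randrange(FILE_SIZE // BLOCK) * BLOCK
    os.lseek(fd, offset, os.SEEK_SET)
    os.write(fd, buf)
    os.fsync(fd)                   # force each write out to the drive
    writes += 1
os.close(fd)
os.remove(path)

print(f"{writes / DURATION:.0f} writes/s, "
      f"{writes * BLOCK / DURATION / 1e6:.1f} MB/s")
```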
So, I need a new way to test. It’s a shame because I’m particularly attached to the old way I tested, mostly because it provides a very stressful situation for the drives to deal with. After all, I don’t want to fool anyone into thinking a drive is faster than it is.
Once TRIM is enabled on all drives, the way I will test is by filling a drive after it’s been graced with an OS. I will fill it with both valid and invalid data, delete the invalid data and measure performance. This will measure how well the drive performs closer to capacity as well as how well it can TRIM data.
Unfortunately, no drives properly support TRIM yet. The beta Indilinx firmware with TRIM support works well, unless you put your system to sleep; then there’s a chance you might lose your data. Whoops. There’s also the problem of Intel’s Matrix Storage Manager not passing TRIM to your drives. All of this will get fixed before the end of the year, but it’s just a bit too early to get TRIM happy.
What we get today is the first stage of migrating the way we test. In order to simulate a real user environment I take a freshly secure erased drive, install Windows 7 x64 on it (no cloning, full install this time), then install drivers/apps, then fill the remaining space on the drive and delete it. This fills the drive with invalid data that the drive must keep track of and juggle, much like what you'd see by simply using your system.
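The fill-and-delete step needs nothing fancier than one huge junk file. A minimal sketch follows (drive letter, file name and chunk size are arbitrary choices, not the exact procedure used for the article): write until the volume is full, then delete the file. On a setup where TRIM isn't passed through, the freed flash blocks still hold that stale data.

```python
import os, shutil

# Fill the volume's free space with junk, then delete it. Without TRIM the
# controller still sees those blocks as occupied by (now invalid) data --
# exactly the "used" state being simulated. Path and chunk size are
# illustrative; the loop stops within one chunk of completely full.

TARGET = "C:\\filler.bin"
CHUNK = 64 * 1024 * 1024            # write 64MB at a time

chunk = os.urandom(CHUNK)
with open(TARGET, "wb") as f:
    try:
        while shutil.disk_usage(os.path.dirname(TARGET)).free > CHUNK:
            f.write(chunk)
            f.flush()
            os.fsync(f.fileno())
    except OSError:
        pass                        # disk full -- which is the goal

os.remove(TARGET)                   # the filesystem frees the space, but the
                                    # flash blocks still hold stale data
```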
I’m using the latest IMSM driver so TRIM doesn’t get passed to the drives; I’m such a jerk to these poor SSDs.
I’ll look at both new and used performance on the coming pages. Once TRIM gets here in full force I’ll just start using it, and we won't have to worry about comparing new vs. used performance.
The Test
CPU: | Intel Core i7 965 running at 3.2GHz (Turbo & EIST Disabled)
Motherboard: | Intel DX58SO (Intel X58)
Chipset: | Intel X58
Chipset Drivers: | Intel 9.1.1.1015 + Intel IMSM 8.9
Memory: | Qimonda DDR3-1066 4 x 1GB (7-7-7-20)
Video Card: | eVGA GeForce GTX 285
Video Drivers: | NVIDIA ForceWare 190.38 64-bit
Desktop Resolution: | 1920 x 1200
OS: | Windows 7 x64
Comments
GourdFreeMan - Tuesday, September 1, 2009 - link
Yes, rewriting a cell will refill the floating gate with trapped electrons to the proper voltage level unless the gate has begun to wear out, so backing up your data, secure erasing your drive and copying the data back will preserve the life (within reason) of even drives that use minimalistic wear leveling to safeguard data. Charge retention is only a problem for users if they intend to use the drive for archival storage, or operate the drive at highly elevated temperatures.
It is a bigger problem for flash engineers, however, and one of the reasons why MLC cannot be moved easily to more bits per cell without design changes. To store n-bits in a single cell you need 2^n separate energy levels to represent them, and thus each bit is only has approximately 1/(2^(n-1)) the amount of energy difference between states when compared to SLC using similar designs and materials.
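A quick way to see that scaling written out (this treats the usable threshold-voltage window ΔV as fixed and ignores read-margin and ECC details, which the comment doesn't go into):

```latex
% One cell must distinguish 2^n charge states inside the same window \Delta V.
% SLC (n = 1) has a single gap of roughly \Delta V between its two states;
% an n-bit cell has 2^n - 1 gaps, so each margin shrinks to
\[
  \mathrm{gap}(n) \approx \frac{\Delta V}{2^{n}-1},
  \qquad
  \frac{\mathrm{gap}(n)}{\mathrm{gap}(1)} = \frac{1}{2^{n}-1}
\]
% i.e. on the order of 1/2^{n-1} of the SLC margin: roughly a third of it
% for 2-bit MLC, roughly a seventh for 3-bit cells.
```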
Zheos - Tuesday, September 1, 2009 - link
Man you seem to know a lot about what you're talking about :)
Yeah now i understand why SSD for database and file storage server would be quite a bad idea.
But for personal windows & everyday application storage, seems like a pure win to me if you can afford one :)
I was only worried about its life-span, but thanks to you and your quick replies (and for the maths and technical stuff about how it really works ;) I'm sold on the fact that I will buy one soon.
The G2 from Intel seems like the best choice for now, but I'll just wait and see how things go once TRIM is enabled on almost every SSD, and I'll make my decision in a couple of months =)
GourdFreeMan - Wednesday, September 2, 2009 - link
It isn't so much that SSDs make a bad storage server, but rather that you can't neglect to make periodic backups, as with any type of storage, if your data has great monetary or sentimental value. In addition to backups, RAID (1-6) is also an option if cost is no object and you want to use SSDs for long term storage in a running server. Database servers are a little more complicated, but SSDs can be an intelligent choice there as well if your usage patterns aren't continuous heavy small (i.e. <= 4K) writes.
I plan on getting a G2 myself for my laptop after Intel updates the firmware to support TRIM and Anand reviews the effects in Windows 7, and I have already been using an Indilinx-based SLC drive in my home server.
If you do anything that stresses your hard drive(s), or just like snappy boot times and application load times you will probably be impressed by the speeds of a new SSD. The cost per GB and lack of long term reliability studies are really the only things holding them back from taking the storage market by storm now.
ninevoltz - Thursday, September 17, 2009 - link
GourdFreeMan could you please continue your explanation? I would like to learn more. You have really dived deeply into the physical properties of these drives.
GourdFreeMan - Tuesday, September 1, 2009 - link
Minor correction to the second paragraph in my post above -- "each bit is only has" should read "each representation only has" in the last sentence.
philosofool - Monday, August 31, 2009 - link
Nice job. This has been a great series.
I'm getting a SSD once I can get one at $1/GB. I want a system/program files drive of at least 80GB and then a conventional HDD (a tenth of the cost/GB) for user data.
Would keeping user data on a conventional HDD affect these results? It would seem like it wouldn't, but I would like to see the evidence.
I would really like to see more benchmarks for these drives that aren't synthetic. Have you tried things like Crysis or The Witcher load times? (Both seemed to me to have pretty slow loads for maps.) I don't know if these would be affected, but as real world applications, I think it makes sense to try them out.
Anand Lal Shimpi - Monday, August 31, 2009 - link
Personally I keep docs on my SSD but I keep pictures/music on a hard drive. Neither gets touched all that often in the grand scheme of things, but one is a lot smaller :)
In The SSD Anthology I looked at Crysis load times. Performance didn't really improve when going to an SSD.
Take care,
Anand
Eeqmcsq - Monday, August 31, 2009 - link
I would have thought that the read speed of an SSD would have helped cut down some of the compile time. Is there any tool that lets you analyze disk usage vs cpu usage during the compile time, to see what percentage of the compile was spent reading/writing to disk vs CPU processing?
Is there any way you can add a temperature test between an HDD and an SSD? I read a couple of Newegg reviews that say their SSDs got HOT after use, though I think that may have just been 1 particular brand that I don't remember. Also, there was at least one article online that tested an SSD vs an HDD and the SSD ran a little warmer than the HDD.
Also, garbage collection does have one advantage: It's OS independent. I'm still using Ubuntu 8.04 at work, and I'm stuck on 8.04 because my development environment WORKS, and I won't risk upgrading and destabilizing it. A garbage collecting SSD would certainly be helpful for my system... though your compiling tests are now swaying me against an SSD upgrade. Doh!
And just for fun, have you thought about running some of your benchmarks on a RAM drive? I'd like to see how far SSDs and SATA have to go before matching the speed of RAM.
Finally, any word from JMicron and their supposed update to the much "loved" JMF602 controller? I'd like to see some non-stuttering cheapo SSDs enter the market and really bring the $$$/GB down, like the Kingston V-series. Also, I'd like to see a refresh in the PATA SSD market.
"Am I relieved to be done with this article? You betcha." And I give you a great THANK YOU!!! for spending the time working on it. As usual, it was a great read.
Per Hansson - Monday, August 31, 2009 - link
Photofast have released Indilinx based PATA drives: http://www.photofastuk.com/engine/shop/category/G-...
aggressor - Monday, August 31, 2009 - link
What ever happened to the price drops that OCZ announced when the Intel G2 drives came out? I want 128GB for $280!