The SSD Relapse: Understanding and Choosing the Best SSD
by Anand Lal Shimpi on August 30, 2009 12:00 AM EST - Posted in Storage
Live Long and Prosper: The Logical Page
Computers are all about abstraction. In the early days of computing you had to write assembly code to get your hardware to do anything. Programming languages like C and C++ created a layer of abstraction between the programmer and the hardware, simplifying the development process. The key word there is simplification. You can be more efficient writing directly for the hardware, but it’s far simpler (and much more manageable) to write high level code and let a compiler optimize it.
The same principles apply within SSDs.
The smallest writable location in NAND flash is a page; that doesn’t mean that it’s the largest size a controller can choose to write. Today I’d like to introduce the concept of a logical page, an abstraction of a physical page in NAND flash.
Confused? Let's start with a hopefully helpful diagram (I'm no artist):
On one side of the fence we have how the software views storage: as a long list of logical block addresses (LBAs). It's a bit more complicated than that, since a traditional hard drive is faster at certain LBAs than others, but to keep things simple we'll ignore that.
On the other side we have how NAND flash stores data, in groups of cells called pages. These days a 4KB page size is common.
In reality there's no fence separating the two; rather, there's a lot of logic, several buses and, eventually, the SSD controller. The latter determines how the LBAs map to the NAND flash pages.
The most straightforward way for the controller to write to flash is by writing in pages. In that case the logical page size would equal the physical page size.
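To make the mapping idea a bit more concrete, here's a toy sketch of page level mapping in Python. It's purely illustrative (the structure names, sizes and functions are mine, not anything from a real controller's firmware), but it shows the basic bookkeeping: one table entry per logical page, and every write simply points that entry at a freshly allocated physical page.

```python
# Toy page-level mapping table: one entry per 4KB logical page.
# Purely illustrative -- a real controller does this in firmware,
# alongside wear leveling, garbage collection, caching, etc.

PAGE_SIZE = 4096                       # 4KB physical page
DRIVE_PAGES = 16                       # tiny "drive" just for the example

mapping = [None] * DRIVE_PAGES         # logical page -> physical page
free_pages = list(range(DRIVE_PAGES))  # physical pages with no valid data yet
stale_pages = []                       # old copies, reclaimable later

def write_logical_page(lpn, data):
    """Write one logical page: NAND can't be overwritten in place,
    so grab a fresh physical page and retire the old copy."""
    assert len(data) == PAGE_SIZE
    ppn = free_pages.pop(0)            # allocate a clean physical page
    if mapping[lpn] is not None:
        stale_pages.append(mapping[lpn])   # old copy becomes stale
    mapping[lpn] = ppn                 # update the table entry
    # (the actual NAND program operation would happen here)

def read_logical_page(lpn):
    """Look up where the logical page currently lives."""
    return mapping[lpn]

# A 4KB write only touches one table entry -- this is why page-level
# mapping gives such good small-write performance.
write_logical_page(3, b"\x00" * PAGE_SIZE)
write_logical_page(3, b"\xff" * PAGE_SIZE)   # rewrite: remap, old page goes stale
print(mapping[3], stale_pages)               # 1 [0]
```

Note that even a rewrite of the same logical page lands in a new physical page; the old copy just sits there stale until it's reclaimed. The table has to track all of this, which is why the number of entries matters so much.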
Unfortunately, there's a huge downside to this approach: tracking overhead. If your logical page size is 4KB, then an 80GB drive will have no less than twenty million logical pages to keep track of (20,971,520 to be exact). You need a fast controller to sort through and manage that many pages, a lot of storage to hold the mapping tables, and larger caches/buffers.
The benefit of this approach however is very high 4KB write performance. If the majority of your writes are 4KB in size, this approach will yield the best performance.
If you don't have the expertise, time or support structure to make a big honkin' controller that can handle page level mapping, you go with a larger logical page size. One such example would be making your logical page equal to an erase block (128 x 4KB pages = 512KB). This significantly reduces the number of pages you need to track and optimize around; instead of 20.9 million entries, you now have just 163,840. All of your controller's internal structures shrink in size and you don't need as powerful a microprocessor inside the controller.
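The math is easy to check. Here's a quick back-of-the-envelope calculation (the 4-bytes-per-entry figure is just my assumption for illustration; real controllers will differ):

```python
# How many logical pages does the controller have to track?
GB = 1024 ** 3
KB = 1024

drive_size = 80 * GB                  # 80GB drive (binary gigabytes)
physical_page = 4 * KB                # 4KB NAND page
erase_block = 128 * physical_page     # 128 pages = 512KB block

pages_4kb = drive_size // physical_page
pages_block = drive_size // erase_block

print(pages_4kb)    # 20,971,520 entries with 4KB logical pages
print(pages_block)  # 163,840 entries with 512KB (erase block) logical pages

# Rough mapping table sizes, assuming (hypothetically) 4 bytes per entry:
print(pages_4kb * 4 / (1024 * 1024))    # ~80 MB of mapping data
print(pages_block * 4 / (1024 * 1024))  # ~0.625 MB
```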
The benefit of this approach is very high large file sequential write performance. If you’re streaming large chunks of data, having big logical pages will be optimal. You’ll find that most flash controllers that come from the digital camera space are optimized for this sort of access pattern where you’re writing 2MB - 12MB images all the time.
Unfortunately, the sequential write performance comes at the expense of poor small file write speed. Remember that writing to MLC NAND flash already takes roughly 3x as long as reading; writing small files when your controller can only write in large logical pages makes the penalty even worse. If you want to write an 8KB file, the controller will need to write 512KB (in this case) of data, since that's the smallest size it knows how to write. Write amplification goes up considerably.
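To put a number on that, here's the same 8KB example worked through as a quick, simplified calculation (best case, ignoring any read-modify-write of other data already living in that logical page):

```python
# Write amplification when the smallest unit the controller can write
# is one 512KB logical page (simplified model).
KB = 1024
logical_page = 512 * KB    # erase-block sized logical page
host_write = 8 * KB        # the 8KB file from the example

nand_write = logical_page                  # can't write less than one logical page
amplification = nand_write / host_write
print(amplification)       # 64.0 -- 512KB of NAND writes for 8KB of user data
```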
Remember the first OCZ Vertex drive based on the Indilinx Barefoot controller? Its logical page size was equal to a 512KB block. OCZ asked for a firmware that enabled page level mapping and Indilinx responded. The result was much improved 4KB write performance:
Iometer 4KB Random Writes, IOqueue=1, 8GB sector space | Logical Page Size = 512KB (128 pages) | Logical Page Size = 4KB (1 page)
Pre-Release OCZ Vertex | 0.08 MB/s | 8.2 MB/s
295 Comments
GourdFreeMan - Tuesday, September 1, 2009 - link
You would, in fact, be incorrect. I refer you to ANSI/IEEE Std 1084-1986, which defines kilo, mega, etc. as powers of two when used to refer to sizes of computer storage. It was common practice to use such definitions in Computer Science from the 1970s until standards were changed in 1991. As many people reading Anandtech received their formal education during this time period, it is understandable that the usage is still commonplace.
Undersea - Monday, August 31, 2009 - link
Where was this article two weeks ago before I bought my OCZ Summit? I hope this little article will jump start Samsung. Thanks for all the hard work :)
FrancoisD - Monday, August 31, 2009 - link
Hi Anand,
Great article, as always. I've been following your site since the beginning and it's still the best one out there today!
I mainly use Macs these days and was wondering if you knew anything about Apple's plans for TRIM??
Thanks for all the fantastic work, very technical yet easy to understand.
François
Anand Lal Shimpi - Monday, August 31, 2009 - link
Thanks for your support over the years :)
No word on Apple's plans for TRIM yet, I am digging though...
Take care,
Anand
Dynotaku - Monday, August 31, 2009 - link
Amazing article as always, now I just need one that shows me how to install just Win 7 and my Steam folder to the SSD and move Program Files and "My Documents" or whatever it's called in Win7 to a mechanical disk.
GullLars - Monday, August 31, 2009 - link
A really great article with loads of data. I only have one complaint: the 4KB random read/write tests in Iometer were done with QD=3, which simulates a really light workload and doesn't allow the controllers to make use of the potential of all their flash channels. I've seen Intel's X25-M scale up to 130-140 MB/s of 4KB random reads @ QD=64 (medium load) with AHCI activated. I have not yet tested my Vertex SSDs or Mtron Pros, but I suspect they also scale well beyond QD=3.
It would also be useful to compare the different tests in the HDD suite in PCMark Vantage instead of only the total score.
Anand Lal Shimpi - Monday, August 31, 2009 - link
The reason I chose a queue depth of 3 is because that's, on average, what I found when I tried heavily (but realistically) loading some Windows desktop machines. I rarely found a queue depth over 5. The super high QDs are great for enterprise workloads but I don't believe they do a good job at showcasing single user desktop/notebook performance.
I agree about the individual HDD suite tests, I was just trying to cut down on the number of graphs everyone had to mow through :)
Take care,
Anand
heulenwolf - Monday, August 31, 2009 - link
Anand,
I'd like to add my thanks to the many in the comments. Your articles really do stand out in their completeness and clarity. Well done.
I'm hoping you or someone else in the forums can shed some light on a problem I'm having. I got talked into getting a Dell "Ultraperformance" SSD for my new work system last year. It's a Samsung-branded SLC SSD, 64GB capacity. As your results predict, it's really snappy when it's first loaded, and performance degrades after a few months with the drive ~3/4 full. One thing I haven't seen predicted, though, is that the drives have only lasted 6 months. The first system I received was so unstable without explanation that we convinced Dell to replace the entire machine. Since then, I'm now on my second SSD refurb replacement under warranty. In both SSD failures, the drive worked normally for ~6 months, then performance dropped to 5-10 MB/sec, Vista boot times went up to ~15 minutes, and I paid dearly in time for every single click and keypress. Once everything finally loaded, the system behaved almost normally. Dell's own diagnostics pointed to bad drives, yet, in each case, the bad SSD continued to work, just at super slow speeds. I was careful to disable Vista's automatic defrag with every install.
My IT staff has blamestormed first Vista (we're still mostly an XP shop) and now SSDs in general as the culprit. They want me to turn in the SSD and replace it with a magnetic hard drive. So, my question is how to explain this:
A) Am I that 1 in a bazillion case of having gotten a bad system, followed by a bad drive, followed by another bad drive?
B) Is there something about Vista - beyond auto defrag - that accelerates the wear and tear on these drives?
C) Is there something about Samsung's early SSD controllers that drops them to a lower speed under certain conditions (e.g. poorly implemented SMART diagnostics)?
D) Is my IT department right and all SSDs are evil ;)?
Ardax - Monday, August 31, 2009 - link
Well, first you could point them to this article to show how bad the Samsung SSDs are. Replace it with an Intel or Indilinx-based drive and you should be fine. Anecdotes so far indicate that people have been beating on them for months.
As far as configuring Vista for SSD usage, MS posted in the Engineering Windows 7 Blog about what they're doing for SSDs: [url=http://blogs.msdn.com/e7/archive/2009/05/05/suppor...]Article Link[/url].
The short version of it is this: Disable Defrag, SuperFetch, ReadyBoost, and Application and Boot Prefetching. All these technologies were created to work around the low random read/write performance of traditional HDs and are unnecessary (or unhealthy, in the case of defrag) with SSDs.
heulenwolf - Monday, August 31, 2009 - link
Thanks for the reply, Ardax. Unfortunately, the choice of SSD brand was Dell's. As Anand points out, OEM sales are where Samsung seems to have a corner on the market. The choices are: Samsung "Ultraperformance" SSD, Samsung not-so-ultraperformance SSD, magnetic HDD, or void the warranty by installing a non-Dell part. I could ask that we buy a non-Dell SSD, but since installing it would preclude further warranty support from Dell and all SSDs have become the scapegoat, I doubt my request would be accepted. Additionally, the article doesn't say much about drive reliability, which is the fundamental problem in my case.
I'll look into the linked recommendations on Win 7 and SSDs. I had already done some research on these features and found the general consensus to be that leaving any of them enabled (with the exception of defrag) should do no harm.