The SSD Relapse: Understanding and Choosing the Best SSD
by Anand Lal Shimpi on August 30, 2009 12:00 AM EST - Posted in Storage
Live Long and Prosper: The Logical Page
Computers are all about abstraction. In the early days of computing you had to write assembly code to get your hardware to do anything. Programming languages like C and C++ created a layer of abstraction between the programmer and the hardware, simplifying the development process. The key word there is simplification. You can be more efficient writing directly for the hardware, but it’s far simpler (and much more manageable) to write high level code and let a compiler optimize it.
The same principles apply within SSDs.
The smallest writable location in NAND flash is a page, but that doesn't mean a controller has to manage data at page granularity. Today I'd like to introduce the concept of a logical page, an abstraction of a physical page in NAND flash.
Confused? Let's start with a hopefully helpful diagram (I'm no artist):
On one side of the fence we have how the software views storage: as a long list of logical block addresses. It's a bit more complicated than that, since a traditional hard drive is faster at certain LBAs than others, but to keep things simple we'll ignore that.
On the other side we have how NAND flash stores data, in groups of cells called pages. These days a 4KB page size is common.
In reality there's no fence separating the two; rather, there's a lot of logic, several buses and eventually the SSD controller. The latter determines how the LBAs map to the NAND flash pages.
The most straightforward way for the controller to write to flash is by writing in pages. In that case the logical page size would equal the physical page size.
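To make that concrete, here's a minimal sketch of page-level mapping where one logical page equals one physical 4KB page. It's purely illustrative (Python, with function names, table layout and pool size of my own choosing); a real controller implements this in firmware with dedicated data structures.

    # Minimal sketch of page-level mapping: logical page size == physical page size (4KB).
    # Illustrative only; a real controller keeps this table in firmware-managed memory.
    PAGE_SIZE = 4096                      # bytes per NAND page
    SECTOR_SIZE = 512                     # bytes per LBA, as the OS addresses storage
    SECTORS_PER_PAGE = PAGE_SIZE // SECTOR_SIZE

    mapping = {}                          # logical page number -> physical page number
    free_pages = list(range(1024))        # tiny pool for illustration; an 80GB drive has ~21 million pages

    def write_4kb(lba, data):
        """Write one 4KB chunk: remap its logical page to a fresh physical page."""
        lpn = lba // SECTORS_PER_PAGE
        mapping[lpn] = free_pages.pop()   # whatever page held the old data is now stale
        # ... program 'data' into NAND at physical page mapping[lpn] ...

    def read_4kb(lba):
        """Look up which physical page currently holds this logical page."""
        return mapping.get(lba // SECTORS_PER_PAGE)

The key point is that every 4KB logical page needs its own entry in that table, which is where the tracking overhead below comes from.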
Unfortunately, there’s a huge downside to this approach: tracking overhead. If your logical page size is 4KB then an 80GB drive will have no less than twenty million logical pages to keep track of (20,971,520 to be exact). You need a fast controller to sort through and deal with that many pages, a lot of storage to keep tables in and larger caches/buffers.
The benefit of this approach however is very high 4KB write performance. If the majority of your writes are 4KB in size, this approach will yield the best performance.
If you don't have the expertise, time or support structure to make a big honkin' controller that can handle page level mapping, you go with a larger logical page size. One such example would be making your logical page equal to an erase block (128 x 4KB pages). This significantly reduces the number of pages you need to track and optimize around; instead of 20.9 million entries, you now have approximately 163 thousand. All of your controller's internal structures shrink in size and you don't need as powerful a microprocessor inside the controller.
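Here's the back-of-the-envelope arithmetic behind those two numbers (a sketch; real controllers may reserve extra entries for spare area and metadata):

    # Back-of-the-envelope: mapping-table entries for an 80GB drive
    capacity_bytes = 80 * 1024**3         # 80GB
    page_size = 4 * 1024                  # 4KB physical page
    pages_per_block = 128                 # 128 x 4KB pages = one 512KB erase block

    page_level = capacity_bytes // page_size       # logical page = 1 physical page
    block_level = page_level // pages_per_block    # logical page = 1 erase block

    print(f"{page_level:,}")              # 20,971,520 entries to track
    print(f"{block_level:,}")             # 163,840 entries to track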
The benefit of this approach is very high large file sequential write performance. If you’re streaming large chunks of data, having big logical pages will be optimal. You’ll find that most flash controllers that come from the digital camera space are optimized for this sort of access pattern where you’re writing 2MB - 12MB images all the time.
Unfortunately, the sequential write performance comes at the expense of poor small file write speed. Remember that writing to MLC NAND flash already takes roughly 3x as long as reading; writing small files when your controller only knows how to write large ones makes the penalty even worse. If you want to write an 8KB file, the controller will need to write 512KB (in this case) of data, since that's the smallest size it knows how to write. Write amplification goes up considerably.
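A quick worked example of that penalty (a sketch under the same assumptions as above; real write amplification also depends on what the controller has to do with the rest of the logical page):

    # Write amplification when an 8KB host write lands in a 512KB logical page
    logical_page = 512 * 1024             # 512KB logical page (one erase block)
    host_write = 8 * 1024                 # the 8KB file the host asked to write

    nand_written = logical_page           # smallest unit this controller knows how to write
    write_amplification = nand_written / host_write
    print(write_amplification)            # 64.0 -> 64x the requested data actually hits NAND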
Remember the first OCZ Vertex drive based on the Indilinx Barefoot controller? Its logical page size was equal to a 512KB block. OCZ asked for a firmware that enabled page level mapping and Indilinx responded. The result was much improved 4KB write performance:
Iometer 4KB Random Writes, IO queue depth = 1, 8GB sector space | Logical Page Size = 128 Pages (512KB) | Logical Page Size = 1 Page (4KB)
Pre-Release OCZ Vertex | 0.08 MB/s | 8.2 MB/s
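If you want a rough feel for this test without Iometer, here's an approximation of the workload (a hypothetical Python script; the file path and write count are my own choices, and because it goes through a filesystem rather than raw, unbuffered I/O the way Iometer does, the absolute numbers won't match the table):

    import os, random, time

    # Rough stand-in for Iometer's 4KB random write test: queue depth 1, 8GB span.
    PATH = "iometer_like_test.bin"        # hypothetical test file on the drive under test
    SPAN = 8 * 1024**3                    # 8GB of sector space
    IO_SIZE = 4096                        # 4KB writes
    COUNT = 2048                          # number of writes to time

    fd = os.open(PATH, os.O_RDWR | os.O_CREAT)
    os.ftruncate(fd, SPAN)                # sparse file covering the 8GB span
    buf = os.urandom(IO_SIZE)

    start = time.time()
    for _ in range(COUNT):
        offset = random.randrange(SPAN // IO_SIZE) * IO_SIZE
        os.pwrite(fd, buf, offset)
        os.fsync(fd)                      # queue depth 1: wait for each write to complete
    elapsed = time.time() - start
    os.close(fd)

    print(f"{COUNT * IO_SIZE / elapsed / 1e6:.2f} MB/s")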
295 Comments
minime - Tuesday, September 1, 2009 - link
Thanks for that, but still, this is not quite a real business test, right?
Live - Monday, August 31, 2009 - link
Great article! Again I might add. Just a quick question:
In the article it says all Indilinx drives are basically the same. But there are 2 controllers:
Indilinx IDX110M00-FC
Indilinx IDX110M00-LC
What's the difference?
yacoub - Monday, August 31, 2009 - link
If Idle Garbage Collection cannot be turned off, how can it be called "[Another] option that Indilinx provides its users"? If it's not optional, it's not an option. :(
Anand Lal Shimpi - Monday, August 31, 2009 - link
Well it's sort of optional since you had to upgrade to the idle GC firmware to enable it. That firmware has since been pulled and I've informed at least one of the companies involved of the dangers associated with it. We'll see what happens...
Take care,
Anand
helloAnand - Monday, August 31, 2009 - link
Anand,
The best way to test compiler performance is compiling the compiler itself ;). GCC has an enormous test suite (I/O bound) to boot. Building it on Windows is complicated, so you can try compiling the latest version on the Mac.
Anand Lal Shimpi - Monday, August 31, 2009 - link
Hmm I've never played with the gcc test suite, got any pointers you can email me? :)
Take care,
Anand
UNHchabo - Tuesday, September 1, 2009 - link
Compiling almost anything on Visual Studio also tends to be IO-bound, so you could try that as well.
CMGuy - Wednesday, September 2, 2009 - link
We've got a few big Java apps at work and the compile times are heavily I/O bound. It takes 30 minutes to build on a 15-disk SAN array (the CPU usage barely gets above 30%). Got a 160Gig G2 on order, very much looking forward to benchmarking the build on it!
CMGuy - Sunday, October 11, 2009 - link
Finally got an X25-M G2 to benchmark our builds on. What was previously a 30 minute build on a 15-disk SAN array in a server has become a 6.5 minute build on my laptop. The real plus has come when running multiple builds simultaneously. Previously 2 builds running together would take around 50 minutes to complete (not great for Continuous Integration). With the Intel SSD: 10 minutes, and the bottleneck is now the CPU. I see more cores and hyperthreading in my future...
Ipatinga - Monday, August 31, 2009 - link
Another great article about SSDs, Anand. Big as always, but this is not just an SSD review or roundup. It's an SSD class. Here are my points about some stuff:
1 - Correct me if I'm wrong, but as far as capacity goes, this is what I know:
- Manufacturers say their drive has 80GB because it actually has 80GB. GB comes from GIGA, which is a decimal unit (base 10).
- Microsoft is dumb, so Windows follows it, and while the OS says your 80GB drive has 74.5GB, it should say 80GB (GIGA). When Windows says 74.5, it should use Gi (Gibi), which is a binary unit.
- To sum up, with an 80GB drive, Windows should say it has 80GB or 74.5GiB.
- An SSD from Intel has more space than its advertised 80GB (or 74.5GiB), and that's to use as a spare area. That's it. Intel is smart for using this (since the spare area is big and does a good job for wear and performance over time).
2 - I wonder why Intel is holding back on the 320GB X25-M... only Intel knows... there must be something going on behind it...
Maybe, just maybe, like in a dream, Intel could be working on a 320GB X25-M that comes with a second controller (like a mirror of the single-sided PCB it has now). This would be awesome... like the best RAID 0 of two 160GB drives, in one X25-M.
3 - Indilinx seems to be doing a good job... even without TRIM support at its best, the garbage cleaning system is another good tool to add to an SSD. Maybe with TRIM around, the garbage cleaning will become more like an "SSD defrag".
4 - As far as the firmware update procedure on Indilinx SSDs goes, from what I know, some manufacturers use the no-jumper scheme to make the user's life easier, while others offer the jumper scheme (like G.Skill on its Falcon SSD) for better safety: if the user is using the jumper and the firmware update goes bad, the user can keep flashing the firmware without any problem. Without the jumper scheme, you'd better get lucky if things don't go well on the first try. Nevertheless, G.Skill could put the SSD pins closer to the edge... putting a jumper on those pins today is a pain in the @$$.
5 - I must ask you, Anand: did you get any huge variations in the SSD benchmarks? Even with a dirty drive, the G.Skill Falcon (I tested) sometimes performs better than when new (or after wiper). The benchmarks are Vantage, CrystalMark, HD Tach, HD Tune... very weird. Also, when in a new state, my Vantage scores are all over the place in all 8 tests... sometimes it's 0, sometimes it's 50, sometimes it's 100, sometimes it's 150 (all thousands)... very weird indeed.
6 - The SSD race today is very interesting. Goodbye Seagate and WD... kings of the HD... Welcome Intel, Super Talent, G.Skill, Corsair, Patriot, bla bla bla. OCZ is also going hard on SSDs... and I like to see that. A very big line of SSD models to choose from, and they are doing a good job with Indilinx.
7 - Samsung? Should be on the cutting edge of SSDs, but managed to lose the race on the end-user side. No firmware update system? You've gotta be kidding, right? Thank goodness for Indilinx (and Intel, but there is no TRIM for the G1... another mistake).
8 - And yes... SSDs rock (huge performance benefit on a notebook)... even though I had just one weekend with them. Forget about burst speed... SSDs crush hard drives where it matters, especially sequential read/write and low latency.
- Let me finish here... this comment is freaking big.