The SSD Relapse: Understanding and Choosing the Best SSD
by Anand Lal Shimpi on August 30, 2009 12:00 AM EST, posted in Storage
Intel's X25-M 34nm vs 50nm: Not as Straight Forward As You'd Think
It took me a while to understand exactly what Intel did with its latest drive, mostly because there are no docs publicly available on either the flash used in the drives or on the controller itself. Intel is always purposefully vague about important details, leaving everything up to clever phrasing of questions and guesswork with tests and numbers before you truly uncover what's going on. But after weeks with the drive, I think I've got it.
| | X25-M Gen 1 | X25-M Gen 2 |
|---|---|---|
| Flash Manufacturing Process | 50nm | 34nm |
| Flash Read Latency | 85 µs | 65 µs |
| Flash Write Latency | 115 µs | 85 µs |
| Random 4KB Reads | Up to 35K IOPS | Up to 35K IOPS |
| Random 4KB Writes | Up to 3.3K IOPS | Up to 6.6K IOPS (80GB), up to 8.6K IOPS (160GB) |
| Sequential Read | Up to 250MB/s | Up to 250MB/s |
| Sequential Write | Up to 70MB/s | Up to 70MB/s |
| Halogen-free | No | Yes |
| Introductory Price | $345 (80GB), $600 - $700 (160GB) | $225 (80GB), $440 (160GB) |
The old X25-M G1
The new X25-M G2
Moving to 34nm flash let Intel drive the price of the X25-M to ultra competitive levels. It also gave Intel the opportunity to tune controller performance a bit. The architecture of the controller hasn't changed, but it is technically a different piece of silicon (that happens to be Halogen-free). What has changed is the firmware itself.
The old controller
The new controller
The new X25-M G2 has twice as much DRAM on-board as the previous drive. The old 160GB drive used a 16MB Samsung 166MHz SDRAM (CAS3):
Goodbye Samsung
The new 160GB G2 drive uses a 32MB Micron 133MHz SDRAM (CAS3):
Hello Micron
More memory means that the drive can track more data and do a better job of keeping itself defragmented and well organized. We see this reflected in the "used" 4KB random write performance, which is around 50% higher than the previous drive.
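To see why the extra DRAM matters, here's a minimal sketch of the kind of bookkeeping an SSD controller keeps in DRAM: a cache of logical-to-physical map entries. This is purely illustrative - Intel doesn't publish how its controller uses its DRAM - and the entry size, workload, and LRU policy below are all assumptions. Only the 16MB/32MB figures come from the drives themselves.

```python
import random
from collections import OrderedDict

class MappingCache:
    """Toy logical-to-physical map cache. The controller's DRAM holds a limited
    number of map entries; a miss forces an extra flash read to fetch the
    mapping before the real I/O can be serviced."""

    def __init__(self, dram_bytes, entry_size=8):      # entry_size is a guess
        self.capacity = dram_bytes // entry_size       # entries that fit in DRAM
        self.cache = OrderedDict()                     # logical page -> physical page
        self.misses = 0

    def lookup(self, lpn):
        if lpn in self.cache:
            self.cache.move_to_end(lpn)                # LRU touch
            return self.cache[lpn]
        self.misses += 1                               # would cost a flash read
        self.cache[lpn] = lpn                          # pretend we fetched the entry
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)             # evict the coldest entry
        return self.cache[lpn]

random.seed(0)
for dram in (16 * 1024**2, 32 * 1024**2):              # G1: 16MB DRAM, G2: 32MB DRAM
    ftl = MappingCache(dram)
    for _ in range(2_000_000):
        ftl.lookup(random.randrange(8_000_000))        # hot 32GB region of 4KB pages
    print(f"{dram // 1024**2:>2}MB DRAM: {ftl.misses:,} map misses")
```

The only point being made: with twice the DRAM, twice as much of the map stays resident, so the controller spends less time re-reading its own metadata while it shuffles data around.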
Intel is now using 16GB flash packages instead of the 8GB packages found in the original drive; the 160GB G2 gets there with ten packages on one side of the PCB. Once 34nm production really ramps up, Intel could outfit the back of the PCB with 10 more chips and deliver a 320GB drive. I wouldn't expect that anytime soon though.
The old X25-M G1
The new X25-M G2
Low-level performance of the new drive ranges from no improvement at all to a significant gain, depending on the test:
Note that these results are a bit different than my initial preview. I'm using the latest build of Iometer this time around, instead of the latest version from iometer.org. It does a better job filling the drives and produces more reliable test data in general.
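For readers unfamiliar with what a "4KB random write" run actually does: Iometer issues small transfers at random, aligned offsets and counts how many complete per second. The Python sketch below is only a rough, file-backed approximation of the idea (Iometer runs against the raw drive with queued I/O; the file name, region size, and duration here are made up):

```python
import os, random, time

PATH = "iometer_like.bin"   # hypothetical scratch file; Iometer targets the drive itself
SPAN = 1 * 1024**3          # exercise a 1GB region
BLOCK = 4096                # 4KB transfers
SECONDS = 10

fd = os.open(PATH, os.O_RDWR | os.O_CREAT)   # os.pwrite requires a Unix-like OS
os.ftruncate(fd, SPAN)
buf = os.urandom(BLOCK)

ios, deadline = 0, time.time() + SECONDS
while time.time() < deadline:
    offset = random.randrange(SPAN // BLOCK) * BLOCK   # 4KB-aligned random offset
    os.pwrite(fd, buf, offset)
    os.fsync(fd)            # push it to the drive; otherwise the OS cache absorbs writes
    ios += 1
os.close(fd)

print(f"{ios / SECONDS:,.0f} random 4KB write IOPS (file-level approximation)")
```

Real benchmarks keep many requests in flight and bypass the filesystem cache, which is why their numbers are both higher and more repeatable than anything a quick script like this produces.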
The trend however is clear: the new G2 drive isn't that much faster. In fact, the G2 is slower than the G1 in my 4KB random write test when the drive is brand new. The benefit however is that the G2 doesn't drop in performance when used...at all. Yep, you read that right. In the most strenuous case for any SSD, the new G2 doesn't even break a sweat. That's...just...awesome.
The rest of the numbers are pretty much even, with the exception of 4KB random reads where the G2 is roughly 11% faster.
I continue to turn to PCMark Vantage as the closest indication to real world performance I can get for these SSDs, and it echoes my earlier sentiments:
When brand new, the G1 and the G2 are very close in performance. There are some tests where the G2 is faster, others where the G1 is faster. The HDD suite shows the true potential of the G2 and even there we're only looking at a 5.6% performance gain.
It's in the used state that we see the G2 pull ahead a bit more, but still not drastically. The advantage in the HDD suite is around 7.5%, but the rest of the tests are very close. Obviously the major draw to the 34nm drives is their price, but that can't be all there is to it...can it?
The new drives come with TRIM support, albeit not out of the box. Sometime in Q4 of this year, Intel will offer a downloadable firmware that enables TRIM on only the 34nm drives. TRIM on these drives will perform much like TRIM does on the OCZ drives using Indilinx' manual TRIM tool - in other words, restoring performance to almost new.
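The reason TRIM restores performance is easy to show with a toy model. This is not Intel's (undisclosed) firmware logic, just the textbook garbage-collection argument: once the OS tells the drive which pages are dead, the controller no longer has to copy them when it erases a block.

```python
import random

PAGES_PER_BLOCK = 128

class Block:
    def __init__(self):
        self.valid = set()      # logical pages still live in this block
        self.stale = set()      # pages overwritten or TRIMmed away

def garbage_collect(block):
    """Erasing a block first requires relocating its still-valid pages;
    every relocated page is extra flash wear and extra latency."""
    copies = len(block.valid)
    block.valid.clear()
    block.stale.clear()
    return copies

def simulate(trim_enabled):
    block = Block()
    block.valid.update(range(PAGES_PER_BLOCK))           # block starts full of data
    deleted = random.sample(range(PAGES_PER_BLOCK), 96)  # the user deletes 75% of it
    if trim_enabled:
        # The OS tells the drive which pages are dead, so GC won't copy them.
        for page in deleted:
            block.valid.discard(page)
            block.stale.add(page)
    return garbage_collect(block)

print("pages copied during GC without TRIM:", simulate(False))  # 128
print("pages copied during GC with TRIM:   ", simulate(True))   # 32
```

Fewer forced copies means lower write amplification, and less of the performance droop the G1 shows once its spare area fills up with stale data.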
Because it can more or less rely on being able to TRIM invalid data, the G2 firmware is noticeably different from what's used in the G1. In fact, if we slightly modify the way I tested in the Anthology, I can actually get the G1 to outperform the G2 even in PCMark Vantage. In the Anthology, to test the used state of a drive I would first fill the drive and then restore my test image onto it. The restore process helped fragment the drive and made sure the spare area got some use as well. If we take the same approach but perform a clean Windows install instead of imaging the drive, we end up with a much more fragmented state. It's not a situation you should ever encounter, since a fresh install of Windows should be performed on a clean, secure-erased drive, but it does give me an excellent way to show exactly what I'm talking about with the G2:
| | PCMark Vantage (New) | PCMark Vantage HDD (New) | PCMark Vantage (Fragmented + Used) | PCMark Vantage HDD (Fragmented + Used) |
|---|---|---|---|---|
| Intel X25-M G1 | 15496 | 32365 | 14921 | 26271 |
| Intel X25-M G2 | 15925 | 33166 | 14622 | 24567 |
| G2 Advantage | 2.8% | 2.5% | -2.0% | -6.5% |
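For reference, the "G2 Advantage" row is simply the ratio of the two drives' scores; a quick check against the numbers above:

```python
g1 = {"Vantage (New)": 15496, "HDD (New)": 32365,
      "Vantage (Fragmented + Used)": 14921, "HDD (Fragmented + Used)": 26271}
g2 = {"Vantage (New)": 15925, "HDD (New)": 33166,
      "Vantage (Fragmented + Used)": 14622, "HDD (Fragmented + Used)": 24567}

for test, g1_score in g1.items():
    advantage = (g2[test] / g1_score - 1) * 100    # positive means the G2 is faster
    print(f"{test}: {advantage:+.1f}%")            # +2.8%, +2.5%, -2.0%, -6.5%
```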
Something definitely changed with the way the G2 handles fragmentation; it doesn't deal with it as elegantly as the G1 did. I don't believe this is a step backwards though. Intel is clearly counting on TRIM to keep the drive from ever getting to the point that the G1 could get to, and that tradeoff is most likely what's responsible for the G2's ability to maintain very high random write speeds even while used. I should mention that even without TRIM it's unlikely that the G2 will get to this performance state where it's actually slower than the G1; the test just helps to highlight that there are significant differences between the drives.
Overall the G2 is the better drive, but it's support for TRIM that will ultimately ensure that. The G1 will degrade in performance over time; the G2 will only lose performance as you fill it with real data. I wonder what else Intel has decided to add to the new firmware...
I hate to say it but this is another example of Intel only delivering what it needs to in order to succeed. There's nothing that keeps the G1 from also having TRIM other than Intel being unwilling to invest the development time to make it happen. I'd be willing to assume that Intel already has TRIM working on the G1 internally and it simply chose not to validate the firmware for public release (an admittedly long process). But from Intel's perspective, why bother?
Even the G1, in its used state, is faster than the fastest Indilinx drive. In 4KB random writes the G1 is even faster than an SLC Indilinx drive. Intel doesn't need to touch the G1; the only thing faster than it is the G2. Still, I do wish that Intel would be generous to the loyal customers who shelled out $600 for the first X25-M. It just seems like the right thing to do. Sigh.
295 Comments
Wwhat - Sunday, September 6, 2009
If you read even the first part of the article you would see how important a good controller is in an SSD, and you probably wouldn't ask this question. Plus, SSDs use their flash in parallel where a bunch of USB drives would not; the parallelism is also mentioned in the article. And USB actually has a lot of overhead on the system, both in CPU cycles and in I/O interrupts.
There are plug-in PCI(e) cards to stick SD cards into for a similar setup, but it's a bit of a hack, and between the overhead, the management and controllers used, and the price of buying many SD cards, it's not competitive in the end; you're better off with a real SSD, I'm told.
Transisto - Sunday, September 6, 2009
You are right, the controller is very important.
I think caching about 4-8GB of the most often accessed program files has the best price/performance ratio for improving application load time. It is also very easily scalable.
One of the problems I see is integrating this SSD cache into the OS, or before booting, so it acts where it matters the most.
I think there could be a near-X25-M speedup from optimized caching and a good controller, no matter what SSD form factor it relies on: SD, CF, USB, PCI or onboard.
Why does it seem nobody talks about eBoostr-type caching? And, in other news, Intel's Braidwood flash memory module could kill the SSD market.
I am quite a performance seeker.
But I don't think I need 80GB of SSD in my desktop, just some 8GB of good caching. Maybe a 60GB SSD on a laptop.
Well... I'm gonna pay for that controller once, not twice (160GB?)
Wwhat - Saturday, September 5, 2009
Not that it's not a good article, although it does seem like two articles in one, but what I miss is getting down to brass tacks regarding the filesystem used: why isn't there an SSD-specific filesystem, and what choices can be made during formatting with regard to block size? Obviously if you select large blocks at the filesystem level that would impact the performance of the garbage collection, right? From reading this it actually seems the author never delved very deeply into filesystems.
The thing is that even with large blocks at the filesystem level, the system might still use small segments for the actual bookkeeping, and if it needs to write small bits to keep track of large blocks you'd still have issues. That's why I say a specific SSD filesystem might be good, but only if there isn't a new form of SSD in the near future that makes the effort pointless; and if a filesystem for SSDs were made, then the firmware should not try to compensate for existing filesystem issues.
I read that the SD people selected exFAT as the filesystem for their next generation, and that also makes me wonder: is that just down to licensing costs, or is NTFS bad for flash-based devices?
Point being that the filesystem needs to be highlighted more, I think.
Bolas - Friday, September 4, 2009
Would someone please hit Dell with the clue-board and convince them to offer the Intel SSDs in their Alienware systems? The Samsung SSDs are all that is stopping me from buying an Alienware laptop at the moment.

EatTheMeat - Friday, September 4, 2009
Congratulations on another fab masterclass. This is easily the best educational material on the internet regarding SSDs, and contrary to some comments, I think you've pitched your recommendations just right. I can also appreciate why you approached this article with some trepidation. Bravo.
I have a RAID question for Anand (or anyone else who feels qualified :-))
I'm thinking of setting up two 160GB X25-M G2 drives in RAID-0 for Win 7. I'd simply use the ICH10R controller for it. It's not so much to increase performance but rather to increase capacity and make sure each drive wears equally. After considering it further I'm wondering if SSD RAID is wise. First there's the eternal question of stripe size and write amplification. It makes sense to me to set the stripe size to be the same as, or a fraction of, the block size of the SSD. If you choose the wrong stripe size, does it influence write amplification?
I'm aware that performance should increase with larger stripes, but I'm more concerned about what's healthy for the SSD.
Do you think I should just let SSD RAID wait until RAID drivers are optimised for SSDs?
I know you're planning a RAID article for SSDs - I for one look forward to it greatly. I've read all your other SSD articles like four times!
Bolas - Friday, September 4, 2009
If SSDs in RAID lose the benefit of the TRIM command, then you're shooting yourself in the foot by setting them up in RAID. If you need more capacity, wait for the Intel 320GB SSDs next year. Or better yet, use a 160GB drive for your boot drive, then set up some traditional hard disk drives in RAID for your storage requirements.

EatTheMeat - Friday, September 4, 2009
Thanks for the reply. I definitely hear you about the TRIM functionality, as I doubt RAID drivers will pass it through before 2010. Still, judging from Anand's graphs, it doesn't look like the G2s drop much in performance with use anyway. With regard to waiting for 320GB drives - I can't. These things are just too enticing, and you could always say that technology will be better / faster / cheaper next year. I've decided to take the plunge now as I'm fed up with an i7 965 booting and loading apps / games like a snail, even from a RAID drive.
I just don't want to bugger the SSDs up with loads of write amplification / fragmentation due to RAID-0. I.e., is RAID-0 bad for the health of SSDs the way defragmentation / prefetch is? I wonder if anyone knows the answer to this question yet.
jagreenm - Saturday, September 5, 2009
What about just using Windows drive spanning for two 160s?

EatTheMeat - Saturday, September 5, 2009
As far as I know, drive spanning doesn't even out the wear between the discs; it just fills up one drive and then the other. That matters with SSDs because RAID can really help reduce drive wear by spreading all reads and writes across two drives. In fact, it should more than halve drive wear, as both drives will have large scratch portions. Not so with spanning, as far as I know.
Does anyone know if I'm talking sh1t here? :-)
pepito - Monday, November 16, 2009
If you are not sure, then why do you assert such things?
I don't know about Windows, but at least in Linux, when using LVM2 or RAID-0, writes are spread evenly across all block devices.
That means you get twice the speed and better drive wear.
I would like to think that Microsoft's implementation works more or less the same way, as this is completely logical (but then again, it's Microsoft, so who can really know?).