AnandTech Storage Bench Results

Finding good, real-world storage benchmarks is next to impossible. Synthetic tests work up to a certain point, but you need real-world examples. Measuring performance in individual applications often ends up with charts where all of the drives perform identically. Benchmark suites like PCMark Vantage are the best we can do, but in my opinion they are a bit too easy on these drives, and while representative of a real-world environment, they aren't necessarily representative of all real-world environments.

Recently we’ve been able to get our hands on a piece of software that allows us to record all disk activity on a machine and then play it back on any other machine. The point is that we can now model a real world usage scenario without waiting for BAPCo or Futuremark to do it for us. The performance results are reported in I/O Operations per Second (IOPS).
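Conceptually, the playback half of such a tool is simple: step through every recorded access and reissue it against the target drive. The sketch below is a hypothetical illustration only; the trace format and the read/write callbacks are assumptions, not the actual recording software:

```python
import time

# Hypothetical trace format: each record is (offset_bytes, length_bytes, is_write).
# A real trace would also carry timestamps and queue information.
def replay_trace(trace, read_fn, write_fn):
    """Replay every recorded disk access and report throughput in IOPS."""
    start = time.perf_counter()
    for offset, length, is_write in trace:
        if is_write:
            write_fn(offset, length)   # reissue the recorded write
        else:
            read_fn(offset, length)    # reissue the recorded read
    elapsed = time.perf_counter() - start
    return len(trace) / elapsed        # I/O operations per second
```

Feeding, say, the light trace's 37,501 reads and 20,268 writes through callbacks that hit the drive under test would yield the IOPS figures reported in the charts.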

The first test in our new benchmark suite is a light-usage case. The Windows 7 system is loaded with Firefox, Office 2007 and Adobe Reader, among other applications. In Firefox we browse web pages like Facebook, AnandTech, Digg and other sites. Outlook is also running; we use it to check email and to create and send a message with a PDF attachment. Adobe Reader is used to view some PDFs. Excel 2007 is used to create a spreadsheet and graphs and to save the document; the same goes for Word 2007. We open and step through a PowerPoint 2007 presentation received as an email attachment before saving it to the desktop. Finally, we watch a bit of a Firefly episode in Windows Media Player 11.

There’s some level of multitasking going on here but it’s not unreasonable by any means. Generally the application tasks proceed linearly, with the exception of things like web browsing which may happen in between one of the other tasks.

The recording is played back on each of the drives here today. Remember that we're isolating disk performance; all we're doing is playing back every single disk access that happened in that ~5 minute period of usage. The light trace is composed of 37,501 reads and 20,268 writes. Over 30% of the IOs are 4KB in size, 11% are 16KB, 22% are 32KB and approximately 13% are 64KB. Less than 30% of the operations are strictly sequential in nature. Average queue depth is 6.09 IOs.
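Composition figures like these can be derived with a single pass over the recorded trace. A minimal sketch, assuming each record carries its offset and transfer length in bytes (an assumed format, not the tool's actual one):

```python
from collections import Counter

def trace_stats(trace):
    """Summarize a trace of (offset_bytes, length_bytes) records:
    percentage breakdown of transfer sizes, plus the fraction of
    strictly sequential accesses (each starting where the previous
    one ended)."""
    sizes = Counter(length for _, length in trace)
    sequential = sum(
        1 for (prev_off, prev_len), (off, _) in zip(trace, trace[1:])
        if off == prev_off + prev_len
    )
    size_pct = {s: 100 * n / len(trace) for s, n in sizes.items()}
    seq_pct = 100 * sequential / max(len(trace) - 1, 1)
    return size_pct, seq_pct
```

Run over the light trace, a pass like this is what produces the 4KB/16KB/32KB/64KB breakdown and the sub-30% sequential figure quoted above.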

AnandTech Storage Bench - Light Application Trace

The Marvell controller is 7% slower than the Intel controller with the WD drive, and again 7% slower with the Seagate drive. The WD drive is up to 24% faster than the Seagate drive depending on the controller configuration.

If there's a light-usage case, there's bound to be a heavy one. In this test we have Microsoft Security Essentials running in the background with real-time virus scanning enabled, and we also perform a quick scan in the middle of the test. Firefox, Outlook, Excel, Word and PowerPoint are all used just as they were in the light test. We add Photoshop CS4 to the mix, opening a number of 12MP images, editing them, then saving them as highly compressed JPGs for web publishing. Windows 7's picture viewer is used to view a batch of pictures on the hard drive.

We then use 7-zip to create and extract .7z archives. Downloading also features prominently in our heavy test: we download large files from the Internet during portions of the benchmark and use uTorrent to grab a couple of torrents. Some of the applications in use are installed during the benchmark, and Windows updates are installed as well. Towards the end of the test we launch World of Warcraft, play for a few minutes, then delete the folder. This test also takes into account all of the disk accesses that happen while the OS is booting.

The benchmark is 22 minutes long and it consists of 128,895 read operations and 72,411 write operations. Roughly 44% of all IOs were sequential. Approximately 30% of all accesses were 4KB in size, 12% were 16KB in size, 14% were 32KB and 20% were 64KB. Average queue depth was 3.59.
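Average queue depth is the one statistic that can't be read off individual records; it has to be reconstructed by time-weighting the number of outstanding I/Os. A rough sketch, assuming each I/O carries hypothetical issue and completion timestamps (idle periods count toward the average here, which may differ from how any given tool weights them):

```python
def avg_queue_depth(ios):
    """ios: list of (issue_time, complete_time) pairs, in seconds.
    Returns the time-weighted average number of outstanding I/Os."""
    events = []
    for issue, complete in ios:
        events.append((issue, +1))      # I/O enters the queue
        events.append((complete, -1))   # I/O leaves the queue
    events.sort()                       # ties: completions before issues
    depth = 0
    area = 0.0                          # integral of depth over time
    prev_t = events[0][0]
    for t, delta in events:
        area += depth * (t - prev_t)
        depth += delta
        prev_t = t
    total = events[-1][0] - events[0][0]
    return area / total if total else 0.0
```

Two fully overlapping I/Os average out to a depth of 2; two back-to-back I/Os average out to 1, which is the intuition behind figures like the 3.59 quoted above.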

AnandTech Storage Bench - Heavy Application Trace

The Marvell controller is 19% slower than the Intel controller with the WD drive and 18% with the Seagate drive. The WD drive is up to 28% faster than the Seagate drive.

Our final test focuses on actual gameplay in four 3D games: World of Warcraft, Batman: Arkham Asylum, Far Cry 2 and Risen, in that order. The games are launched and played for a total of just under 30 minutes. The benchmark measures game load time, level load time, disk accesses from saved games and normal data streaming during gameplay.

The gaming trace is made up of 75,206 read operations and only 4,592 write operations. Only 20% of the accesses are 4KB in size, nearly 40% are 64KB and 20% are 32KB. A whopping 69% of the IOs are sequential, meaning this is predominantly a sequential read benchmark. The average queue depth is 7.76 IOs.

AnandTech Storage Bench - Gaming Trace

Not much else to say here: the Marvell controller is 6% slower than the Intel controller with the WD drive and 9% slower with the Seagate drive. The WD drive is up to 8% faster than the Seagate drive.

55 Comments
  • ipay - Monday, November 2, 2009 - link

    Buying or making a custom cable is the exact opposite of "easy" for people who expected the drive to work as advertised.
  • SpatulaCity - Friday, October 30, 2009 - link

    I always thought there were 8 bits in a byte.
    6Gbps / 8 = 750MBps

    am I confused or missing something?
  • peldor - Friday, October 30, 2009 - link

    8/10 encoding.

    http://en.wikipedia.org/wiki/10/8_encoding
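For reference, the arithmetic behind the exchange above: SATA uses 8b/10b encoding, so every data byte is transmitted as 10 bits on the wire, which is why a 6Gbps link tops out at 600MB/s of payload rather than 6000/8 = 750MB/s:

```python
LINE_RATE_MBPS = 6000      # SATA 6Gbps signaling rate, in megabits per second
WIRE_BITS_PER_BYTE = 10    # 8b/10b encoding: 8 data bits travel as 10 line bits

effective_mb_per_s = LINE_RATE_MBPS / WIRE_BITS_PER_BYTE
print(effective_mb_per_s)  # 600.0 MB/s, not the 750 MB/s a naive /8 suggests
```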
  • coyote2 - Thursday, October 29, 2009 - link

    I'd love to see how the (budget) Hitachi 7200rpm compares to the WD Caviar Black and this Seagate XT. thanks AnandTech!
  • dragunover - Thursday, October 29, 2009 - link

    Reviews around the net show this is a great performer. Why not test more hard-drives like this one?
  • Robear - Thursday, October 29, 2009 - link

    I really like the "less synthetic" IO benches, with the file transfers and all.

    Standardized benchmarks are a double-edged sword. On the one hand, a standardized benchmark can be reproduced and hence verified, which adds to its credibility. On the other hand, firmware can be tweaked to the benchmarks, skewing results and obscuring actual performance.

    I'd like to see a mix of standardized and non-standardized (i.e. "Anandtech Benches") more in the future.

    I'd like to see real-world scenarios, like game install times, game load times, software builds, anti-virus scanning, etc.
  • MadMan007 - Thursday, October 29, 2009 - link

    So SATA 6Gb/s implemented by an add-on chip will do little but add cost? Great :/
  • motigez - Thursday, October 29, 2009 - link

    Well done on the Storage Bench. In my opinion, this is the only test that should matter to the end user. I would strongly recommend keeping it simple: no heavy and light, just one combined trace, and find some way to normalize it against SSDs. That would finally let users tell which drives are better for them, which ones make a difference and are worth the premium. There is nothing we can do with pure sequential numbers... and based on my experience PCMark does not tell enough about user experience.
    Thanks AnandTech
    Reply
  • MrSpadge - Thursday, October 29, 2009 - link

    There are 2 things to take away from this:

    1. SATA 3 doesn't change anything for current HDDs. That's just as expected; they'd have done something wrong if the current SATA 2 interface were already a limit. The push to higher-performance interfaces has always preceded the need for them - and that's good, but nothing to get excited about (either positively or negatively).

    2. Seagate's performance is... "mixed", to choose the most euphemistic word. That's been the case for many years and I still don't understand why people like them.
  • chizow - Thursday, October 29, 2009 - link

    As I stated a few months ago when news of this 2TB drive broke, SATA 6.0 is hardly needed on mechanical drives as they're not even fully utilizing SATA 3.0; there's only a slight benefit in peak rates, as this bench shows.

    It's quite clear the Marvell controller is also bottlenecked by these mechanical drives and is overall less efficient than the Intel controller. The HD-to-HD transfers show the mechanical drives are typically the bottleneck, but the Intel drives' write speeds are also a bottleneck in the HD-to-SSD benches. The only scenario where the mechanical drives aren't bottlenecked is the SSD-to-HD test, where the high SSD read speeds allow for higher mechanical drive transfer rates.

    I would've liked to have seen HD-to-SSD transfer results with an SSD with faster sustained write speeds, like one of the Indilinx drives that get ~200MB/s writes. I would've also liked to have seen all SSDs tested on one of these SATA 6.0 controllers to truly isolate their performance compared to SATA 3.0. I understand it's beyond the scope of this review; it just seems like neither the controller nor the drive is up to the task of achieving potential performance.
