Silicon Motion has practically become the new SandForce. Almost every tier-three manufacturer (i.e. one with no controller/firmware IP or NAND fab) has released an SM2246EN based drive in the past ten months, and recently Silicon Motion scored two major tier-one partners (namely Micron/Crucial and SanDisk) as well. To be honest, this hasn't come as a surprise because the SM2246EN is a really solid controller with good performance and, more importantly, it's been mostly issue free (which is something that cannot be said about SandForce).

Mushkin's Reactor combines the SM2246EN with Micron's latest 128Gbit 16nm MLC NAND, and this is actually the first time I've encountered a non-Micron/Crucial SSD with Micron's 16nm NAND. That really emphasizes the advantage NAND manufacturers have: Micron has been using 16nm NAND in its own SSDs for over six months, but the company has only now begun shipping it to others in volume. I suspect the volumes are still fairly low because the Reactor comes only in a 1TB capacity, which is still fairly expensive and thus limits demand to a level that is easier to manage than the more popular 256GB and 512GB capacities. I was told that 256GB and 512GB models may follow later, but for now Mushkin will only be offering the Reactor in 1TB.

Mushkin Reactor Specifications
Capacity 1TB
Controller Silicon Motion SM2246EN
NAND Micron 128Gbit 16nm MLC
Sequential Read 560MB/s
Sequential Write 460MB/s
4KB Random Read 74K IOPS
4KB Random Write 76K IOPS
Encryption N/A
Endurance 144TB
Warranty Three years

Feature-wise, the Reactor is a fairly typical value drive. Neither hardware accelerated encryption nor DevSleep is supported, although the Reactor does support slumber power states for low idle power consumption. Endurance is a respectable 144TB, which translates to 131GB of writes per day for three years.
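For those who want to check the math, the daily write budget falls straight out of the endurance rating and warranty length; a quick sketch using the spec table's figures:

```python
# Convert the Reactor's 144TB endurance rating into a daily write
# budget over its three-year warranty (figures from the spec table).
TOTAL_ENDURANCE_TB = 144
WARRANTY_YEARS = 3

daily_writes_gb = TOTAL_ENDURANCE_TB * 1000 / (WARRANTY_YEARS * 365)
print(f"{daily_writes_gb:.1f} GB of host writes per day")  # 131.5
```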

The retail package doesn't include anything in addition to the drive itself, and Mushkin offers no software/toolbox for its SSDs.


There are sixteen NAND packages on the PCB with eight on each side. Since we are dealing with a 128Gbit (16GB) die, that translates to four dies per package. Mushkin actually does the packaging in-house (i.e. buys NAND in wafers and then does the binning and packaging), which is why the packages lack the typical Micron logo and labels.
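The capacity works out exactly: sixteen packages of four 16GB dies yield the drive's 1TB of raw NAND. A quick sketch of the arithmetic:

```python
# Sanity check of the 1TB drive's raw NAND: sixteen packages,
# four 128Gbit (16GB) dies per package.
DIE_GB = 128 // 8          # a 128Gbit die is 16GB
DIES_PER_PACKAGE = 4
PACKAGES = 16

raw_gb = PACKAGES * DIES_PER_PACKAGE * DIE_GB
print(f"{raw_gb} GB of raw NAND")  # 1024 GB
```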

Test Systems

For AnandTech Storage Benches, performance consistency, random and sequential performance, performance vs. transfer size, and load power consumption we use the following system:

CPU Intel Core i5-2500K running at 3.3GHz (Turbo & EIST enabled)
Motherboard ASRock Z68 Pro3
Chipset Intel Z68
Chipset Drivers Intel 9.1.1.1015 + Intel RST 10.2
Memory G.Skill RipjawsX DDR3-1600 4 x 8GB (9-9-9-24)
Video Card Palit GeForce GTX 770 JetStream 2GB GDDR5 (1150MHz core clock; 3505MHz GDDR5 effective)
Video Drivers NVIDIA GeForce 332.21 WHQL
Desktop Resolution 1920 x 1080
OS Windows 7 x64

Thanks to G.Skill for the RipjawsX 32GB DDR3 DRAM kit

For slumber power testing we used a different system:

CPU Intel Core i7-4770K running at 3.3GHz (Turbo & EIST enabled, C-states disabled)
Motherboard ASUS Z87 Deluxe (BIOS 1707)
Chipset Intel Z87
Chipset Drivers Intel 9.4.0.1026 + Intel RST 12.9
Memory Corsair Vengeance DDR3-1866 2x8GB (9-10-9-27 2T)
Graphics Intel HD Graphics 4600
Graphics Drivers 15.33.8.64.3345
Desktop Resolution 1920 x 1080
OS Windows 7 x64

69 Comments


  • Samus - Monday, February 9, 2015 - link

    The other performance "limitation" is program/file demands. The vast majority of files (program data, video files, game data) are under 5000MB in size, so the difference between a 500MB/sec and a 1000MB/sec drive is a matter of seconds, and sometimes milliseconds when reading files <1000MB.

    The spotlight for my SSD is loading Battlefield 4 levels. I noticed no difference when going from a SATA3 drive to an M.2 PCIe drive. The limitation is somewhere else, possibly my two-year-old Xeon CPU? Who knows. But the only place I notice a difference is when unRARing lots of huge files. Since Windows 8.1 64-bit only loads about 800MB of data from the drive during boot (that's what I measured in NAND reads between reboots), again, the difference between a 500MB/sec and a 1000MB/sec drive is virtually nothing for everyday computing.

    The demand for faster SSDs will eventually come in the consumer space (it has been there in the enterprise space for years), but it probably won't come from Microsoft, as their OSes get leaner every generation. Windows 8 had lower system requirements than Windows 7, and Windows 7 had lower system requirements than Vista. And Windows 10 will run on virtually anything, even Intel's Quark-based SoC dev platform (400MHz, P54C 'Pentium'-class).
  • Uplink10 - Wednesday, February 11, 2015 - link

    It is true that Windows 8 has lower system requirements, but I tested the latest Windows 8.1 with updates against Windows 7 SP1 with updates and found that Windows 8.1 is more RAM hungry; since switching to Windows 8.1 I keep getting messages about closing applications because I am too low on RAM. The real potential of SSDs over HDDs is random reading/writing, not sequential. For me 6Gbit/s of sequential bandwidth is enough because I care about random read/write performance, which doesn't come close to 6Gbit/s.
  • Christopher1 - Sunday, February 15, 2015 - link

    One of your applications must be seriously RAM-hungry then. I have 4 or 5 browsers open and a few other programs running at the same time, with the page file turned off on an 8GB RAM system, and it never squawks about needing more RAM.
    Now, when I open a serious game like Arkham City, THEN it starts squawking from time to time, because AAA games are seriously RAM-hungry.
  • Sabresiberian - Monday, February 9, 2015 - link

    You make a very good point Solandri, but the limit of the PCIe interface is far higher than the 800 MB/s figure you used; that is pretty much a bottom-end stat. Also, NVMe>AHCI. :)
  • Solandri - Tuesday, February 10, 2015 - link

    Mathematically, PCIe can *never* speed things up more than the jump from SATA2 to SATA3. If you look carefully at the chart I made, going from SATA2 to SATA3 sped the 1GB read by 2 sec (from 4 sec to 2 sec).

    Since the total read time over SATA3 is already 2 sec, the only way to cut it by another 2 sec is to go up to infinite MB/s. Even if you used the 8 GB/s limit of PCIe x16, the read time would be 0.125 sec, a 1.875 sec speedup: less than the 2 sec you saved going from SATA2 to SATA3.

    The vast majority of the speed gains that can be gotten from SSDs (with respect to reads/writes that benefit from PCIe) have already been gotten. Further improvements will be nice, but they will never impact computing to the degree that the initial SATA SSDs did. Read my comment to bug77 below for where we should be looking for significant speed gains next.
  • Christopher1 - Sunday, February 15, 2015 - link

    Actually, there is a way to speed things up: an architecture that can handle bigger 'chunks' of data at a time. As games and other things get bigger, you need to load more data in a shorter period, so the rate at which data gets from the drive to RAM is the bottleneck, and there is room for improvement there.
    Either by pre-loading data into RAM (inefficient) or by making data travel from the hard drive/SSD to RAM faster when needed/called.
  • bug77 - Tuesday, February 10, 2015 - link

    And that's not even the main advantage of SSDs. The thing the user notices the most is the nearly non-existent seek time. But you get that even from first generation of SSDs.
    As a refresher, try to remember what happened when you started multiple, parallel, copy operations on a single HDD: the transfer rate took a serious nose-dive and it was all because of the seek time.
  • Tom Womack - Tuesday, February 10, 2015 - link

    This. And it has wonderful second-order consequences; for example, you can still use your computer while it's doing a backup, so it's reasonable to schedule extremely frequent backups to hard disc.
  • Solandri - Tuesday, February 10, 2015 - link

    Yup, exactly this. Non-queued 4k read/writes are still mired around 30-70 MB/s due to limitations in seek time (generally imposed by file system overhead). While this is nearly two orders of magnitude faster than HDDs, it's still short of the SATA3 limit by an order of magnitude. So there's still a lot of improvement (time savings) which can be made on this front.

    Another way to think of it is that these 4k read/writes impact the time you spend waiting a *lot* more than the sequential read/writes. Again, because MB/s is the inverse of time you wait, the bigger MB/s figures matter less, the smaller MB/s figures matter more. Precisely the opposite of what MB/s seems to imply. e.g. If you need to read 1000 MB of sequential data + 1000 MB of 4k files over SATA3:

    2 sec = 1000 MB of sequential data @ 500 MB/s
    20 sec = 1000 MB of 4k data @ 50 MB/s

    So 90% of the total read time depends on the 4k read speed, only 10% depends on the peak sequential read everyone obsesses over. If you want a "fast" SSD, concentrate on getting a drive whose *smallest* MB/s figures (almost always the 4k speeds) are higher than the competition's.
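Solandri's arithmetic above is easy to verify; a quick sketch using the same round interface figures (these are the thread's illustrative numbers, not benchmark results):

```python
# Time to read 1000MB sequentially at the round interface speeds
# used in the comment above (illustrative figures, not benchmarks).
for name, mb_s in [("SATA2", 250), ("SATA3", 500), ("PCIe x16", 8000)]:
    print(f"{name}: {1000 / mb_s:.3f} s")

# Mixed workload over SATA3: 1000MB sequential + 1000MB of 4k files.
seq_s = 1000 / 500    # 2 s
rand_s = 1000 / 50    # 20 s
print(f"4k share of total time: {rand_s / (seq_s + rand_s):.0%}")  # 91%
```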
  • Christopher1 - Sunday, February 15, 2015 - link

    SATA3 does cause a bit of an issue. It is the limiting factor on most drives now: no matter how much faster you make your drive, the interface's speed limit bottlenecks it. That is why M.2 is becoming so popular.
