This year at CES, one of the interesting things at the Kingston suite was a demonstration of its new enterprise-grade DCU1000 SSDs. The current U.2 drives use four consumer-grade KC1000 M.2 SSDs behind a PCIe switch to offer up to 3.2 TB of usable capacity as well as massive aggregated random read/write performance. The capacity of the overall drive is currently limited by the M.2 drives being used.

Internally, the Kingston DCU1000 is a U.2 backplane with four M.2 slots, integrated power loss protection, and an Avago ExpressLane PEX 8725 24-lane, 10-port PCIe switch. The switch enables four M.2 drives to be used over a single U.2/SFF-8639 interface (PCIe 3.0 x4) and supports hot plugging. Kingston uses four KC1000 SSDs with custom firmware for its DCU1000 and plans to offer the U.2 drive in capacities of up to 3200 GB in the second quarter.

Kingston’s KC1000 SSDs are based on the Phison PS5007-E7 controller and planar MLC NAND. The drives are normally available in 240 GB, 480 GB and 960 GB configurations, but for the DCU1000 the manufacturer uses 800 GB KC1000 SSDs. The lower capacity suggests that Kingston allocates a significant part of the onboard NAND for overprovisioning to compensate for the use of consumer-grade MLC.
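
As a rough back-of-the-envelope check (and assuming the 800 GB modules carry the same raw NAND as the retail 960 GB KC1000, which Kingston has not confirmed), the implied spare area works out to around 20% of user capacity:

```python
# Back-of-the-envelope overprovisioning estimate (assumption: the 800 GB
# DCU1000 modules carry the same NAND as the 960 GB retail KC1000).
raw_capacity_gb = 960      # assumed capacity per module before extra overprovisioning
user_capacity_gb = 800     # capacity exposed by the DCU1000 modules

op_gb = raw_capacity_gb - user_capacity_gb
op_ratio = op_gb / user_capacity_gb

print(f"Spare area: {op_gb} GB (~{op_ratio:.0%} overprovisioning)")
# -> Spare area: 160 GB (~20% overprovisioning)
```

That is noticeably more than the roughly 7% that consumer drives typically reserve, which is consistent with propping up endurance and steady-state write performance.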

The four drives installed into one DCU1000 are presented to the host as four independent devices, but may be combined in software RAID 0 to maximize performance. Obviously, even in RAID 0 the DCU1000 cannot exceed 3.8-3.9 GB/s with sequential reads due to the PCIe 3.0 x4 interface limitation, just like any other U.2 drive. Meanwhile, Kingston advertises an aggregated read speed of 30 GB/s and an aggregated write speed of up to 27 GB/s for a 1U box containing 10 DCU1000 drives. As for random performance, Kingston indicates that the box is capable of 7M/6.6M read/write 4K IOPS (presumably at high queue depths).
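
Those box-level numbers are consistent with simple per-drive scaling; the quick sanity check below divides the advertised aggregates by ten (the per-drive figures are our inference, not Kingston's specs):

```python
# Sanity check of Kingston's aggregated 1U figures (per-drive numbers are
# simply the box-level specs divided by ten; Kingston quotes only the totals).
drives_per_box = 10

box_read_gbps = 30.0        # advertised aggregate sequential read, GB/s
box_write_gbps = 27.0       # advertised aggregate sequential write, GB/s
box_read_iops = 7_000_000   # advertised aggregate 4K random read IOPS
box_write_iops = 6_600_000  # advertised aggregate 4K random write IOPS

print(f"Per drive: {box_read_gbps / drives_per_box:.1f} GB/s read, "
      f"{box_write_gbps / drives_per_box:.1f} GB/s write")
print(f"Per drive: {box_read_iops // drives_per_box:,} / "
      f"{box_write_iops // drives_per_box:,} read/write IOPS")
# Per-drive sequential reads (~3.0 GB/s) stay under the ~3.9 GB/s usable
# ceiling of a PCIe 3.0 x4 link, so the interface remains the hard limit.
```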

While the Kingston DCU1000 can potentially offer a rather high aggregated performance, its capacity of 3.2 TB may be insufficient for certain mixed-use environments. Apparently, the company is already working on a special version of its next-gen consumer flagship drive based on 3D NAND that it will install into the DCU1000 to double capacity and random performance.

Using multiple M.2 SSDs to build high-capacity/high-performance server- and workstation-grade drives is nothing new: HP, Seagate and some other companies offer PCIe storage solutions employing multiple SSD modules. Kingston will be among the first well-known brands to use M.2 drives for a server-grade U.2 SSD. The architecture has its pros and cons. On the one hand, Kingston does not need to use enterprise-grade SSD controllers or procure huge amounts of enterprise-grade NAND flash specifically for server drives (which may sit in stock for a while). Besides, it can relatively quickly start using different drives (with the proper firmware) with the DCU1000 backplane. On the other hand, enterprise controllers have a feature set that is better suited for datacenter environments, and the PEX 8725 switch most probably eats most, if not all, of the savings that Kingston gets by not using an enterprise-grade SSD controller.

The reason for designing such a drive, using a PCIe switch and M.2 drives, comes down to its intended use cases. U.2 drives are hot-swappable, and sitting behind a PCIe switch allows the M.2 drives to gain that functionality as well. The use case presented to us in our briefing was video editing, where a film studio has a server/machine full of these drives, and when a day's recording is done, the drives can be packed up and shipped to a visual effects studio (either by courier, or by placing an intern on a flight) to do their magic. This is commonly known as sneakernet, and offers much better bandwidth than transferring the raw 8K/16K footage through fat internet pipes. (Big data center services, like Google/Amazon, literally ship petabytes of data around using couriers, as the overall bandwidth is quicker.) The key to doing this is a combination of feature set (hot-swappable, power loss protection) and storage density. While the first drives available will be in the 3.2 TB range, Kingston is ready and waiting to move forward with a 6.4 TB version when the M.2 drives double in capacity. One minute of uncompressed 16-bit 8K video comes in at 284 GB, so the higher the capacity, the better. Having good speed and good random performance helps as well.
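
For a sense of scale, the sketch below reconstructs that per-minute figure from typical assumptions (7680×4320 frames, 16-bit RGB, 24 fps; the article quotes only the ~284 GB total) and shows how many minutes of footage fit on each DCU1000 capacity point:

```python
# Rough footage-per-drive math. Frame geometry, bit depth, channel count and
# frame rate are assumptions; only the ~284 GB/minute total is quoted above.
width, height = 7680, 4320       # 8K UHD frame
bytes_per_pixel = 3 * 2          # 3 color channels x 16 bits
fps = 24

bytes_per_minute = width * height * bytes_per_pixel * fps * 60
gb_per_minute = bytes_per_minute / 1e9
print(f"Uncompressed 8K: ~{gb_per_minute:.0f} GB per minute")    # ~287 GB

for capacity_gb in (3200, 6400):                                 # DCU1000 capacity points
    print(f"{capacity_gb} GB drive holds ~{capacity_gb / gb_per_minute:.0f} minutes")
# -> roughly 11 minutes on 3.2 TB, 22 minutes on 6.4 TB
```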

The DCU1000 will be available to select customers of Kingston in the coming months. Pricing will depend on purchase volumes and other factors.

Source: Kingston

Comments

  • evlfred - Friday, January 19, 2018 - link

    wow that's a lot of wasted storage if you have 4 800GB SSD's and only end up with 6.4GB to use
  • descendency - Friday, January 19, 2018 - link

    It's to prevent durability issues during writes.

    /s
  • romrunning - Friday, January 19, 2018 - link

    Marvell announced an NVMe RAID controller for SSDs (Marvell® 88NV1140/88NV112). Sounds like Kingston should have gone with them for the controller instead of the Phison controller plus the PEX switch.

    Overall, I like the idea of multiple M.2 drives in the 2.5" U.2 form factor. Seems like a lot of mfg's are creating these products, whether in U.2 format or AIC.

    I do wonder how they will alleviate the heat of the M.2 drives when encased. Perhaps a heat spreader that connects to the case as a larger heatsink? Then you could cool them with server fans.
  • Billy Tallis - Friday, January 19, 2018 - link

    Those Marvell controllers are just ordinary SSD controllers. They don't have any PCIe switch or multi-drive RAID features. Kingston has a longtime relationship with Phison, especially for their enterprise PCIe products.

    The heat of the M.2 drives won't be much of a problem compared to the heat of that PCIe switch. This drive is going to require serious airflow.
  • zsero - Friday, January 19, 2018 - link

    Maybe GB -> TB?
  • dromoxen - Friday, January 19, 2018 - link

    presumably they can go even higher than 6.4TB when the chips are down?
  • Santoval - Friday, January 19, 2018 - link

    I don't understand why they thought it was a good idea to choke 16 PCIe 3.0 lanes into 4, and thus create a bottleneck. Why not use it with a PCIe x16 slot, either via an adapter or directly? Is it a prerequisite for enterprise SSDs to use solely the U.2 slot? Is this only intended for 1U server blades where the available height is limited?
  • CheapSushi - Friday, January 19, 2018 - link

    They already have that: the DCP1000. There are other quad M.2 PCIe adapters as well. They're simply giving another option because all-flash arrays are becoming more popular. There are already a lot of server chassis with 2.5" hotswap bays, usually 24 in a 2U. M.3 is coming out, which allows even higher density in 1U. But for existing servers, this would allow even higher density in already-made standard 2U 2.5" chassis. With QLC NAND coming out, which could be several terabytes per stick, it gives you a lot of density per drive. Not all drives are just about performance, but you still have to have a capacity tier. So say you have a 4TB 2.5" SSD. In a 2U, that's 96TB (24 drives). With quad M.2 on a 2.5" like this, that's 384TB (4x24). That's SIGNIFICANTLY higher for the same chassis design. With QLC, they'll be even higher density. So again, you even get further hotswap and redundancy. Because if a 2.5" goes bad, you replace the whole drive. On here, a single M.2 might go bad, so you replace one M.2 while the other three are still there doing their thing. It makes a lot of sense actually. The best thing as usual is having lots of options. There should never just be one way to do something. Some things make sense to do one way, some things make sense to do another way, some things only make sense if starting fresh, some things make sense with already-bought infrastructure, etc. And Kingston already offers stuff for x16 slots. U.2 is x4, so they'd have to come out with a new standard for x16 U.2.
  • rpg1966 - Saturday, January 20, 2018 - link

    I'm trying to picture how you replace one M.2 stick while leaving the others (apparently) functioning. It *looks* like you'd have to pull the thing apart to get at one of the M.2 sticks, meaning it couldn't still be reading/writing to the other three?
  • Hereiam2005 - Friday, January 19, 2018 - link

    Enterprise applications such as databases need IOPS and raw capacity more than aggregated GB/s.
    An x16 link is physically much larger than an x4 link, reduces density, and therefore is not needed.
    CPUs also have a limited number of PCIe lanes anyway, so why waste those lanes when what you want is to pack as many of them into a chassis as possible and let parallelism take care of bandwidth?
