Of the few server-oriented business units I visited during Computex, a number were showing new Avoton-oriented rackmount microservers.  At GIGABYTE's Server unit, I was shown the A201-TR, a 2U microserver with 46 nodes, each of which can be configured as either a 20W eight-core Atom CPU node or a dual 2.5-inch SSD storage node.  A server like this targets large numbers of threaded workloads that each require minimal processing time, and GIGABYTE's offering is a mix-and-match affair based on customer need.

The system will initially be offered in 46 CPU or 28 CPU + 16 Storage variants, both with 4x 40GbE QSFP+ network integration as well as dual GbE.  Unit access is from the top, and the tool-less loading system helps facilitate maintenance.  With 46 nodes each drawing 20W, plus the networking on top, the system would easily draw 1000W+, so the 2U will come with dual 1600W redundant power supplies.
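
As a quick sanity check on that figure, here is a minimal back-of-envelope sketch in Python; the 200W allowance for the 40GbE switching, fans and PSU losses is my own assumption rather than a GIGABYTE number:

    # Back-of-envelope power estimate; the overhead figure is an assumption,
    # not a GIGABYTE specification.
    nodes = 46
    tdp_per_node_w = 20                      # 20W eight-core Atom node, as quoted above
    node_power_w = nodes * tdp_per_node_w    # 920W from the compute nodes alone
    overhead_w = 200                         # assumed: 40GbE switching, fans, PSU losses

    total_w = node_power_w + overhead_w
    print(f"Estimated system draw: ~{total_w} W")   # ~1120W, hence dual 1600W redundant PSUs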

Each C2750 node can be equipped with four SO-DIMM memory modules and an mSATA drive.

I am a big fan of high compute density, although the practicality of Silvermont cores in a server environment will come down to finding the right usage scenario.  Hopefully we can get Johan's thoughts if he gets a similar microserver in to test.

Comments

  • extide - Thursday, June 12, 2014 - link

    That's pretty sweet! So does this thing have virtualized storage and/or networking or is it just a simple setup where each drive is mapped directly to a specific node? (I am talking about when using the 2x2.5" storage nodes) Also how is the networking done? Is there an internal switch? Very cool, though!
  • groundhogdaze - Thursday, June 12, 2014 - link

    Those nodes remind me of the old slot type pentiums. Remember those? That is some good server pr0n there.
  • MrSpadge - Thursday, June 12, 2014 - link

    Except that these are systems instead of CPU cards :)
  • creed3020 - Thursday, June 12, 2014 - link

    This is a very interesting product!

    I too would like to know how the storage nodes are made available to the compute nodes. The exciting part of this solution is the scalability. Additional nodes can be made available dynamically when compute needs grow and potentially be brought down when loads are lower. Should be very power efficient to utilize the nodes in such a fashion.

    The 2.5" form factor for the SSDs seems somewhat odd when you think about how the space could be better served by a PCIe type form factor but then Gigabyte would have to come up with some proprietary design which breaks the freedom of dropping in whatever SSD you prefer or require.
  • Vepsa - Thursday, June 12, 2014 - link

    I'm guessing the engineering sample is 2.5" while final release will be mSATA based on the article. However, either would be fine for the usage these systems will get.
  • DanNeely - Thursday, June 12, 2014 - link

    I think you're confusing two different items. Each CPU module will have an mSATA connection for onboard storage. The enclosure also allows you to install 16 2x2.5" drive modules in place of CPU modules to increase the amount of on-chassis storage if you need more than the single mSATA drive per CPU can offer (it's not clear if options other than 28 CPU + 16 SSD modules or 46 CPUs are possible, or if different backplanes are used).
  • DanNeely - Thursday, June 12, 2014 - link

    Probably for cost reasons/reusing existing designs. If they go with SATA/SAS they can recycle designs from existing high-density storage servers for the PCB layers carrying the connections for the drive modules; PCIe-based storage is new enough that they'd probably have to design something from scratch to implement it.
  • DanNeely - Thursday, June 12, 2014 - link

    The CPU and CPU + SSD module combinations don't add up or match the total number of modules shown in the enclosure. 28 CPU + 16 storage gives 44 total modules vs 46 for the CPU-only version. Also, I count 48 total modules in it: 18 in each of the two large rows and 12 in the smaller one.

    Is this an error in the article, or are 2/4 slots taken up by control hardware for the array as a whole?
  • Ian Cutress - Thursday, June 12, 2014 - link

    I've got the product guide booklet in front of me, and it states 28 CPU + 16 storage. Nice spot; I don't know what's happened to the other two.

    I can concur with your 48 count, although both the first image I took and the product guide say 46. There are two off-color modules in the middle row, but I'm not sure what these are for. I'll fire off an email, but I know my GIGABYTE Server contact is on holiday this week. When I get a response I'll update this post.
  • Ian Cutress - Friday, June 13, 2014 - link

    I have an answer!

    "There are indeed 48 nodes, but two of them are occupied by traffic management controller nodes (the darker grey ones in the middle row) which must be there independently from the nodes configuration.

    For the storage configuration that's probably a typo, the correct one being 30+16 or 28+18."
