LRDIMMs, RDIMMs, and Supermicro's Latest Twin
by Johan De Gelas on August 3, 2012 4:45 AM EST - Posted in IT Computing, Intel, Samsung, Xeon, Cloud Computing, Supermicro
Virtualized Cluster Testing
To fully understand the impact of adding more RAM, we test with both two tiles (72GB allocated) and three tiles (108GB). Even at two tiles the 64GB setup has some physical RAM to spare, because ESXi allocates very little memory to file system caches ("cached" in Windows) and gives priority to the active memory pages, the pages that are actually being used by the most important applications inside each VM.
To keep the benchmark results easy to read, we normalized all performance numbers to those of the system configured with LRDIMMs (128GB) running two tiles, so that system always scores 100 (%). All other performance numbers are relative to that.
First we check the total throughput. This is the geometric mean of the normalized throughput (a percentage relative to our LRDIMM system) of each VM.
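As an illustration, here is a minimal sketch (in Python, with made-up per-VM throughput figures, not our actual measurements) of how such a normalized score is computed:

```python
from math import prod

def throughput_score(throughput, baseline):
    """Geometric mean of per-VM throughput relative to the
    LRDIMM (128GB) / two-tile baseline, expressed as a percentage."""
    ratios = [throughput[vm] / baseline[vm] for vm in baseline]
    return prod(ratios) ** (1 / len(ratios)) * 100

# Hypothetical per-VM throughput figures, for illustration only.
baseline_lrdimm_2tiles = {"oltp": 520.0, "web": 940.0, "specjbb": 30500.0}
rdimm_2tiles = {"oltp": 551.0, "web": 996.0, "specjbb": 32300.0}

# The baseline scores 100 by definition; this example prints roughly 106.
print(f"RDIMM 64GB, two tiles: {throughput_score(rdimm_2tiles, baseline_lrdimm_2tiles):.0f}")
```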
The 15% higher bandwidth and slightly better latency of the RDIMMs at 1600MHz allow the RDIMM-configured server to outperform the one with LRDIMMs by 6% when running two tiles. However, once we place four more VMs (three tiles) on top of the machine, things start to change. At that point, ESXi had to create balloon memory (10GB) and swap memory (4GB) on the 64GB RDIMM machine, so it is not as if we went far beyond the 64GB of active memory. ESXi still managed to keep everything running.
With three tiles the 128GB LRDIMM server is about 7% faster, instead of 6% slower as it was with two tiles. That is not spectacular by any means, but it is interesting to delve a little deeper to understand what is happening. To do so, we check the average latency, which is calculated as follows: the average response time of each VM is divided by the response time of the same VM on the LRDIMM system (two tiles), and we then take the geometric mean of all those percentages.
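The latency metric mirrors the throughput score above, but uses response-time ratios, so lower than 100% is better. A minimal sketch with hypothetical response times (again, not our actual measurements):

```python
from math import prod

# Hypothetical average response times in ms; the real figures come from the vApus test.
baseline_2tiles = {"oltp": 120.0, "web": 45.0, "specjbb": 18.0}  # LRDIMM 128GB, two tiles
lrdimm_3tiles = {"oltp": 170.0, "web": 64.0, "specjbb": 26.0}    # same VMs, three tiles

ratios = [lrdimm_3tiles[vm] / baseline_2tiles[vm] for vm in baseline_2tiles]
average_latency = prod(ratios) ** (1 / len(ratios)) * 100        # geometric mean, in %
print(f"Average latency vs. the two-tile LRDIMM baseline: {average_latency:.0f}%")
```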
As soon as we add more VMs, both systems suffer from higher response times, which shows that we made our test quite CPU intensive. The main culprit is SPECjbb, the VM we added to save some time, as developing a virtualization test is always a complex and time-consuming undertaking. Trying to save time unfortunately reduced the realism of our test: SPECjbb runs the CPU at close to 100%, and as a result the CPU becomes the bottleneck at three tiles. In our final "vApus Mark Mixed" we will replace the SPECjbb test.
We decided to share this data now, before our completely real-world benchmark is finished. We are currently evaluating a real-world Ruby on Rails website and a Drupal-based site, so please feel free to share your opinion on virtualization benchmarking if you are active in this field. In the meantime, can the extra RAM capacity help even if your applications are CPU intensive? 17% better response times might not sound impressive, but there is more to it.
26 Comments
koinkoin - Friday, August 3, 2012 - link
For HPC solutions I like the Dell C6220: dense, and with 2 or 4GB of memory per CPU core you get a good configuration in a 2U chassis for 4 servers. But for VMware, servers like the R720 give you more room to play with memory and I/O slots.
Not counting that those dense servers don’t offer the same level of management and user friendliness.
JohanAnandtech - Friday, August 3, 2012 - link
A few thoughts:
1. Do you still need lots of I/O slots now that we can consolidate a lot of gigabit Ethernet links into two 10GbE ports?
2. Management: ok, a typical blade server can offer a bit more, but the typical remote management solutions that Supermicro now offers are not bad at all. We have been using them for several years now.
Can you elaborate what you expect from the management solution that you won't expect to see in a dense server?
alpha754293 - Friday, August 3, 2012 - link
re: network consolidation
Network consolidation comes at a cost premium. You can still argue that IB QDR will give you better performance/bandwidth, but a switch is $6k, and for systems that don't have IB QDR built in, it's about $1k per NIC. Cables are at least $100 a piece.
If you can use it and justify the cost, sure. But GbE is cheap. REALLY REALLY cheap now that it's been in the consumer space for quite some time.
And there aren't too many cases when you might exceed GbE (even the Ansys guys suggest investing in better hardware rather than expensive interconnects). And that says a LOT.
re: management
I've never tried Supermicro's IPMI, but it looks to be pretty decent. Even if that doesn't work, you can also use a 3rd party tool like LogMeIn, and that works quite well too! (Although that's not available for Linux, there are Linux/UNIX options out there as well.)
Supermicro also has an even higher density version of this server (4x half-width, 1U DP blade node.)
JonBendtsen - Monday, August 6, 2012 - link
I have tried Supermicro IPMI, and it works nicely. I can power the machine on/off and let it boot from a .iso image I have on my laptop. This means that in case I have to boot from a rescue CD, I do not even have to plug a CD drive into the machine. Everything can be done from my laptop, even when I am not in the office, or even in the country.
bobbozzo - Tuesday, August 7, 2012 - link
Can you access boot screens and the BIOS from the IPMI?
For Linux, I use SSH (or VNC server), but when you've got memory or disk errors, etc., it's nice to see the BIOS screens.
Bob
phoenix_rizzen - Thursday, August 9, 2012 - link
Using either the web interface on the IPMI chip itself or the IPMIView software from SuperMicro, you get full keyboard, mouse, and console redirection, meaning you can view the POST, BIOS, pre-boot, boot, and console of the system.
You can also configure the system to use a serial console, configure the installed OS to use a serial console, and then connect to that serial console remotely using the ipmitool program.
The IPMI implementation in SuperMicro motherboards (at least the H8DG6/H8DGi series, which we use) is very nice. And stable. And useful. :)
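For readers who haven't used ipmitool, here is a minimal sketch of the kind of remote invocation described above, wrapped in Python for consistency with the earlier snippets; the BMC address and credentials are placeholders.

```python
import subprocess

# Placeholder BMC address and credentials; substitute your own IPMI settings.
BMC_HOST, BMC_USER, BMC_PASS = "10.0.0.42", "ADMIN", "ADMIN"

def ipmi(*args):
    """Run an ipmitool command against the BMC over the lanplus interface."""
    cmd = ["ipmitool", "-I", "lanplus", "-H", BMC_HOST,
           "-U", BMC_USER, "-P", BMC_PASS, *args]
    return subprocess.run(cmd, check=True)

ipmi("chassis", "power", "status")  # query the power state
ipmi("sol", "activate")             # attach to the serial-over-LAN console
```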
ForeverAlone - Friday, August 3, 2012 - link
Only 128GB RAM? Unacceptable!
Guspaz - Monday, August 20, 2012 - link
It starts to matter more when you're pouring on the VMs. With two sockets there, you're talking 16 cores, or 32 threads. That's the kind of machine that can handle a rather large number of VMs, and with only 128GB of RAM, that would be the limitation regarding how many VMs you could stick on there. For example, if you wanted to have a dedicated thread per VM, you're down to only 4GB per VM, which is kind of low for a server.
darking - Friday, August 3, 2012 - link
I think the price on the webpage is wrong, or at least it differs by market. I just checked the Danish and the British webstores, and the 32GB LRDIMMs are priced at around $2,200, not the $3,800 that the US webpage has.
JohanAnandtech - Friday, August 3, 2012 - link
They probably changed it in the last few days, as HP lowered their price to $2,000 a while ago. But when I checked, it was $3,800.