Building the 2012 AnandTech SMB / SOHO NAS Testbed
by Ganesh T S on September 5, 2012 6:00 PM EST
Introduction & Goals of the Build
The market for network attached storage (NAS) devices has registered huge gains over the last few years. Keeping pace with this trend, AnandTech's coverage of NAS units has also increased since the middle of 2010. Followers of our NAS reviews have seen the standard Intel NASPT benchmarks and file transfer test results, along with qualitative coverage of each NAS's operating system and user interface. The reviews also briefly touch upon miscellaneous factors such as power consumption. Feedback from readers as well as the industry pointed out that some essential NAS aspects, such as performance under load from multiple clients, were being ignored. Towards the end of 2011, we started evaluating approaches to cover these aspects.
Our goal was to simulate an SMB (Small to Medium Business) / SOHO (Small Office / Home Office) environment for the NAS under test. From the viewpoint of our testing, we consider an SMB to be any setup with 5 - 25 distinct clients for the NAS. Under ideal circumstances, we could have had multiple PCs accessing the NAS at the same time. However, we wanted a testbed that didn't require too much space or consume a lot of power, and one that could be easily administered. These requirements ruled out a testbed made up of multiple distinct physical machines.
In order to run multiple virtual machines (VMs), we wanted to build a multi-processor workstation. One of the primary challenges when running a large number of VMs on a single machine is the scarcity of resources; in particular, it is important not to be disk bound. Therefore, we set out with the intent of providing each VM with its own processor core, physical primary disk and network port. After looking at the options, we decided to build a dual processor workstation capable of running up to 12 VMs. In the first four sections, we take a look at the hardware chosen for the build.
Following the discussion of the hardware aspects, we have a section on the software infrastructure. This includes details of the host and guest operating systems, as well as the benchmarking software and scripts used in the testing process. We gave the new test components a trial run on two different NAS units, the Synology DS211+ and the Thecus N4800. Results from the new test components are presented in the two sections preceding the concluding remarks.
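As a rough illustration of the kind of multi-client load the testbed is meant to generate, the sketch below spawns a handful of worker threads that each write and read back a file on a NAS share and report their throughput. This is only a minimal Python sketch, not the actual test scripts (which are built around Intel NASPT and IOMeter running inside the VMs); the mapped share path Z:\ and the client count are assumptions made for the example.

# Minimal multi-client throughput sketch (illustrative only; the real testbed
# drives load with Intel NASPT / IOMeter inside Hyper-V VMs). The mapped share
# path and the number of simulated clients below are assumptions.
import os
import threading
import time

NAS_SHARE = "Z:\\"              # hypothetical mapped NAS share
NUM_CLIENTS = 4                 # simulated clients (the testbed scales to 12 VMs)
FILE_SIZE_MB = 256              # data written (and read back) per client
CHUNK = b"\0" * (1024 * 1024)   # 1 MB write buffer

def client_worker(client_id, results):
    path = os.path.join(NAS_SHARE, "client_%d.bin" % client_id)
    start = time.time()
    # Sequential write of FILE_SIZE_MB megabytes
    with open(path, "wb") as f:
        for _ in range(FILE_SIZE_MB):
            f.write(CHUNK)
    # Sequential read-back of the same file
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass
    elapsed = time.time() - start
    # Combined write + read throughput in MB/s
    results[client_id] = (2 * FILE_SIZE_MB) / elapsed

if __name__ == "__main__":
    results = {}
    threads = [threading.Thread(target=client_worker, args=(i, results))
               for i in range(NUM_CLIENTS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    for cid, mbps in sorted(results.items()):
        print("Client %d: %.1f MB/s (write + read)" % (cid, mbps))

In the actual testbed, each Hyper-V VM generates its load independently over its own network port, so contention happens at the NAS rather than inside a single client operating system.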
74 Comments
Tor-ErikL - Thursday, September 6, 2012 - link
As always, a great article and a sensible testbench which can be scaled to test everything from small setups to larger setups. Good choice! However, I would also like some type of test that is less geared towards technical performance and more towards real-world scenarios.
So to help out, I'll give you my real-world scenario:
Family of two adults and two teenagers...
Equipment in my house is:
4 laptops running on the WiFi network
1 workstation for work
1 media center running XBMC
1 Synology NAS
The laptops stream music/movies from my NAS - usually, I guess, no more than two of these run at the same time.
The media center also streams music/movies from the same NAS at the same time.
In addition, some of the laptops browse the family pictures stored on the NAS and do light file copies to and from the NAS.
The NAS itself downloads movies/music/TV shows and does unpacking and internal file transfers.
My guess is that for a typical home use scenario there is not that much intensive file copying going on - usually only light transfers, mainly over either WiFi or 100Mb links.
I think the key factor is that there are usually multiple clients connecting and streaming different content - at most 4-5 clients.
Also, as mentioned, it would be interesting to see more details about the differences between the sharing protocols like SMB/CIFS.
Looking forward to the next chapters in your testbench :)
Jeff7181 - Thursday, September 6, 2012 - link
I'd be very curious to see tests involving deduplication. I know deduplication is found more on enterprise-class storage systems, but WHS used SIS, and FreeNAS uses ZFS, which supports deduplication.
_Ryan_ - Thursday, September 6, 2012 - link
It would be great if you guys could post results for the Drobo FS.
Pixelpusher6 - Thursday, September 6, 2012 - link
Quick correction - on the last page, under the specs for the memory, do you mean 10-10-10-30 instead of 19-10-10-30?
I was wondering about the CPU setup for this machine. If each of the 12 VMs uses 1 dedicated real CPU core, then what is the host OS running on? With 2 Xeon E5-2630Ls, that would be 12 real CPU cores.
I'm also curious about how Hyper-Threading works in a situation like this. Does each VM have 1 physical thread and 1 HT thread for a total of 2 threads per VM? Is it possible to run a VM on a single HT core without any performance degradation? If the answer is yes, then I'm assuming it would be possible to scale this system up to run 24 VMs at once.
ganeshts - Thursday, September 6, 2012 - link
Thanks for the note about the typo in the CAS timings. Fixed it now.
We took a punt on the fact that I/O generation doesn't take up much CPU. So, the host OS definitely shares CPU resources with the VMs, but it handles that transparently. When I mentioned that one CPU core is dedicated to each VM, I meant that the Hyper-V settings for the VM indicate 1 vCPU instead of the allowed 2, 3, or 4 vCPUs.
Each VM runs only 1 thread. I am still trying to figure out how to increase the VM density in the current setup. But yes, it looks like we might be able to hit 24 VMs, because the CPU requirements of the IOMeter workloads are not extreme.
dtgoodwin - Thursday, September 6, 2012 - link
Kudos on an excellent choice of hardware for power efficiency. 2 CPUs, 14 network ports, 8 sticks of RAM, and a total of 14 SSDs idling at just over 100 watts is very impressive.
casteve - Thursday, September 6, 2012 - link
Thanks for the build walkthrough, Ganesh. I was wondering why you used an 850W PSU when worst-case DC power use is in the 220W range? Instead of the $180 SilverStone Gold-rated unit, you could have gone with a lower-power 80+ Gold or Platinum PSU for less money and better efficiency at your given loads.
ganeshts - Thursday, September 6, 2012 - link
Just a hedge against future workloads :)
haxter - Thursday, September 6, 2012 - link
Guys, yank those NICs and get a dual 10GbE card in place. SOHO is 10GbE these days. What gives? How are you supposed to test a SOHO NAS with each VM so crippled?
extide - Thursday, September 6, 2012 - link
10GbE is certainly not SOHO.