My first attempt to repurpose an old PC into a nested virtualization lab ran into CPU contention issues. So I converted that machine into a storage server running FreeNAS on top of ESXi, and then set out to build a new PC for my lab.
The first task in building the PC was deciding what capabilities it needed to support and how much I was willing to spend on it. After some rough calculations in an Excel spreadsheet, I figured I was comfortable shelling out about $600. The primary goal was enough computing horsepower and memory to create a reasonably speedy virtualization testbed. A secondary goal while selecting components was the ability, with some upgrades, to convert this new PC into a gaming machine capable of playing some strategy games in the near future.
With a budget and purpose decided, I set about picking the components. The processor was the paramount decision, as selecting it implicitly led to a socket decision, which in turn narrowed the variety of motherboards to pick from. As cost and performance were both constraints, I looked at various price and performance benchmarks and reviews. AMD's FX and Phenom II series of processors got high marks on price, whereas Intel's Core i7 and i5 series got high marks on overall performance.
Both generations of Core i7s (Sandy Bridge and Ivy Bridge) cost over $200 just for the processor, and the six-core version was listed at over $500. The Core i5s were a little cheaper, hovering between $180 and $220. The AMD FX series with 8 cores was available from about $140 for the older Zambezi core, while the newer Vishera parts started at $170. Solely from a price standpoint, the choice was between an Intel i5 and an AMD FX 8-core. The Core i5s beat the FX processors on almost all single-threaded application benchmarks, while the FX processors had superior performance in multi-threaded benchmarks; the FX parts even beat the i7 on the 7-Zip benchmarks. I decided to go with the FX processor, as its price point gave it a significant advantage from a performance-per-dollar perspective. If you were planning to run the FX processor 24/7 at high load, though, you would want to factor the FX's higher power and cooling costs (it has a 30W higher TDP than the Intels) into your calculations. The review of the Vishera processors at AnandTech provides a detailed analysis of the benchmarks and the pros and cons of the processor architectures.
With the processor decision made, I had to pick a motherboard. The FX processor requires an AM3+ socket, which is supported by motherboards running AMD's 7- through 9-series chipsets: the cheaper boards are based on the 7-series chipset, while the higher-end boards use the 990FX. I wanted a motherboard that allowed for the most memory as well as PCI Express x16 cards at a reasonable price, which narrowed the choice to boards with 970 or 990 chipsets. The 970s started around $70, while the 990s were all $90 and above. The 990FX boards provided greater expansion capabilities for graphics cards, which would be good to have in a future gaming PC. While looking around, I found that Microcenter had a deal that took $50 off the motherboard if you bought both the motherboard and the processor from them. This put the 990FX-based motherboards within my reach. I decided to go with the Gigabyte GA-990FXA-UD3 motherboard and FX-8320 processor bundle, costing about $250, as this motherboard supported a maximum of 32GB of memory and had 4 PCIe x16 slots.
While I was in-store at Microcenter to pick up my web order, I found that a Samsung 840 series 120GB SSD could be had for $79 when purchased with a motherboard or processor. After a brief online search on my phone, I added it to my checkout, as it was a great deal. I also added a Gigabyte Radeon HD 6670 graphics card, on sale for $50 after rebate, as I thought it would make a good starter gaming graphics card.
To round out the remaining components for my build, I looked around online and found a 16GB Corsair Vengeance LP 1600MHz DDR3 RAM kit for about $66 at Newegg. I also found a Zalman Z5 Plus mid-tower case with 3 built-in 120mm fans, providing more than adequate cooling for the system, available for $34 after rebate.
While searching for a power supply that provided not only adequate power for my current rig but also significant headroom for future expansion, I came across a Topower 80 Plus Silver certified 800W unit for $60. This gave me an efficient powerhouse with 300+ watts of headroom to spare.
Below is a listing of the various components purchased with their prices, savings and rebates.
The purchases at Newegg and Microcenter gave me all the parts needed to build the virtualization testbed. Once everything had arrived, I set aside a couple of hours to put the parts together and complete the build. The only decision I had to make during the build was how to install the power supply within the ATX case. Since this case has a bottom-mounted power supply, I could install it with its fan facing up or down. With the fan facing down, it would pull cooler outside air into the power supply but would need to sit on a flat surface. With the fan facing up, it would pull warmer air from within the case but would not need ground clearance and could be placed on carpet. Since I was going to place the PC on carpet, I mounted the power supply with the fan facing up.
With the build complete, the next challenge was loading VMware ESXi onto the PC. I had not bought a DVD drive for it, and my existing DVD drive, an IDE model, could not be attached to the motherboard, which did not support IDE and had only SATA ports. Rather than buy a new DVD drive, I decided to install ESXi from a USB drive. I used UNetbootin, an open-source utility, to create a bootable USB stick from the downloaded VMware ESXi ISO.
I connected my new PC to my display via HDMI, attached the USB stick, and booted the ESXi installer. In 10-15 minutes, I had ESXi installed on the SSD. Once it was installed, I used the ESXi Direct Console User Interface (DCUI) to configure a static IP, gateway and DNS for the PC. My physical ESXi box was now ready for building a nested lab. I logged into ESXi via the VMware vSphere Client and created my first nested ESXi host, remembering to add the vhv.enable = "TRUE" setting to the vmx file before booting up the nested (inner) ESXi host.
Over the past few weeks, I have been taking a VMware vSphere class at the local community college in preparation for the VCP5 exam. Acing the test requires in-depth understanding of and familiarity with the various VMware vSphere components, and having a lab where one could get hands-on experience with those components and their features was recommended in class. A basic VMware setup requires, at minimum, a couple of ESXi hosts, a vSphere vCenter management server, and some form of network-attached storage (NFS or iSCSI). A nested lab, which allows one to create all the various servers and hosts on a single physical machine, was the cheapest way to achieve this setup.
I had an unused machine lying around; its prior avatar was that of an HTPC. The PC had decent specifications considering that it was built in 2009:
Using a trial version of VMware vSphere, I installed vSphere ESXi 5.1 on the machine. It booted up fine and obtained an IP from my DHCP server. Using the vSphere Windows client, I created three VMs and installed ESXi on two of them to create the nested ESXi hosts. While doing this, I realized that the machine's RAM wouldn't be enough for even a single nested ESXi host to function properly, so I installed an additional 8GB using a 2 x 4GB Corsair Vengeance LP kit.
On the third VM, I initially installed OpenFiler but was unable to find any freely available documentation on how to configure it. Researching the other storage options, I discovered FreeNAS, an open-source storage platform based on the FreeBSD OS. Documentation was available and indicated that it was easy to install and configurable via a web-based interface. I nuked my OpenFiler VM and created a new one with FreeNAS installed, allocating half of my 1TB HDD to the FreeNAS virtual hard drive and 2GB of virtual RAM to the VM. It was easy to find my way around the web-based configuration interface, and the documentation had step-by-step instructions for configuring the various storage services. I created an iSCSI LUN and an NFS share: the NFS share would host all my installers and ISOs, while the iSCSI LUN would provide the storage for the VMs on my nested ESXi hosts. I created a software iSCSI adapter on the ESXi hosts and configured the location of the FreeNAS server on them. The LUN was easily discovered and ready for use as the datastore for my VMs.
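For anyone who prefers the ESXi command line to the vSphere Client for this step, enabling the software iSCSI initiator and pointing it at the FreeNAS target looks roughly like the sketch below. The adapter name vmhba33 and the target address are placeholders, not values from this build:

```shell
# Enable the software iSCSI initiator on the ESXi host (ESXi 5.x esxcli)
esxcli iscsi software set --enabled=true

# Point discovery at the FreeNAS box (placeholder adapter name and IP;
# find yours with "esxcli iscsi adapter list")
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.50:3260

# Rescan so the new LUN shows up as a datastore candidate
esxcli storage core adapter rescan --adapter=vmhba33
```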
I downloaded a trial copy of the RHEL 6.3 ISO to the NFS share and mounted it on the first VM's virtual CD-ROM drive. When the VM booted up, an error popped up indicating that I could only use 32-bit VMs, as the hardware of the ESXi host didn't support virtualization. But I knew the processor had AMD-V support. I had to shut down the physical ESXi host to enable virtualization support for the processor in the BIOS. Even after doing this, the VM wouldn't boot a 64-bit OS. Searches on the VMware forums indicated that in a nested configuration the vhv.allow parameter needs to be set to true in the configuration of the outer ESXi host. Once this was done, RHEL 6.3 booted up and installed in a few minutes.
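For reference, the flag from the forum posts is a single line. On ESXi 5.0-era hosts it is typically added host-wide to /etc/vmware/config on the outer (physical) host; ESXi 5.1 replaced it with a per-VM setting in the nested host's .vmx file:

```
# Outer (physical) ESXi host, host-wide (ESXi 5.0 era):
vhv.allow = "TRUE"

# ESXi 5.1 equivalent, per-VM in the nested ESXi host's .vmx file:
vhv.enable = "TRUE"
```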
I used the OVA file for the vCenter Server appliance from the VMware website to create my vCenter Server VM. On booting, the vCenter Server came up fine to a Linux prompt. After configuring it with default options using the wizard in the web-based configuration interface, I was able to log into the vCenter Server from the vSphere Client, create datacenters, and import hosts. Performing these actions on the vCenter Server took a lot longer than in my labs in class. Checking metrics on my physical host using the performance tab in the vSphere Client, I found that CPU usage was frequently near 100%, with very high ready times on the order of seconds rather than milliseconds; the ready percentage was around 50% during vCenter Server operations. This indicated severe contention for CPU resources. I concluded that using this host, with just 2 logical CPUs, for my lab was not feasible. I needed more CPUs.
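As a sanity check on those numbers: the vSphere Client's real-time charts report CPU Ready as a millisecond summation over a 20-second sample interval, so converting it to a percentage is simple arithmetic. A quick sketch (the 10,000 ms figure is illustrative):

```python
# Convert a vSphere real-time "CPU Ready" summation (ms) into a percentage.
# Real-time performance charts sample every 20 seconds (20,000 ms).
def cpu_ready_percent(ready_ms, interval_s=20):
    return ready_ms / (interval_s * 1000) * 100

# A VM that spent 10 s of a 20 s interval waiting on a physical CPU
# is 50% ready -- the level of contention seen during vCenter operations.
print(cpu_ready_percent(10000))  # 50.0
```

Anything much above a few percent per vCPU is generally a sign of CPU oversubscription.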
I decided to convert this host into a standalone FreeNAS storage server and build my lab on a new PC. Thus began the build of a new PC to host my nested lab.
A few months ago, Oncor, the local TDSP (transmission and distribution service provider) in my area, installed smart meters. A few days later, I got a letter in the postbox indicating that I could go to a website and see my electricity usage in real time. I attempted to create an account the next day but was informed that it could take up to 60 days from meter installation for the system to register my new meter. That was sometime in late October.
Over the Christmas break, I finally got around to registering on smartmetertexas.com and was able to access my electricity usage. The website lets you see your usage at a granularity of 15 minutes, and you can also get daily and monthly aggregations.
I also noticed that the website lets you add HAN (Home Area Network) devices to your profile. These devices can talk to your smart meter to help display, and even manage, your electricity usage.
An interesting article briefly explaining how these devices work can be found at http://www.emeter.com/smart-grid-watch/2010/han-smart-meter-interface-what-can-we-expect/
Some online research indicates that the ZigBee protocol (which I vaguely remember from some class in my graduate degree) is used for the HAN radio of the smart meter.
With a HAN-enabled display, you can view your electricity usage in real time in the house. HAN-enabled appliances such as programmable thermostats could help manage cost by modifying the temperature within set limits to reduce usage during peak hours; likewise, you could have a smart dishwasher run loads only in off-peak hours. I am still researching the availability of these HAN-enabled devices and analyzing whether these capabilities are worth their added cost over regular ones.
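To gauge whether a HAN-enabled appliance pays for itself, a back-of-the-envelope calculation helps. All rates, usage figures and the price premium below are made-up placeholders, not my actual tariff:

```python
# Rough payback estimate for shifting a load to off-peak hours.
# All numbers are hypothetical placeholders.
PEAK_RATE = 0.15        # $/kWh, assumed peak price
OFF_PEAK_RATE = 0.08    # $/kWh, assumed off-peak price
LOAD_KWH = 1.5          # assumed energy per dishwasher cycle
CYCLES_PER_YEAR = 300
PRICE_PREMIUM = 100     # assumed extra cost of the HAN-enabled model

annual_savings = (PEAK_RATE - OFF_PEAK_RATE) * LOAD_KWH * CYCLES_PER_YEAR
payback_years = PRICE_PREMIUM / annual_savings
print(f"${annual_savings:.2f}/year saved; {payback_years:.1f} years to break even")
```

With these placeholder numbers the premium takes a few years to recover, which is why I want real tariff data before buying.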