So part of my “poor-man’s Hyper-V cluster” experiment in my home office has led me to start looking into storage options for virtual platforms. Hyper-V itself is quite flexible; failover clustering, however, limits your options.
So for those of you who are just joining us, I am doing research on clustered Hyper-V for work. This was a self-started project, so I grabbed whatever I had available. I am therefore building an Active Directory-managed network and a three-node Hyper-V cluster using the following components…
Dell Latitude D830 Laptop – Intel Core Duo + 3 GB of RAM + 150 GB HD
Dell Latitude E6400 Laptop – Intel Core 2 Duo + 4 GB of RAM + 230 GB HD
Dell Optiplex 990 Mini-PC – This is my “top of the line” unit, lol… Core i7 + 4 GB of RAM + 160 GB HD
Ancient TP-Link N150 router – 4 wired ports of 100 Mbit bliss… (no gigabit :(…)
Surprisingly enough, even the ancient D830 has a processor that is new enough to run Hyper-V 3.0 on Windows Server 2012 R2. This only works with the server version of the OS, though: client Hyper-V on Windows 8.1 additionally requires SLAT support in the CPU, and only the Core i7 here has SLAT built in.
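If you want to check whether a given CPU has SLAT before committing to an install, there are a couple of quick ways to do it from any booted Windows box — a rough sketch (Windows-only commands, and Coreinfo has to be downloaded from the Sysinternals site first):

```powershell
# systeminfo lists the Hyper-V requirements at the end of its output,
# including a "Second Level Address Translation" line:
systeminfo | Select-String "Second Level Address Translation"

# Sysinternals Coreinfo reports the same thing at the CPU-flag level
# (EPT on Intel chips):
.\coreinfo.exe -v
```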
Another interesting note: the E6400 with the Core 2 Duo was by far the biggest pain to get working. Hence I am noting it here for anyone who comes searching…
–NOTE ABOUT DISABLING TRUSTED EXECUTION ON DELL LATITUDE E6400 LAPTOP–
Can’t enable the Hyper-V role on a Dell Latitude E6400 laptop? Here is why… Trusted Execution needs to be TURNED OFF in the BIOS. This is definitely a Dell-specific quirk. So: reboot into the BIOS and turn on the TPM, reboot, go back into the BIOS and ENABLE the TPM (two separate steps), and then, under the virtualization options, turn on everything except Trusted Execution. Then it will work. Okay… moving on…
Okay… this was supposed to be a post about storage. So let’s talk storage.
You probably noticed from my rather horrid list of sad components above that I won’t be hosting a lot of VMs… This is definitely more “proof-of-concept” work for me, and if it goes well it might actually end up on better hardware in a data center.
Particularly slim is my hard drive space. I don’t have enough spare computer parts lying around to build my own SAN, which was a thought I initially had. However, while researching storage options I came across a curious feature in Windows Server that has actually been around since the days of Server 2008… enter Microsoft iSCSI Software Target. Microsoft built SAN-like iSCSI functionality into Windows Server, and they have enhanced it a lot in Server 2012 R2. What makes it particularly interesting to me is that it supports failover clustering. You can read a fun article on the software here:
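To give a feel for the moving parts, here is a rough PowerShell sketch of standing up a target on Server 2012 R2 — note the disk path, target name, size, and initiator IQN below are all hypothetical placeholders, not my actual setup:

```powershell
# Install the iSCSI Target Server role
Install-WindowsFeature -Name FS-iSCSITarget-Server -IncludeManagementTools

# Create a VHDX-backed virtual disk to serve as shared storage
New-IscsiVirtualDisk -Path "C:\iSCSIVirtualDisks\ClusterDisk1.vhdx" -SizeBytes 40GB

# Create a target and allow one of the Hyper-V nodes to connect to it
New-IscsiServerTarget -TargetName "HyperVCluster" `
    -InitiatorIds "IQN:iqn.1991-05.com.microsoft:node1.lab.local"

# Map the virtual disk to the target
Add-IscsiVirtualDiskTargetMapping -TargetName "HyperVCluster" `
    -Path "C:\iSCSIVirtualDisks\ClusterDisk1.vhdx"
```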
Basically, I am going to stick the role on all three “servers” and see if I can make it part of my three-node failover cluster. It was all going so well until I came to an annoying revelation… iSCSI really needs its own NIC ports, and gigabit connections are highly recommended for obvious reasons. I am working with old laptops here… they each have only one NIC port, and attempts at getting their ancient wireless cards working with Server 2012 R2 have been fruitless.
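For completeness, the consumer side of this is just the built-in Microsoft iSCSI Initiator on each Hyper-V node. A minimal sketch, assuming a target already exists (the portal IP is a placeholder):

```powershell
# Start the built-in iSCSI Initiator service and have it start at boot
Start-Service msiscsi
Set-Service msiscsi -StartupType Automatic

# Point the initiator at the target box (placeholder IP) and connect,
# persisting the connection across reboots
New-IscsiTargetPortal -TargetPortalAddress "192.168.1.10"
Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true
```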
I have been part of the Android/China-tab scene since before it was cool… (I know… it still really isn’t that cool.) Part of my time with that involved testing USB LAN adapters, and I have a few lying around. Seems like an easy way to get more NICs. Unfortunately, the adapters I have are all mini-USB…
So, courtesy of my work, I have three regular USB LAN adapters coming and… a 16-port gigabit switch! Honestly, it does feel like cheating, as the goal was to use what I had on hand (I get a kick out of re-purposed hardware…). That being said, this experiment would have been over without the switch and at least some USB adapters.
This whole experiment started out for me as a desire to just do “clustered Hyper-V”… But with Microsoft’s new (or perhaps old but reinvigorated…) iSCSI Software Target technology, the question I am now asking is: can I have complete server redundancy within the cluster itself, including the storage? If so, this is kind of amazing, because it means a dedicated SAN is not necessarily a crucial part of a virtualization platform. That has deeper impact than you might first suspect. If we kick out the dedicated SAN, we reduce complexity; and whenever you reduce complexity, things cost less up front, cost less to maintain, and tend to be more reliable. This is obviously not going to fit a lot of use-cases, though. When your components are dedicated parts, performance tends to be better, and you can scale specific parts of the infrastructure more easily. It is give and take, as it always has been.
One more linked article for you before I sign off… I came across this one while researching the pros and cons of going iSCSI vs. Fibre Channel. Obviously I am not running Fibre Channel in my office, but it was a good read all the same. This particular article also brought up NAS as a storage target for VM platforms, though I don’t think that would work for clustering.
More updates to come… keep praying that nothing catches fire and that I can keep my toddler from sucking on the power cables and we might have a cluster yet…