A little over six months ago I started researching the rapidly emerging world of “hyper-converged” infrastructure as a new IT ethos to take my company’s IT operations into the next decade of application and data hosting. I was first introduced to the idea when I attended a webinar from SimpliVity, one of the leading companies in this new market. I was immediately intrigued… the underlying question that hyper-convergence answers is: “What happens if you radically simplify the data center by collapsing storage and compute into one all-encompassing platform?”

The broader idea of convergence has been around for a while. I first started seeing it with the advent of wider 10 GbE adoption: the idea of taking a single high-speed LAN connection (or two for redundancy) and splitting it up into multiple logically separate connections. For example, you could “converge” management, application, and storage LAN connections down onto a single wire. The overarching concept predates even this.
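To make that concrete: on a Linux host, this flavor of network convergence is typically just 802.1Q VLAN tagging, carving one physical NIC into logically separate sub-interfaces. Here is a minimal sketch; the NIC name, VLAN IDs, and addresses are all hypothetical examples, not anything from a real deployment:

```
# Split one physical 10 GbE NIC (eth0) into three logically
# separate networks -- one wire, three VLAN sub-interfaces.
ip link add link eth0 name eth0.10 type vlan id 10   # management
ip link add link eth0 name eth0.20 type vlan id 20   # application
ip link add link eth0 name eth0.30 type vlan id 30   # storage

# Give each sub-interface its own address on its own subnet
ip addr add 10.0.10.5/24 dev eth0.10
ip addr add 10.0.20.5/24 dev eth0.20
ip addr add 10.0.30.5/24 dev eth0.30

ip link set eth0.10 up
ip link set eth0.20 up
ip link set eth0.30 up
```

Each sub-interface behaves like its own NIC on its own network, even though everything rides a single cable.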

Expand that thought into the virtualization space. Virtualization has been around for a very long time, but traditionally, if you wanted any kind of fault-tolerant setup, it required a complex stack of centralized network-attached storage, management software, and a clustered hypervisor. Not to mention (often) separate networking equipment for both the storage and the hypervisor nodes.

The promise of hyper-convergence is that many of those disparate parts can go away; instead, you host your workloads on a unified, easily scaled, inherently redundant platform that encompasses all of your storage and compute needs while simplifying the majority of your networking. Wikipedia sums it up nicely, so rather than reinvent the wheel I will just refer you there if this is a new concept: https://en.wikipedia.org/wiki/Hyper-converged_infrastructure

Hyper-convergence is a rather elegant answer, especially if the product is designed from the outset to BE a hyper-converged platform. My premise in this article is that Scale Computing is one of the few (perhaps the only?) “proven” vendors to have developed a product from the ground up as a hyper-converged system. Based on a lot of the FUD I came across while researching, I got the distinct impression that a large number of people don’t understand this fundamental difference between Scale and the majority of other HCI products currently populating the market. This is a long post; get coffee now… (more…)

I have a limited number of IPv4 addresses available on my servers, so I am really frugal with how I assign them.

Whenever possible, my preference is to use NAT off of the main Proxmox IP. However, I struggled to get this set up while also using the built-in firewall that ships with Proxmox 4.0. Having the firewall enabled is an absolute requirement for me.
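For orientation, the usual shape of a NAT bridge on Proxmox (which is Debian underneath, so `/etc/network/interfaces` style) looks something like the sketch below. The bridge name, subnet, and `vmbr0` uplink are hypothetical example values, and this plain version is not by itself the firewall-friendly solution this post arrives at:

```
# /etc/network/interfaces -- minimal NAT bridge sketch (example values)
auto vmbr1
iface vmbr1 inet static
    address 10.10.10.1
    netmask 255.255.255.0
    bridge_ports none
    bridge_stp off
    bridge_fd 0
    # enable forwarding and masquerade guest traffic out the main bridge
    post-up   echo 1 > /proc/sys/net/ipv4/ip_forward
    post-up   iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -o vmbr0 -j MASQUERADE
    post-down iptables -t nat -D POSTROUTING -s 10.10.10.0/24 -o vmbr0 -j MASQUERADE
```

Guests attach to `vmbr1`, take addresses in the private subnet, and share the host’s public IP on the way out.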

In this article I have documented the final working solution. (more…)

The company I work for is a relatively small shop when it comes to virtualization, and especially when it comes to Hyper-V. That means I am usually working on individual host servers and not doing any kind of grand-scale configuration using SCCM or some other enterprise-level tool. I think most folks in small-to-medium-sized businesses with existing infrastructure probably have a similar use case when it comes to Hyper-V.

We use Hyper-V primarily for development and test servers, and often enough I get asked to deploy new ones. The way I used to go about this was to create a new blank VM with a new, empty VHD file, attach a Server 2012 (or 2008 R2, or whatever…) ISO, and install from scratch. The actual install isn’t all that bad; Server 2012 in particular installs quite quickly. However, downloading and installing all of the bloody Microsoft updates can take hours. Tack onto that configuring the server for our environment and, well, it gets to be a couple of hours of work at least.
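In PowerShell terms, that old from-scratch routine amounts to something like the following. The VM name, sizes, paths, and switch name are all placeholder examples:

```
# New blank VM, empty disk, OS install ISO -- the manual way.
# (Name, sizes, paths, and switch name below are hypothetical.)
New-VM -Name "DEV-SQL01" -MemoryStartupBytes 4GB `
    -NewVHDPath "D:\VMs\DEV-SQL01.vhdx" -NewVHDSizeBytes 60GB `
    -SwitchName "LAN"

# Attach the installer ISO to the VM's DVD drive
Set-VMDvdDrive -VMName "DEV-SQL01" -Path "D:\ISO\Server2012.iso"

Start-VM -Name "DEV-SQL01"
# ...then sit through setup, hours of Windows updates, and
# environment-specific configuration by hand.
```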
(more…)