I have decided to give Ubuntu 17.04 Desktop a go (not an LTS release, but the last to ship Unity by default). On a whim I installed it on a laptop I had lying about (being an IT person, laptops tend to proliferate in my office over time… older units becoming doorstops, newer units lovely “Jenga” blocks, and maybe the occasional Proxmox cluster…). Since these seem to be the final days of Unity (which I actually don’t mind as a desktop all that much), I figured now was a good time to take another poke at it as a daily driver. I was happy to come across an option for full disk encryption during the install process and wanted to pass along my few thoughts on it. (more…)
I will keep this short and sweet. We have servers in our environment that have multiple IP addresses assigned to a single NIC. That’s normally just fine. However, on occasion I will have very strange issues occur where essentially all networking appears to be working and yet web browsing won’t. I can ping my default gateway, ping other systems in the same subnet, telnet out on ports 80 and 443, and so on, but network connectivity still behaves oddly. What’s the issue?
It all comes down to networking logic decisions made many years ago by someone at Microsoft (I believe as far back as Windows 2000 Server). (more…)
I run Ubuntu 14.04 LTS servers on Hyper-V.
If you do a fairly routine install you will end up with an anemic ~250 MB boot partition. The boot partition (located at /boot) stores your Linux Kernel and everything the system needs to boot. I am sure you could have guessed that.
Next, if you have automatic security updates enabled, your system is regularly installing new versions of the Linux kernel into this partition.
What should happen is that only a few kernel versions are kept on hand, with a scheduled job cleaning out the rest from time to time.
In reality that cleanup seems to be buggy in Ubuntu 14.04 LTS, so the boot partition frequently hits 100% full and it becomes a constant game of manually keeping that partition clean… or else! (Or else it turns into a real headache once it is completely full.) (more…)
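To illustrate the manual cleanup described above, here is a sketch of the selection logic: keep the running kernel and the newest installed kernel, and flag everything else for removal. The function and the version strings are hypothetical examples; on a real 14.04 box you would feed it `uname -r` plus the versions shown by `dpkg -l 'linux-image-*'`.

```shell
#!/bin/sh
# Sketch: given the running kernel release and a list of installed kernel
# versions, print the ones that are safe to purge (everything except the
# running kernel and the newest installed one).
kernels_to_purge() {
    running="$1"; shift
    # sort -V orders version strings numerically; the last line is the newest
    newest=$(printf '%s\n' "$@" | sort -V | tail -n 1)
    for v in "$@"; do
        [ "$v" = "$running" ] && continue   # never purge the running kernel
        [ "$v" = "$newest" ]  && continue   # keep the newest as a fallback
        printf '%s\n' "$v"
    done
}

# Hypothetical example: running 3.13.0-100, four versions installed
kernels_to_purge 3.13.0-100 3.13.0-98 3.13.0-100 3.13.0-101 3.13.0-103
```

Each version the function prints would then be removed with something like `sudo apt-get purge linux-image-<version>`, which frees the corresponding files under /boot.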
This lovely item came across my feed today, and I realized there is so much that I do poorly when it comes to shell scripting that it is absurd 🙁
Anyhow I am extremely thankful for this long and detailed post and wanted to pass it along to my readership.
I also wanted to crowd-source a bit of information from you all.
I am familiar with the idea of a code repository but have never really used one. A lot of people have suggested I use Git. I am curious whether that is the general consensus or if anyone has other suggestions. I have a project that consists of some pretty crazy scripts (well, crazy for me: 1,000+ lines). Trying to keep track of versions and changes in a script that large is difficult to say the least. I am looking for something easy to use and quick to deploy. Thoughts welcome.
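Since Git keeps coming up, here is a minimal sketch of what tracking a single large script with Git looks like. The directory, script name, and commit messages are all made up for illustration; everything else is standard Git usage.

```shell
# Sketch: putting one script under Git version control.
repo=$(mktemp -d)                        # stand-in for your real scripts directory
cd "$repo"
git init -q                              # create the repository
git config user.name  "Example"          # identity recorded on each commit
git config user.email "me@example.com"

printf '#!/bin/sh\necho "v1"\n' > big_script.sh   # hypothetical script
git add big_script.sh
git commit -q -m "Initial import of script"

printf '#!/bin/sh\necho "v2"\n' > big_script.sh   # a round of edits
git add big_script.sh
git commit -q -m "Describe what changed"

git log --oneline                        # one line per recorded version
git diff HEAD~1 HEAD                     # exactly what changed between them
```

The appeal for a 1,000+ line script is that every version is recoverable and `git diff` shows precisely which lines changed between any two commits, which beats keeping dated copies of the file.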
A little over six months ago I started researching the quickly emerging world of “Hyper-converged” infrastructure as a new IT ethos to take my company’s IT operations into the next decade of application and data hosting. I was first introduced to the idea when I attended a webinar from Simplivity, one of the leading companies in this new market. I was immediately intrigued… the underlying question that hyper-convergence answers is “What happens if you radically simplify the data center by collapsing storage and compute into one all-encompassing platform?”
The broader idea of convergence has been around for a while. I started seeing it with the advent of wider 10 GbE adoption: the idea of taking a single high-speed LAN connection (or a second one for redundancy) and splitting it up into multiple logically separate connections. For example, you could “converge” management, application, and storage LAN connections down onto a single wire. The wider over-arching concept predates even this.
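Tagged VLANs are one common way to do this kind of splitting. As an illustration only, here is a sketch in the Debian/Ubuntu `/etc/network/interfaces` style; the interface name, VLAN IDs, and addresses are all hypothetical, and the `vlan` package (8021q module) would need to be installed for the tagged sub-interfaces to come up.

```
# /etc/network/interfaces sketch: one 10 GbE port carrying three
# logically separate networks as tagged VLANs (hypothetical values).
auto eth0
iface eth0 inet manual        # physical 10 GbE uplink, no address itself

auto eth0.10                  # management VLAN
iface eth0.10 inet static
    address 10.0.10.5
    netmask 255.255.255.0

auto eth0.20                  # application VLAN
iface eth0.20 inet static
    address 10.0.20.5
    netmask 255.255.255.0

auto eth0.30                  # storage VLAN
iface eth0.30 inet static
    address 10.0.30.5
    netmask 255.255.255.0
```

The switch port on the other end would be configured as a trunk carrying the same VLAN tags, which is what makes the three networks logically separate while sharing one physical wire.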
Expand that thought into the virtualization space. Virtualization has been around for a very long time, but traditionally, if you wanted some kind of fault-tolerant setup, it required a complex stack of centralized network-attached storage, management software, and a clustered hypervisor. Not to mention (often) separate networking equipment for both the storage and hypervisor nodes.
The promise of hyper-convergence is that many of those disparate parts can go away and instead you can host your workloads on a unified, easily scaled, inherently redundant, platform that encompasses all of your storage and compute needs while simplifying a majority of your networking. Wikipedia sums it up nicely. Rather than reinventing the wheel I will just refer you there if this is a new concept: https://en.wikipedia.org/wiki/Hyper-converged_infrastructure
Hyper-convergence is a rather elegant answer, especially if the product is designed from the outset to BE a hyper-converged platform. My premise in this article is that Scale Computing is one of the few (perhaps the only?) “proven” vendors that have developed a product from the ground up as a hyper-converged system. Based on a lot of the FUD I came across while researching, I got the distinct impression that a large number of people don’t understand this fundamental difference between Scale and the majority of other HCI products currently populating the market. This is a long post, so get coffee now… (more…)