Today I am working on setting up a BackupPC server to take cheap, centralized remote backups of some of our other internal servers.
I already had BackupPC installed and the basics configured, but I needed to add a new drive to the system (for additional backup data storage) and set up a new NIC connection. My Ubuntu Server runs on Microsoft Hyper-V 3.0 on a Server 2012 host machine, so adding all the new hardware was as simple as a few clicks.
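For the record, those same clicks can be scripted from the host with the Hyper-V PowerShell module; here is a rough sketch (the VM name, switch name, and paths below are made-up placeholders, not the ones from my setup):

```powershell
# Create a new dynamically expanding data disk, attach it to the VM, then add a NIC
# (names and paths are hypothetical -- adjust for your own host)
New-VHD -Path "D:\VHDs\backup-data.vhdx" -SizeBytes 4TB -Dynamic
Add-VMHardDiskDrive -VMName "BackupPC" -Path "D:\VHDs\backup-data.vhdx"
Add-VMNetworkAdapter -VMName "BackupPC" -SwitchName "Internal-vSwitch"
```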
Normally I am a command-line guy, but this server is going to be managed on an ongoing basis by folks who are less Linux savvy, so I wanted to install some additional software to make their lives easier. To that end, I am using Webmin.
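In case it helps anyone, the Webmin install on Ubuntu boils down to adding the Webmin apt repository; a sketch of the steps (verify the current repository line and signing key at webmin.com before copying, as these details change over time):

```shell
# Add the Webmin repository and signing key (check webmin.com for current details)
wget -qO - http://www.webmin.com/jcameron-key.asc | sudo apt-key add -
echo "deb http://download.webmin.com/download/repository sarge contrib" | sudo tee -a /etc/apt/sources.list
sudo apt-get update
sudo apt-get install webmin
# Webmin then listens on https://<your-server>:10000 by default
```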
During the course of adding additional storage to my VM I ran into some headaches formatting GPT disks larger than 2 TB under Hyper-V and Linux.
Sounds like a very specific use case? I think it is quickly becoming more common as A.) storage gets cheaper and therefore larger and B.) Microsoft Hyper-V sees more adoption, now that it is decently featured and attractively priced for people with existing Windows infrastructure. Hopefully this article will help you avoid the trouble I ran into when setting up a new large disk on an Ubuntu Hyper-V VM…
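The short version of the fix: MBR (msdos) partition tables top out at 2 TB, so fdisk will not cut it; the disk has to carry a GPT label, which means partitioning with parted. A sketch of the steps, assuming the new virtual disk shows up as /dev/sdb (that device name is an assumption -- confirm yours with `sudo fdisk -l` or `dmesg` first):

```shell
# Label the new disk GPT (required for partitions over 2 TB), then partition, format, and mount it
sudo parted /dev/sdb mklabel gpt
sudo parted -a optimal /dev/sdb mkpart primary ext4 0% 100%
sudo mkfs.ext4 /dev/sdb1
sudo mkdir -p /mnt/backuppc
sudo mount /dev/sdb1 /mnt/backuppc
# Add a matching entry to /etc/fstab if you want it mounted at boot
```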
I have been seeing some really odd issues lately on a couple of our Hyper-V hosts. I had initially chalked the issues up to slow storage access speeds caused by a possibly defunct storage controller on one particular server. Recently, however, the issues got worse: I was getting random errors that prevented me from deleting snapshots. After a quick Google search the answer came back as “turn off your server’s anti-virus”. Sure enough, turning the anti-virus software off cleared up the issues I was having.
This is just one more thing that persuades me that Antivirus software tends to cause more problems than it prevents. I get the argument for it. Especially on endpoints where PEBKAC issues run rampant… But it causes a lot of headaches…
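If killing the AV outright is not an option, Microsoft publishes recommended antivirus exclusions for Hyper-V hosts (the VHD/snapshot paths and the VM worker processes). The cmdlets below are the Windows Defender ones, so treat this as an illustrative sketch to translate into whatever product your host actually runs:

```powershell
# Exclude the default Hyper-V data path, disk file types, and worker processes from scanning
Add-MpPreference -ExclusionPath "C:\ProgramData\Microsoft\Windows\Hyper-V"
Add-MpPreference -ExclusionExtension ".vhd", ".vhdx", ".avhd", ".avhdx"
Add-MpPreference -ExclusionProcess "vmms.exe", "vmwp.exe"
```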
Ran into an issue today installing Server 2012 R2 from an ISO file onto a fresh, brand-new VM:
"0xE0000100: "Windows installation encountered an unexpected error. Verify that the installation sources are accessible, and restart the installation."
A little Google-Fu fixed me right up: new VMs get only 512 MB of “startup memory” by default, and you need to set it to at least 1024 MB to allow the installation to proceed without error.
I had Dynamic Memory set up with a range of 512 MB to 8192 MB and thought I would be okay. Not so. Anyhow, I just statically gave it 4 GB for the install and changed it back once the install was finished. That being said, I am keeping the startup memory at 1024 MB now for all Server 2008 R2+ VMs.
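If you would rather skip the GUI, the startup memory can be bumped from PowerShell on the host; a quick sketch with a made-up VM name:

```powershell
# Raise startup RAM to 1 GB so Windows Setup has enough to run, keeping Dynamic Memory on
Set-VMMemory -VMName "SRV2012R2-NEW" -DynamicMemoryEnabled $true `
    -StartupBytes 1GB -MinimumBytes 512MB -MaximumBytes 8GB
```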
I love Hyper-V 3.0… particularly compared to earlier versions. It comes packed with some very nice new features, several of which are geared around the idea of thin provisioning. One such feature is Dynamic Memory, which lets you set a base “startup” amount of RAM for a server (say something low like 512 MB) and a maximum amount it can take (say 8 GB). The idea is that you can over-provision RAM on a host server and still be okay as long as the majority of your VMs are just sitting there idle, which in most cases they usually are. The problem is that on the guest machine, at least if you are looking at Windows Task Manager, you will almost always see 90 – 95% memory utilization, and the total shown will be whatever maximum your VM can scale to (say 8 GB).
This really threw me off recently. I had one VM that was misbehaving because its VHDX file sat on a slow share on a storage array. Initially, not knowing what was broken, I took a look at Task Manager on the VM (which was running Server 2012) and noted that it was showing nearly maxed-out RAM usage. Further investigation, though, showed that it couldn’t possibly be using more than 1 GB (of the 8 GB shown in Task Manager) at any one time.
After some further digging I learned that this is common behavior on VMs that are allocated memory dynamically and nothing to be concerned about. The VM today still has dynamic memory and still shows 95% usage pretty much all the time, but it runs just fine now that the VHDX file has been moved to faster storage. Anyhow, hope this helps someone else out!
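If you want the real numbers, ask the host rather than the guest; something like this shows what each VM is actually demanding versus what it has been assigned (standard Hyper-V module cmdlet, no custom names assumed):

```powershell
# Actual memory pressure per VM, as seen from the Hyper-V host
Get-VM | Select-Object Name, MemoryAssigned, MemoryDemand, MemoryStatus
```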
The company I work for has some rather remote offices, and we are in the process of virtualizing some of our infrastructure components, particularly our remote domain controllers. I have done a remote DC deployment in one of our other foreign offices, and the replication of the domain took quite a while. In that case, I didn’t realize I would be rebuilding the domain controller as a VM until after I showed up at the office. This time, though, I know what I am going into. So… the goal? Build the DC here as a Hyper-V VM, export it to an encrypted drive, take it with me, and re-import the VM to the new Hyper-V server I will be putting in on the other side. I realize I will need to make some DNS updates as the AD server’s IP will be changing but, based on what I have read, I think this should go pretty smoothly. Wish me luck!
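The export/import itself should come down to two cmdlets on the respective hosts; a sketch with hypothetical VM and path names (E: standing in for the encrypted drive, and `<GUID>` a placeholder for the exported VM configuration file you will actually find under `Virtual Machines\`):

```powershell
# On the source host: export the DC to the encrypted drive
Export-VM -Name "DC-REMOTE01" -Path "E:\VMExport"

# On the destination host: import a copy, placing the VHDs on local storage.
# Point -Path at the exported VM configuration file inside the export folder.
Import-VM -Path "E:\VMExport\DC-REMOTE01\Virtual Machines\<GUID>.xml" `
    -Copy -VhdDestinationPath "D:\Hyper-V\VHDs"
```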