So we are all probably familiar with the Heartbleed bug at this point. Remediating this issue on a couple of servers that I admin required moving from Ubuntu 13.04 to 13.10. I am going to go into a few of the problems I ran into when I made the jump…
(more…)

Media Temple does some stupid interesting things with its Linux installs. In particular, it seems like they don’t like clients upgrading their own boxes… so they don’t include some core packages that are needed to run the automated upgrade on Ubuntu. Now, I got on the support line and the front-line cannon fodder (aka Tier 1 support) gave me the following entertaining fiction to avoid helping:
(more…)

The Heartbleed bug is what I would professionally classify as seriously scary stuff. Basically, there is heartbeat functionality built into OpenSSL. Often, in tech talk, this kind of thing is used for remote service monitoring (i.e. if I have a pulse, my service is at least up); in OpenSSL’s case it is the TLS heartbeat extension, a keep-alive feature. What was discovered was that this feature could be exploited by a remote hacker to gain access to information stored in the server’s memory in 64k chunks at a time… and it was undetectable. Pretty much anything could have been leaked, including the SSL private keys themselves. This means a person could then decrypt in-transit information between the server and client and have access to all kinds of goodies like usernames and passwords. (You can read more here.)

This is a veritable nightmare and affects pretty much all of us running websites on Linux hosts that make use of SSL. So I highly recommend you dive in and fix this issue, fast. The first step…

Test your site to see if it is vulnerable to the Heartbleed bug. You can check it using the tool here:

http://filippo.io/Heartbleed/

Okay… don’t flip out if you are vulnerable. Pretty much all of my SSL sites were, since they use OpenSSL like roughly two-thirds of all sites on the internet. I have since patched them all.

I host on pretty much all Ubuntu 13.04 servers running Apache 2. My instructions for fixing the bug are geared towards Ubuntu.
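
You can also do a quick local check alongside the web tool. This is just a rough sketch (Ubuntu package names assumed): the affected upstream releases are OpenSSL 1.0.1 through 1.0.1f, but keep in mind that distros ship patched builds that keep the old version string, so the package changelog is a better indicator than the version number alone:

openssl version -a                                   # shows the OpenSSL version string and build date
apt-get changelog libssl1.0.0 | grep CVE-2014-0160   # a patched Ubuntu package mentions the fix in its changelog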

Second step… back your SSL keys up!

We are going to be completely wiping out your current OpenSSL install on your web server. This means you need to back up your cert files. Here is how I did mine on one particular server. (Later on it is a very good idea to regenerate your server’s private key and replace all of your SSL certs anyhow, but it isn’t a bad idea to keep what you have around…)

First, make a directory tree where you can copy all of your certs to.

sudo -s
mkdir /sslbackup
mkdir /sslbackup/private
mkdir /sslbackup/certs

Next, if you are running Apache, which I am, you can determine which SSL cert files you need to back up by doing the following (this is for Apache 2 running on Ubuntu 13.10; your site config files may be somewhere else):

cd /etc/apache2/sites-enabled

On Ubuntu and Debian this folder is usually used to house the “running config” files for each of your websites. Any of your sites that use SSL (i.e. the URL is httpS://whatever.com) need to be checked. In my example, let’s say I have a site called contoso.com and it has a config file named contoso.com-config… Run the following command against that file:

egrep -wi --color 'SSLEngine|SSLCertificateFile|SSLCertificateKeyFile|SSLCertificateChainFile' contoso.com-config

Run that command for each of your site config files and note the output. You should get something like this:

SSLEngine on
        #   SSLCertificateFile directive is needed.
        SSLCertificateFile    /etc/ssl/certs/contoso.crt
        SSLCertificateKeyFile /etc/ssl/private/contoso.key
        #   Point SSLCertificateChainFile at a file containing the
        #   the referenced file can be the same as SSLCertificateFile
        SSLCertificateChainFile /etc/ssl/private/contosoint.crt

There are three files in the above case: the certificate, the private key, and the chain file. You can now back those files up to your backup directories. Example:

cp /etc/ssl/certs/contoso.crt /sslbackup/certs/contoso.crt
cp /etc/ssl/private/contoso.key /sslbackup/private/contoso.key
cp /etc/ssl/private/contosoint.crt /sslbackup/private/contosoint.crt

That should back up all of your cert files safely so that when we totally wipe out OpenSSL we don’t have to worry about losing them.
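
A quick sanity check that everything landed where you expect:

ls -lR /sslbackup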

Next, it is time to remove OpenSSL. In my case I am on Ubuntu so I use the ‘apt’ package manager. Your distro may vary.

apt-get update
apt-get purge openssl     #This is where your certs may get wiped if you didn't back them up above
apt-get purge libssl-dev
apt-get autoremove && apt-get autoclean

Next we need to update to the latest version of OpenSSL. I was running Ubuntu 13.04 on my server, though, and this was an issue because it is no longer supported. To fix this problem, I upgraded to 13.10 by doing the following:

do-release-upgrade

If that works for you, GREAT… I, however, am hosting on Media Temple and had no such luck. To work around it, I had to do this:

apt-get install update-manager-core python-apt
cd ~
# Bind-mount directories from your home dir over /tmp and /var/tmp to give the upgrader usable scratch space
mkdir tmp && mkdir vartmp && mount --bind ~/tmp /tmp && mount --bind ~/vartmp /var/tmp
do-release-upgrade

The upgrade then ran…
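
Once the upgrade finished, you can undo the temporary bind mounts (they also disappear on reboot, since they are not in fstab):

umount /var/tmp
umount /tmp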

Afterwards I did the following:

apt-get update
apt-get install openssl libssl-dev
service apache2 restart
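
At this point it is worth sanity-checking that Apache is actually running against the new library and not a leftover copy of the old one. A rough check (assuming lsof is installed):

openssl version -a                    # confirm the version string and "built on" date of the reinstalled package
lsof -n | grep apache2 | grep -i ssl  # lines marked DEL mean a process is still using a deleted (old) library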

Now… after this was done… I had all kinds of additional personal hell. This came particularly from the upgrade of Ubuntu from 13.04 to 13.10. I did this on two separate servers and it broke all kinds of things. More on that in my next post!

Assuming you didn’t have to upgrade Ubuntu, you can go back to the Heartbleed vulnerability checking site:

http://filippo.io/Heartbleed/

And test your site again. Once I resolved my other issues and had my sites back up again, this showed the vulnerability to be remediated!

Now… the HOLE has been closed, which is great. However, the particularly scary thing about this portal of doom is that there was no way to detect whether anyone had ever exploited it. Yep… people could have stolen stuff and there is no way to know. They could have stolen your SSL private keys… hence, the recommendation, now that we have slammed the door shut, is to:

-> Regenerate new private keys.
-> Submit new CSR to your CA.
-> Obtain and install new signed certificate.
-> Revoke old certificates.
(information taken from discussion here)

I will not be going into the details of how to do all of that in this post.
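
That said, the first two steps boil down to a couple of openssl commands. A minimal sketch (the file names and the 2048-bit key size are just placeholders, and `openssl req` will prompt you for the subject details; your CA will have its own CSR requirements):

openssl genrsa -out contoso.new.key 2048                     # generate a brand new private key
openssl req -new -key contoso.new.key -out contoso.new.csr   # generate a CSR to submit to your CA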

References:
http://heartbleed.com/
http://www.mysqlperformanceblog.com/2014/04/08/openssl-heartbleed-cve-2014-0160/
http://docs.openvpn.net/heartbleed-vulnerability-on-access-server-1-8-1-2-0-5/
http://serverfault.com/questions/587329/heartbleed-what-is-it-and-what-are-options-to-mitigate-it
http://askubuntu.com/questions/429385/upgrade-openssl-on-ubuntu-12-04
https://www.openssl.org/source/
http://www.cyberciti.biz/faq/searching-multiple-words-string-using-grep/
http://askubuntu.com/questions/444702/how-to-patch-cve-2014-0160-in-openssl

Today I am working on setting up a BackupPC server to take remote internal centralized backups of some of our other servers on the cheap.

I already had BackupPC installed and the basics configured, but I needed to add a new drive to the system (for additional backup data storage) and I also needed to set up a new NIC connection. My Ubuntu Server is running on Microsoft Hyper-V 3.0 on a Server 2012 host machine, so adding all the new hardware was as simple as a few clicks.

Normally I am a command-line guy, but this server is going to be managed going forward by folks who are less Linux savvy, so I wanted to install some additional software that would make their lives easier. To that end, I am using Webmin.

During the course of adding additional storage to my VM I ran into some headaches related to Hyper-V and Linux storage formatting of GPT disks larger than 2 TB.

Sounds like a very specific use case? I think it is quickly becoming more common as A.) storage gets cheaper and therefore larger and B.) Microsoft Hyper-V sees more adoption, as it is now decently featured and has attractive pricing for people with existing Windows infrastructure. Hopefully this article will help you avoid the trouble I ran into when setting up a new large disk on an Ubuntu Hyper-V VM…
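
For reference, the short version of the disk side of things is that MBR/MS-DOS partition tables top out at 2 TB (with 512-byte sectors), so anything bigger needs a GPT label, which fdisk at the time could not create. A rough sketch with parted (assuming the new disk shows up as /dev/sdb; double-check the device name with lsblk before touching anything):

parted /dev/sdb mklabel gpt                               # write a GPT partition table (destroys existing data on the disk)
parted -a optimal /dev/sdb mkpart primary ext4 0% 100%    # one big partition spanning the whole disk
mkfs.ext4 /dev/sdb1                                       # put a filesystem on it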
(more…)

Here is the scenario – you are an IT admin for a business that is large enough, or handles data of a particular type, such that you have to worry about security more than the average Joe. Furthermore, you get audited from time to time. However, people want an IM (Instant Messenger) solution and… they want to be able to talk to their friends on AIM and ICQ and Yahoo, etc… and management, rather than just killing the idea, says “Fine… Mrs. IT Person – you go figure it out…”

After a bit of digging via the world’s most useful IT encyclopedia — GOOGLE — you discover there are a myriad of options for IM — but the list narrows as you start realizing that most don’t meet the following security and operational requirements:

  • No File Sharing
  • All messages must be audited and stored for XYZ period of time
  • All messages must be encrypted/secure from eavesdropping
  • Your users must log in using their existing, corporately managed Microsoft Active Directory credentials
  • Your users want access to AIM, ICQ, etc… which also must be audited if they are using these accounts from work
  • Your users want access to corporate IM from their mobile device

That is an exhausting list. Luckily, there is one solution out there that is incredibly slick… AND it meets all of these requirements… AND… it just so happens to be COMPLETELY FREE.

Enter OpenFire Chat Server – it is going to make you look like an IT superhero to your colleagues and to the budgeting department (you know, if those folks actually pay attention to IT :)… more and more they do these days). Yes, it runs on Linux. But it is very lightweight, and if you are in a Microsoft environment and have an underworked server with a decent amount of storage and some extra RAM (running at least Server 2008 R2), you can convert that machine into a Hyper-V host and build your chat server as a VM at little or no direct cost. You can also use old or cheap hardware if your organization just isn’t ready to virtualize something. This is worth jumping on the Linux bus for :).

If you still aren’t fully persuaded, OpenFire does have a Windows distribution available now. Based on the experience I have had in the past with running software developed on Linux, for Linux, and then ported to Windows… I suggest you stick with Linux. It might be absolutely fine on Windows (I didn’t try it), but my general experience with getting Linux-ported software to run on Windows has not been pleasant.

(more…)