I will keep this short and sweet. We have servers in our environment that have multiple IP addresses assigned to a single NIC. That’s normally just fine. However, on occasion I run into very strange issues where essentially all networking appears to be working and yet web browsing won’t work. I can ping my default gateway, ping other systems in the same subnet, telnet out on ports 80 and 443, and so on. But network connectivity still behaves oddly. What’s the issue?
It all has to do with networking logic decisions made many years ago (I believe as far back as Windows Server 2000) by someone at Microsoft. (more…)
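In case it helps while you wait for the full write-up, here is the workaround I lean on. This is only a sketch, assuming Windows Server 2008 R2 or later; the interface name "Ethernet" and the address are placeholders, so substitute your own:

```shell
REM Add a secondary IP but tell Windows never to use it as the
REM source address for outbound traffic (note skipassource=true).
REM Run from an elevated prompt; names/addresses are placeholders.
netsh interface ipv4 add address "Ethernet" 10.0.0.50 255.255.255.0 skipassource=true

REM Verify the SkipAsSource flag on each configured address:
netsh interface ipv4 show ipaddresses level=verbose
```

With the flag set, outbound connections keep originating from the primary address, which avoids the "everything pings but browsing breaks" symptoms when firewalls or upstream filters only expect the primary IP.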
Making use of a SAN (storage area network) provides some incredible benefits. I won’t go into depth, but at a high level you often get:
1. Excellent hardware redundancy for data storage, more so if you are using multiple arrays, though even most enterprise single arrays can provide N+1 redundancy. Now we can tolerate power failures, drive failures, switch failures, and so on.
2. Extra options for historical data integrity/backup/DR – Most enterprise SANs support volume snapshots and rollbacks. Some even support advanced features specific to protecting MS-SQL, and I am sure other database products. Our implementation also provides some great options for DR, like being able to replicate data/volumes from a production SAN over to a different SAN in a different network/datacenter.
3. Administrative ease… managing storage volumes for all of your systems from one interface makes life much easier.
4. Online disk resizing — did your database run out of disk space while the SAN hosting the volume still has plenty of free space? No problem: just increase the size of the volume on the SAN (often something you can do while the volume is online and in use), then extend the partition in Windows to take up the new space (also an online operation).
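The Windows half of that resize step can be done in one line. This is a sketch only, assuming the grown volume is mounted as E: (adjust the drive letter), run from an elevated prompt:

```shell
REM After growing the volume on the SAN, rescan so Windows sees the
REM larger LUN, then extend the partition into the new free space.
REM Drive letter E: is an assumption; change it to match your volume.
(echo rescan & echo select volume E & echo extend) | diskpart
```

Because `extend` on an NTFS volume is an online operation, the database can keep running while the partition grows.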
For these reasons (and I am sure many more), SANs have become a staple in a lot of enterprise networks. But let me talk about some pain points, particularly in older SAN implementations and particularly around iSCSI and older networks.
Currently I am working on integrating some 10GbE switches into an existing 1GbE network. Being completely new to 10GbE, I wasn’t prepared for the volley of new terms, acronyms, and gotchas that were thrown my way.
Initially I had a very hard time finding answers to some basic questions, so I figured I would write a quick post for everyone else having the same struggles. Excuse the layman’s explanations and gross oversimplifications that follow. To some I may sound the dunce; so be it. I hope this is helpful to all the other dunces :).
On Thursday I released an article detailing how to get Proxmox set up and how to configure networking with IPv6. However, that article got long, so I said I would address the firewall in the future. Well, that’s today, because I need to get the configuration written down before I forget. In addition to the firewall, there are some other security housekeeping items for a new Proxmox install: disabling the root account in favor of sudo, and changing the default SSH port. So let’s go.
The base OS under Proxmox is Debian. Debian is great, and it is lighter weight than Ubuntu, so I am all for using it.
If you are already somewhat comfortable with Proxmox and Debian configuration and would prefer I just get to the point, then (more…)
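For the truly impatient, a condensed sketch of the housekeeping items. These are standard Debian commands run as root; the username "alice" and port 2222 are placeholders, not recommendations:

```shell
# Install sudo and create an admin user ("alice" is a placeholder)
apt update && apt install -y sudo
adduser alice
usermod -aG sudo alice

# Change the default SSH port (2222 is an example) and forbid
# root logins over SSH, then restart the daemon
sed -i 's/^#\?Port .*/Port 2222/' /etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin .*/PermitRootLogin no/' /etc/ssh/sshd_config
systemctl restart ssh

# Lock the root password once sudo access is confirmed working
passwd -l root
```

Test your sudo user in a second SSH session before locking root, or you can lock yourself out of the box.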