Making use of a SAN (storage area network) provides some incredible benefits. I won’t go into depth, but at a high level you often get:

1. Excellent hardware redundancy for data storage, more so if you are using multiple arrays, but even most single enterprise arrays can provide N+1 redundancy. Now we can tolerate power failures, drive failures, switch failures, etc.

2. Extra options for historical data integrity/backup/DR – most enterprise SANs support volume snapshots and rollbacks. Some even support advanced features specific to protecting MS-SQL and, I am sure, other database products. Our implementation also provides some great options for DR, like being able to replicate data/volumes from a production SAN over to a different SAN in a different network/datacenter.

3. Administrative ease… managing storage volumes for all of your systems from one interface makes life much easier.

4. Online disk resizing — did your database run out of disk space while there is plenty of free space on the SAN hosting the volume? No problem: just increase the size of the volume on the SAN (often something you can do while the volume is online and in use) and then extend the partition in Windows to take up the new space (also an online operation). A rough sketch of the Windows-side step follows this list.
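
To make item 4 concrete, here is a minimal sketch of the Windows-side step, assuming the expanded SAN volume is mounted as a hypothetical drive E: and the server has the Storage PowerShell module (Windows Server 2012 or later). It just asks Windows how large the partition can now grow and extends it to that size; treat it as an illustration, not the one true procedure for your array.

```python
import subprocess

DRIVE = "E"  # hypothetical drive letter for the SAN-backed volume; adjust for your environment

# Ask Windows how large the partition can grow now that the underlying SAN volume was expanded.
get_max = f"(Get-PartitionSupportedSize -DriveLetter {DRIVE}).SizeMax"
max_size = subprocess.run(
    ["powershell", "-NoProfile", "-Command", get_max],
    capture_output=True, text=True, check=True,
).stdout.strip()

# Grow the partition to fill the newly available space (an online operation on modern Windows).
resize = f"Resize-Partition -DriveLetter {DRIVE} -Size {max_size}"
subprocess.run(["powershell", "-NoProfile", "-Command", resize], check=True)

print(f"Drive {DRIVE}: extended to {max_size} bytes")
```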

For these reasons (and I am sure many, many more), SANs have become a staple in a lot of enterprise networks. But let me talk about some pain points, particularly in older SAN implementations, and particularly around iSCSI and older networks.

You probably already guessed it… the primary pain point is network speed.

Many millennia ago (at least in the IT accounting of time), local drive speeds were much slower than they are today. Applications and databases were also much simpler. SCSI and SAS interfaces were slower, but more to the point, spinning disk was the only option and it wasn’t nearly as fast as it is today. So max local storage read/write speeds might equate to something like 100 MB/sec depending on the type of workload. Faster speeds were possible by implementing RAID and getting closer to saturating the storage bus, so you might see high-speed servers pushing 200 – 300 MB/sec (roughly 2 – 3 Gbit/sec).

These are all rough numbers and wide open to argument 🙂 – just bear with me and follow the logic, because in this article the devil is in no way in the details; I am not getting that deep into it.

So along comes the iSCSI SAN and it looks great. You have a “cutting edge” 1 Gbit network which can support roughly 100 MB/s transfer rates. Then you throw in technologies like MPIO and things get even sweeter: a single server can have multiple 1 Gbit connections, giving it multiple ~100 MB/s “lanes” to/from the storage array. So your SAN-backed volumes all operate at what-were-then decent speeds and you get some or all of the benefits listed above.
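
For anyone who wants to sanity-check the back-of-the-envelope numbers above, here is a tiny sketch of the conversion. The ~80% efficiency figure is my assumption to account for TCP/iSCSI protocol overhead, not a measured value.

```python
# Rough line-rate math behind the "1 Gbit is roughly 100 MB/s" rule of thumb.
def usable_mb_per_sec(link_gbit: float, efficiency: float = 0.8) -> float:
    """Convert a link speed in Gbit/s into an approximate usable MB/s."""
    return link_gbit * 1000 / 8 * efficiency  # Gbit -> Mbit -> MB, minus protocol overhead

for paths in (1, 2, 4):          # number of MPIO paths to the array
    for link_gbit in (1, 10):    # per-path link speed
        total = paths * usable_mb_per_sec(link_gbit)
        print(f"{paths} x {link_gbit} Gbit/s -> ~{total:.0f} MB/s aggregate")
```

Whether a single LUN actually sees the full aggregate depends on the MPIO load-balancing policy, but the per-path ceiling of roughly 100 MB/s is the part that matters here.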

Fast forward 7 years. You have extended your existing SAN by adding new arrays to increase storage and redundancy and perhaps improve drive access times… but you have not upgraded the underlying network. Applications have gotten more complex, as have the databases that support them.

Suffice it to say, it has been my contention for a while that 1 Gbit speeds are no longer sufficient for a storage interconnect. Case in point – I recently migrated a non-crucial internal application database from a LUN on a 1 Gbit iSCSI SAN onto a local RAID-1 SAS drive. Spinning disk, nothing special. The reason for the move? The local storage was available and the application performance was abysmal. After making the move, the performance increase was drastic.

The application in question was particularly database-heavy, hence the night-and-day difference… I think. It was a lesson learned, and perhaps a bias confirmed (so read this article with a healthy dose of skepticism, as I didn’t do any formal testing).
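
If you want to turn that kind of anecdote into an actual number, even a crude sequential-write timing like the sketch below (the paths are hypothetical) gives you something to compare; a purpose-built tool such as diskspd or fio is the better choice for anything you plan to act on.

```python
import os
import time

def sequential_write_mb_s(path: str, total_mb: int = 512, block_kb: int = 64) -> float:
    """Write total_mb of data in block_kb chunks and return the observed MB/s."""
    block = os.urandom(block_kb * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range((total_mb * 1024) // block_kb):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force the data to disk so the timing isn't just the write cache
    elapsed = time.perf_counter() - start
    os.remove(path)
    return total_mb / elapsed

# Hypothetical mount points: E: is the SAN-backed LUN, D: is the local RAID-1 SAS mirror.
print(f"SAN LUN:   {sequential_write_mb_s('E:/bench.tmp'):.0f} MB/s")
print(f"Local SAS: {sequential_write_mb_s('D:/bench.tmp'):.0f} MB/s")
```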

New SAN technologies have also developed, and slick-looking products like “all-flash arrays” are now available. But my overall opinion is that SANs in their current implementation are largely going away anyhow, making room for (or perhaps giving way to) new technologies like hyper-converged clusters, commodity-based mix-and-match network-distributed storage like Microsoft SoFS (Scale-Out File Server), and oddball proprietary hybrid solutions like Dell’s Compellent, which offer a lot of versatility (including iSCSI functionality) for the same amount of money. As a side note, I think most CTOs (and myself, when I was first demo’d the product) have an easier time wrapping their heads around something like Compellent, as it “looks and feels” a lot like a SAN, sans (couldn’t help myself) some of the limitations of most SANs.

The other takeaway from all of this… if you are going to rely on any form of network storage, don’t expect Millennium Falcon speeds at Millennium Falcon prices. “She may not look like much, but she’s got it where it counts, kid” doesn’t apply to 1 Gbit networking in any form, even with a “lot of special modifications” like MPIO, link aggregation, and NIC teaming. I expect the same will be said of 10 Gbit networking in another 3 – 5 years, when traditional SSDs are considered “slow” and technologies like Intel’s “3D XPoint” and NVMe become more standard. We will see TOR (top-of-rack) storage appliances linked up to 100 Gbit ports and built from arrays of storage add-in cards, and locally, even more exotic stuff like HBM for on-processor persistent storage.
