When your business hinges on remote workers and remote offices, secure connections, and lots of data manipulation… how do you deal with some of your folks being extra remote? Granted, the internet in 2020 is very different from the internet in, say, 2008, and the world has grown ever smaller as a result… but distance, and all of the congested, intervening network hops that come with it, are still a reality. Particularly for remote workers living on other continents who have to interface regularly with systems in the United States as part of their job.

The two big headaches for remote workers in other countries connecting to offices in the US are latency and bandwidth. In the past there were only a handful of solutions, most involving long-term contracts with a telecom and lengthy, complicated setups. I would argue that MPLS still falls squarely in this camp. SD-WAN has certainly improved on all of the above, but it’s still enough of a headache that it typically involves contract terms and conversations with sales “engineers.”

I would like to propose something different using Azure.



A Bit of Groundwork – Let’s Talk About VNET Peering

Stolen from Microsoft documentation, VNET peering in a nutshell “…enables you to seamlessly connect networks in Azure Virtual Network. The virtual networks appear as one for connectivity purposes. The traffic between virtual machines uses the Microsoft backbone infrastructure.” Emphasis mine… traffic between peered Azure VNETs uses the Microsoft Azure backbone: that lovely, private, low-latency, and fairly dependable network that Microsoft has built for their cloud.





Bringing Geographically Dispersed Places Close Together Using Peering

  • Azure VNETs are tied to an Azure region – create a VNET in Azure EastUS2 and it probably sits in a datacenter in Virginia.
  • Create a VNET in Azure Germany West Central and it sits in a datacenter in Frankfurt. (A minimal sketch of creating both VNETs follows this list.)

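For concreteness, here is a minimal sketch of what creating those two VNETs might look like with the Azure Python SDK (azure-identity plus a recent azure-mgmt-network with the begin_* long-running operations). The subscription ID, resource group, VNET names, and address spaces are placeholders I made up for illustration, not values from any real environment.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient

    SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
    RESOURCE_GROUP = "rg-global-network-demo"    # hypothetical, pre-existing resource group

    network_client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    # One VNET in EastUS2 (Virginia) and one in Germany West Central (Frankfurt),
    # with non-overlapping address spaces (a requirement for the peering that comes later).
    vnets = {
        "vnet-eastus2": {"location": "eastus2", "prefix": "10.10.0.0/16"},
        "vnet-germanywestcentral": {"location": "germanywestcentral", "prefix": "10.20.0.0/16"},
    }

    for name, cfg in vnets.items():
        poller = network_client.virtual_networks.begin_create_or_update(
            RESOURCE_GROUP,
            name,
            {
                "location": cfg["location"],
                "address_space": {"address_prefixes": [cfg["prefix"]]},
            },
        )
        print(f"Created {poller.result().name} in {cfg['location']}")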


At this stage, we now have two VNETs physically located on different continents. My bright idea was to peer them; per the definition of peering, all the traffic crossing from resources in one VNET to resources in the other VNET would then travel over the lower-latency, higher-bandwidth, private Azure global backbone. Hopping the distance between continents over the WAN is typically painful; this would theoretically alleviate that pain. When I initially had this thought (circa 2017), and I am sure many other people did as well, Microsoft didn’t support it. Now they do, and it is called (unsurprisingly) Azure Global VNET Peering.
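To make the idea concrete, here is a minimal sketch of creating the Global VNET Peering itself, continuing with the same hypothetical names and Python SDK setup as the earlier sketch. Peering is created in both directions: each VNET gets a peering object pointing at the other VNET’s resource ID, and Azure handles the cross-region part over its backbone.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient

    SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
    RESOURCE_GROUP = "rg-global-network-demo"    # same hypothetical resource group as before

    network_client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    def vnet_id(vnet_name: str) -> str:
        # Build the full Azure resource ID for a VNET in our resource group.
        return (f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"
                f"/providers/Microsoft.Network/virtualNetworks/{vnet_name}")

    # Two peering resources, one on each VNET, each pointing at the other.
    pairs = [
        ("vnet-eastus2", "vnet-germanywestcentral"),
        ("vnet-germanywestcentral", "vnet-eastus2"),
    ]

    for local, remote in pairs:
        network_client.virtual_network_peerings.begin_create_or_update(
            RESOURCE_GROUP,
            local,
            f"peer-{local}-to-{remote}",
            {
                "remote_virtual_network": {"id": vnet_id(remote)},
                "allow_virtual_network_access": True,   # VMs can reach each other's private IPs
                "allow_forwarded_traffic": True,        # needed later for VPN-forwarded traffic
            },
        ).result()
        print(f"Peering created: {local} -> {remote}")

Once both directions show a Connected peering state, a VM in one VNET can reach a VM in the other by its private IP.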


Using Global VNET Peering to Support Truly Remote Users

In my example, we have two geographically distant VNETs, one in Germany and one on the US East Coast. We have set up Azure Global VNET Peering, and Azure now treats those two networks as a single network for most purposes. A virtual machine in Germany can talk to a virtual machine in the USA as if they were sitting on the same local network, and the traffic between them crosses a reliable, private, low-latency link. Now, how does this help my German colleague access systems in a physical office in Pennsylvania? It doesn’t yet. We need to add a few additional items.

  • On the EastUS2 VNET we will create an Azure VPN gateway and set up a site-to-site tunnel between that gateway and an edge networking device in the office in Pennsylvania. (A sketch of this gateway setup follows this list.) This site-to-site VPN goes over the WAN; however, the WAN hop is geographically close (relatively speaking) and latency shouldn’t be too bad.
  • In the Germany West Central VNET, I am going to deploy some form of client-to-site VPN service. There are quite a few to choose from; for our example we will use a VM running OpenVPN. Our friend in Germany can now establish a point-to-site VPN connection between the workstation at her house and the OpenVPN server in our Germany West Central VNET. It’s obvious, but I will state it anyhow: a connection from her home to Azure Germany West Central has to go over the WAN. That hop, however, is geographically close and latency shouldn’t be too bad.
  • We then set up appropriate routes, firewall policies, etc. to allow her VPN-connected workstation to talk to my office in Pennsylvania.
  • Presto-Chango! No long-term contracts, no hassle with sales engineers, global network accomplished.
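Here is a rough sketch of the site-to-site piece in the EastUS2 VNET, again continuing with the same hypothetical names and Python SDK setup. The office edge device’s public IP, the on-premises address space, and the pre-shared key are placeholders; the payload shapes follow the VPN gateway REST API and may need tweaking for your SDK version. Provisioning a VPN gateway can take 30 to 45 minutes, so the blocking .result() calls sit for a while.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.network import NetworkManagementClient

    SUBSCRIPTION_ID = "<your-subscription-id>"   # placeholder
    RESOURCE_GROUP = "rg-global-network-demo"    # same hypothetical resource group as before

    network_client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

    # 1. A dedicated GatewaySubnet inside the EastUS2 VNET (Azure requires this exact name).
    network_client.subnets.begin_create_or_update(
        RESOURCE_GROUP, "vnet-eastus2", "GatewaySubnet",
        {"address_prefix": "10.10.255.0/27"},
    ).result()
    gw_subnet = network_client.subnets.get(RESOURCE_GROUP, "vnet-eastus2", "GatewaySubnet")

    # 2. A public IP for the VPN gateway.
    gw_pip = network_client.public_ip_addresses.begin_create_or_update(
        RESOURCE_GROUP, "pip-vpngw-eastus2",
        {
            "location": "eastus2",
            "sku": {"name": "Standard"},
            "public_ip_allocation_method": "Static",
        },
    ).result()

    # 3. The Azure VPN gateway itself (this is the long-running step).
    vpn_gw = network_client.virtual_network_gateways.begin_create_or_update(
        RESOURCE_GROUP, "vpngw-eastus2",
        {
            "location": "eastus2",
            "gateway_type": "Vpn",
            "vpn_type": "RouteBased",
            "sku": {"name": "VpnGw1", "tier": "VpnGw1"},
            "ip_configurations": [{
                "name": "gwipconfig",
                "subnet": {"id": gw_subnet.id},
                "public_ip_address": {"id": gw_pip.id},
            }],
        },
    ).result()

    # 4. A local network gateway describing the Pennsylvania office: the public IP of its
    #    edge device and the on-premises address space behind it (both made up here).
    office = network_client.local_network_gateways.begin_create_or_update(
        RESOURCE_GROUP, "lgw-pa-office",
        {
            "location": "eastus2",
            "gateway_ip_address": "203.0.113.10",
            "local_network_address_space": {"address_prefixes": ["192.168.0.0/16"]},
        },
    ).result()

    # 5. The IPsec site-to-site connection tying the two together. The shared key must
    #    match whatever is configured on the office edge device.
    network_client.virtual_network_gateway_connections.begin_create_or_update(
        RESOURCE_GROUP, "cn-eastus2-to-pa-office",
        {
            "location": "eastus2",
            "connection_type": "IPsec",
            "virtual_network_gateway1": vpn_gw,
            "local_network_gateway2": office,
            "shared_key": "<pre-shared-key>",
        },
    ).result()

The “appropriate routes” bullet then amounts to letting the Germany side know that office-bound traffic should flow through this gateway. One reasonable approach is to enable allow_gateway_transit on the EastUS2 side of the peering and use_remote_gateways on the Germany West Central side (or add user-defined routes pointing the office prefixes at your VPN VM), have the OpenVPN server push routes for the Azure address spaces and the office prefixes to its clients, and add matching routes and firewall rules on the Pennsylvania edge device for the Azure and VPN client ranges.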



WHY?
This solution is perhaps not quite as nice as MPLS or a dedicated link (assuming you have physical corporate footprints in both locations to begin with, to facilitate using either of those options). I am honestly not sure how it compares to SD-WAN as far as performance goes… I will say this though: it works, and if you know what you are doing and have a credit card handy (or just free Azure credits because you are testing things out), you can set it up on your own in under two hours.

This became particularly vital when colleagues at my company living on other continents were suddenly forced to start working from home due to a global pandemic. WAN latency between a commercial office in the US and a VNET in a nearby US Azure region isn’t too shabby. WAN latency between a user at home in Europe and a VNET in a European Azure region also isn’t all that bad. Either is a heck of a lot better than, say, WAN latency from anywhere in Europe to anywhere in the US.

This solution isn’t free. It involves some monthly IaaS costs for VPN infrastructure at your entry point (small VPN solutions typically cost around $75/month for compute and storage in my experience), egress traffic charges for data leaving Azure, and possibly charges for a VPN gateway if you are setting up a site-to-site connection on one or both ends. Comparatively speaking, versus MPLS or SD-WAN it is very cheap. Furthermore, if you don’t need it three months later, turn it off… this is the cloud, it’s a utility, and there is no need for long-term contracts.

I hope this short post has spurred some creative minds. If you are like me, you find this all fascinating and fun, and in this case it has very practical, real-world implications. Have fun building!
