After beating my head against the wall for many hours, I have finally figured out how to get Proxmox working quite well on my cheap KimSufi server… with IPv6.

The goal of this article is to document (with varying levels of detail) how to go from a fresh KimSufi, OVH, or SoYouStart server to a running standalone Proxmox node with both public IPv6 networking and an internal VM/container network.

Before we get going, here are my server specs if you are looking to get a comparable system…


It is the equivalent of a KS-4C hosted in France:

CPU: Intel(R) Core(TM) i5-3570S CPU @ 3.10GHz – Quad-Core – No HT
RAM: 16 GB
Bandwidth: 100 Mbit – No Transfer Limit
Address Space: 1x IPv4 and an IPv6 /128 (see below)
Storage: 1 x 2 TB SATA

You can definitely run Proxmox on a lower-spec system but I wouldn’t recommend anything less than a Core i3 with 8 GB of RAM.

Why is IPv6 such a big deal? Simply put, KimSufi tells you that you have a /128 (one address) but in fact you have an entire /64 subnet available to you. That is equal to 18,446,744,073,709,551,616 addresses… With a server as small as mine, I will be happy if I use more than 5… Having the extra IP space available means we can host multiple VMs or containers and they can each have one or more public IP addresses.

Why Proxmox? Simply put, it lets you (somewhat) easily manage Linux Containers (LXC) side by side with full-fledged VMs running on KVM. That means you can virtualize anything, including Windows, and also opt for containers when your guest machine is going to be Linux, for maximum utilization of your hardware and the best guest performance. All that from one interface.

KimSufi provides Proxmox 4.1 as one of their preinstall options right out of the gate, and it is almost completely up-to-date. From here on out I am going to be discussing how to get going with a fresh install of Proxmox from KimSufi. Let’s go!


Getting Started – Server Housekeeping

1. KimSufi sets up SSH for you from the outset, which is a blessing. We need to make some configuration changes immediately, so SSH into your server using PuTTY and log in as root. KimSufi should have provided you credentials via email. Proxmox 4 runs on Debian Jessie so things are a little bit different from Ubuntu Server, but not much.

2. Once logged in, the first thing we need to do is update our repositories (assuming you are using Proxmox for free and not paying for a support license). I also wanted to change my root password and do some other housekeeping.

su
passwd root # if you want to change your root password
apt-get install vim # optional; vim is my preferred editor and I will be using it throughout
mv /etc/apt/sources.list.d/pve-enterprise.list /etc/apt/sources.list.d/pve-enterprise.list.bak
echo "deb http://download.proxmox.com/debian jessie pve-no-subscription" > /etc/apt/sources.list.d/pve-public-repo.list
apt-get update
apt-get dist-upgrade
pveam update

What this did is make sure that when we run apt-get update the system doesn’t try to pull from the enterprise repos, which are only available if your box has a paid subscription. It is perfectly legal to use the box however you want without a subscription; the only difference is that the community repos are advertised as not being as good as (stable? up-to-date?) the enterprise ones. Additionally, above, I also ran apt-get commands to update the repo info and upgrade the system. If you don’t run a dist-upgrade there is a bug present that won’t allow you to create containers.

Finally, the “pveam update” command just updates your list of downloadable container templates with the latest from Proxmox. This occurs automatically once a day but the command can be used to manually trigger it at any time.
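
If you prefer the shell, the same template catalog can be browsed and fetched from the command line. A quick sketch (the exact template filename below is an assumption; pick a real one from the list):

pveam available # list every template Proxmox knows about
pveam download local ubuntu-14.04-standard_14.04-1_amd64.tar.gz # hypothetical filename; substitute one from the list above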


Install OpenVswitch
Next, one of the things I find highly useful is internal networking between guest systems. The easiest way to do this (see the bonus round section at the end) is with a package called openvswitch. It is fully supported by Proxmox but you need to install it yourself. If you want to do this, I recommend installing it now…

apt-get install openvswitch-switch
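
If you want to sanity-check the install, query the switch daemon; on a fresh install it should print little more than a database UUID and the package version:

ovs-vsctl show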



Login to your Proxmox Panel
Time to get into the Proxmox GUI. Open a browser and go to:

https://123.456.789.10:8006/

Substitute your public IP in place of the fake one provided above. You can log in with the user “root” and whatever password KimSufi provided, or whatever you set it to if you changed it earlier.

If you are familiar at all with just about any other virtualization platform, getting your sea legs in Proxmox is pretty easy. Proxmox organizes your install into three levels: the Datacenter (top level), Nodes (second level, the server(s) that run VMs and containers), and the guest systems (VMs and containers) running on each node (third level). You can navigate between the tiers using the area on the left. For this tutorial we are dealing with only a single-node setup and I will have us create a single container.


Rant About Lack of Documentation
Proxmox documentation on the GUI is sparse, documentation on networking and the firewall I found to be confusing and poor, and documentation on using IPv6 is the worst of all. A large part of why I am writing this article is to condense all of my research regarding the use of IPv6 with Proxmox into one location. It was a LOT of research work that I don’t want to repeat in the future. To be fair, I don’t take issue with the lack of documentation; after all, it IS an enterprise-grade product that is completely and totally FREE.


Getting Started with Proxmox
The first thing you probably want to do is create a container to play with. However, before we get ahead of ourselves we need to do all the prep work. That starts with creating a “pool” for the container to sit in…

Create a pool…
1. On the left, click “datacenter” at the top.
2. Click the “Pools” tab
3. Click “Create”
4. Enter a name (ex. defaultpool)
5. Click “create”

Create a storage location for container backups…
1. SSH into your server again then:

su
mkdir /pve
mkdir /pve/backups

2. In the GUI, Go to Datacenter –> Storage Tab –> Add –> Directory
3. ID: Backups, Directory: /pve/backups, Content: select only “VZdump backup file” and deselect everything else, increase “max backups” to like 10 or more depending on how many backups you want to store for each VM/container –> click “add”
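
For reference, after you click “add” the entry Proxmox writes to /etc/pve/storage.cfg should look roughly like this (a sketch based on my values above; “maxfiles” reflects whatever “max backups” number you chose):

dir: Backups
        path /pve/backups
        content backup
        maxfiles 10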

Download a container template
In the GUI, on the left, select “local” (which is the default storage object already created on your node) –> content tab –> Templates button –> select a template (I went with “Ubuntu 14.04-standard”) –> click Download

Container templates are the base install template for a new container. You need at least one to deploy a new container.


Configure Networking – Round 1 – About Proxmox Networking
Networking configuration and firewall configuration are DANGEROUS if you are working on a remote server with no direct KVM (Keyboard Video Mouse) way to console into the box. Make the wrong move and you will lose all remote connectivity to the server and have to possibly start all over again after reinstalling the OS (KimSufi offers a rescue boot that could save your tail but you don’t want to go down that route if you can avoid it…). As a somewhat humorous and perhaps sad aside, I had to rebuild my initial box like four times on another host. At least one of those times was related to making a mistake either with the network or the firewall…

Okay, that disclaimer is out of the way…

About Proxmox Networking and Firewall

To state it simply, I found neither the network nor the firewall to be simple. They actually aren’t bad once you figure out the idiosyncrasies but figuring those out is well… bad (see rant about lack of documentation above). I am neither a network admin nor a Cisco guy but I am also not a complete ignoramus as I have worked with enterprise firewalls and switches for over 5 years now. In short I have had to do configuration on all manner of firewalls, security appliances, switches, etc. and I still found this to be a bit of a challenge. On the bright side I sharpened my *nix networking skills substantially.

Anyhow, what you should KNOW is that Proxmox IS NOW your firewall. You will not be configuring a firewall in any of your VMs or containers and doing so within a container could have unexpected consequences. Proxmox is ALSO now your NETWORK configuration tool. Aside from the occasional small change in a guest system, you should not be modifying a container’s network configuration directly.

I will only be talking about setting up networking in this article and will leave the firewall and some other security bits and pieces for the next write-up.

Okay, Proxmox networking… When you install Proxmox it takes over your physical NIC(s) and uses them to create what I will call VNICs. If you have worked in Hyper-V and/or many other virtualization platforms you are familiar with this concept. The default result is a single “bridged” VNIC called either vmbr0 or vmbr1.

Bridged networking keeps things simple in that when you create a container and connect it to your bridged interface it can have an IP address in the same range as the Proxmox host server. On a public server that means you can assign a public IP in the same block as your physical server. This makes life easy because we don’t have to bother with NAT, routing, etc. This makes life difficult because cheap hosts (like KimSufi) only give you one IP address to work with and that is already taken by your host machine.

We are going to start by configuring vmbr0 and because this is cheap hosting and we only have one IPv4 address, we are going to be adding that large IPv6 range for our guest machines to use.

In the Proxmox Web Panel:

1. On the left click the icon for your Proxmox server node (ex. NS32958) –> Network Tab
2. You should see at least two network connections listed, eth0 and vmbr0 (or vmbr1, I can’t remember what the system defaults to). If your system has multiple physical NICs you will see additional ethX NICs listed. I don’t think Proxmox creates additional vmbrX interfaces for each but I may be wrong.
3. Double-Click on vmbr0
4. You should see an IPv4 address assigned already (your server’s public IP) along with a subnet mask and gateway. This is all for IPv4. We want to add an IPv6 address to this interface as well.


Configure Networking – Round 2 – Add an IPv6 Address

You should still have the dialog box open for vmbr0 and the goal is to enter the correct info into IPv6 Address, Prefix Length, and Gateway (the empty one at the bottom).

If you have a server with KimSufi, login to your KimSufi Dashboard and click “IP”. On this page you should see an IPv6 address that has been assigned to your server. This is actually the first IP in a very large range, all of which you have access to use. For my example, let’s say my IPv6 address is 2001:41d0:1:4462::1. That goes in the IPv6 Address field for vmbr0 in your Proxmox Panel.

Next, the Prefix Length for KimSufi is “64”.

Finally, you need to figure out your gateway, which is very easy (a shell sketch of the derivation follows the steps below).

1. Take your IPv6 address – 2001:41d0:1:4462::1
2. Strip off the ::1 at the end along with the two characters immediately before it, which gives you this – 2001:41d0:1:44
3. Add FF:FF:FF:FF:FF (that is FIVE sets of FF) which gives you this – 2001:41d0:1:44FF:FF:FF:FF:FF
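
If you want to double-check the derivation in a shell, here is a minimal sketch (the variable names are mine, and it assumes your assigned address ends in ::1 like the example above):

ADDR="2001:41d0:1:4462::1"     # your assigned IPv6 address
PREFIX="${ADDR%??::1}"         # drop "::1" plus the two hex characters before it -> 2001:41d0:1:44
echo "${PREFIX}FF:FF:FF:FF:FF" # prints the gateway: 2001:41d0:1:44FF:FF:FF:FF:FF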

Enter your gateway address into the bottom empty Gateway field for vmbr0 in Proxmox. Click “OK” which will close the dialog box. You should see your changes reflected on the network tab for the node. However those changes aren’t active until after a reboot. With the node still selected on the left, click “restart” in the upper-right hand corner to reboot the server.
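
If you are curious what Proxmox actually wrote, the IPv6 stanza in /etc/network/interfaces on the node should end up looking roughly like this (values from my example; yours will differ):

iface vmbr0 inet6 static
        address 2001:41d0:1:4462::1
        netmask 64
        gateway 2001:41d0:1:44FF:FF:FF:FF:FF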

Configure Networking – Round 3 – Add a Permanent Network Route to Your IPv6 Gateway
Okay, we aren’t quite done yet. Proxmox did most of the work; HOWEVER, the gateway IP for KimSufi is in a different subnet from your IPv6 address. That means that when packets try to leave your system they head for the gateway but don’t know how to get there. So we need to tell them how, by adding an IPv6 route on your server.

1. SSH into your Proxmox host server again.
2. Now that you are in, we are going to write a quick script that will run every time your vmbr0 network interface becomes active.

su
vim /etc/network/if-up.d/ipv6-gateway

This creates a new blank file to put our script in and opens it in VIM for editing. Here is the script to put in the file (you will need to change the IPv6 gateway address to the one you figured out above):

#!/bin/bash
# Only act when vmbr0 comes up; exit for any other interface
[ "$IFACE" = "vmbr0" ] || exit 0
# Add an on-link route to the gateway (it sits outside our /64),
# then point the default route at it
ip -6 route add 2001:41d0:1:44FF:FF:FF:FF:FF dev vmbr0
ip -6 route add default via 2001:41d0:1:44FF:FF:FF:FF:FF

UPDATE 3.17.2016: I started noticing that my containers using IPv6 would suddenly lose IPv6 connectivity after a few hours. After some digging it became apparent that the default route to the gateway just disappears from the IPv6 routing table. After manually running the following command inside each container, "ip -6 r a default via 2001:41d0:1:44FF:FF:FF:FF:FF", connectivity would be restored and it would stay… I am not sure that will survive a reboot of the container though, so I would recommend placing the script above in your containers as well, modifying “vmbr0” to “eth0”. That way, when you do reboot, the system should run those route-add commands from the start. I am not sure without some more testing whether this is a good fix or not. The other option might be to set a cron job to just run the script every hour.
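
If you go the cron route, a single entry in root’s crontab (crontab -e) along these lines would re-assert the default route hourly; the "2>/dev/null || true" keeps cron quiet when the route already exists. A sketch only, substitute your own gateway:

0 * * * * ip -6 route add default via 2001:41d0:1:44FF:FF:FF:FF:FF 2>/dev/null || true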

3. Save and close the file. Then do the following to make it executable and finally restart the network stack:

chmod +x /etc/network/if-up.d/ipv6-gateway
/etc/init.d/networking restart

NOTE: You can use ‘ip -6 route show’ to see if the routes were added to the IPv6 routing table.

4. Now you can try to ping your server from your home system to see if it responds (use ping6 on Linux if plain ping does not accept IPv6 addresses). If so, you are golden. ex. ping 2001:41d0:1:4462::1


Configure Networking – Round 4 – Deploy a Container and Configure IPv6 networking

Almost through… By this point, you should have created a resource pool, downloaded a template to work from, and the IPv6 part of your network is now primed for action…

1. In the Proxmox Web Panel, click on “Create CT” in the upper-right-hand corner.
2. Give your container a hostname (ex. mycoolsite.com)
3. Select the resource pool you created earlier (ex. defaultpool)
4. Type in a root password for the container, twice.
5. Click next.
6. Select the template from the list (ex. Ubuntu-14.04-standard)
7. Click next.
8. Give it some disk space (20 GB?)
9. Tell it how many cores worth of CPU it gets. The CPU Units field is fine at default; it is completely relative to every other VM/container you create. A higher CPU Units value means the VM/container has higher priority than other VMs/containers with lower CPU Units values.
10. Click next.
11. Assign it some memory and at least the same amount of SWAP space as memory (2 GB?)
12. Click next
13. Finally, the networking tab. I am going to break this into a separate list of steps…

Networking the Container:
1. You can leave the ID as net0,
2. The name eth0 can also be left alone (this is the name of the interface inside of the container/vm)
3. Make sure Bridge is set to vmbr0
4. Leave VLAN TAG empty
5. Leave the firewall unticked (for this article at least)
6. Okay, IPv4 – I am working off the assumption you want to stick with IPv6, so leave IPv4 set to static and the fields for it blank.
7. Leave IPv6 set to static
8. For IPv6/CIDR: you need to use the NEXT IPv6 address in your range. Going off of our IPv6 above, that means I would fill in 2001:41d0:1:4462::2/64 (I could also use, 2001:41d0:1:4462::3/64 or 2001:41d0:1:4462::4/64 etc.)
9. For Gateway (IPv6): you will use the same gateway you figured out above, for our example that is: 2001:41d0:1:44FF:FF:FF:FF:FF
10. Click Next.
11. For DNS domain, you can use your server name if you want, or something random. (ex. mycoolsite.com)
12. For DNS servers we can use Google’s Public IPv6 DNS server
12a. DNS Server 1: 2001:4860:4860::8888
12b. DNS Server 2: 2001:4860:4860::8844
12c. You can leave the third entry blank, then hit next
13. Confirm everything and then hit “finish” and wait while your container builds. Close the dialog box once the task completes.
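
As an aside, everything the wizard just did can also be done from the node’s shell with Proxmox’s pct tool. A rough sketch is below; the VMID, template filename, and exact option syntax are assumptions that vary between Proxmox 4.x releases, so check "pct help create" on your node before relying on it:

pct create 100 local:vztmpl/ubuntu-14.04-standard_14.04-1_amd64.tar.gz \
  --hostname mycoolsite.com \
  --pool defaultpool \
  --memory 2048 --swap 2048 \
  --net0 name=eth0,bridge=vmbr0,ip6=2001:41d0:1:4462::2/64,gw6=2001:41d0:1:44FF:FF:FF:FF:FF \
  --nameserver 2001:4860:4860::8888 \
  --searchdomain mycoolsite.com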

Start the container…
1. In the Proxmox webgui, on the left, select the new container that should have popped up once the task completed.
2. In the upper-right-hand corner click “start” to start the container and wait a minute while it fires up.
3. From your home desktop computer, try pinging the IP of your new container and see if it responds. ex. ping 2001:41d0:1:4462::2

If you get a response you have accomplished much!


Configure Networking – Bonus Round – Set Up an Internal VM Network with OpenVswitch

This is the bonus round. While not required, it is very nice to have an internal private network if you have multiple guest systems, especially if you want to do interesting things like running MySQL in one container and Apache in another. Here is how to quickly set up a private network…

Create the network:
1. In the Proxmox WebGUI, Select your node on the left –> Go to the Network Tab –> Create –> OVS Bridge
2. Name: vmbr1, IP address: 192.168.1.100, Subnet Mask: 255.255.255.0
3. Make sure “AutoStart” is ticked and leave everything else blank.
4. Click “create” – the changes should be reflected, and then you need to click “restart” to restart the node/server.
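
For the curious, the OVS bridge should land in /etc/network/interfaces as something like the stanza below (a sketch based on the values above):

allow-ovs vmbr1
iface vmbr1 inet static
        ovs_type OVSBridge
        address 192.168.1.100
        netmask 255.255.255.0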

Connect a container:
1. After the reboot is finished, in the Proxmox WebGUI, select one of your containers on the left.
2. Click “Shutdown” to turn the container off. Wait for it to finish shutting down.
3. With the container still selected, click the “network” tab –> Add
4. ID: net1, Name: eth1, Bridge: vmbr1, IPv4/CIDR: 192.168.1.10/24, Gateway (IPv4): LEAVE BLANK
5. Including the IPv4 gateway, leave everything else blank.
6. Start your container.

You can follow the above steps on another container and just increment the IPv4/CIDR address to 192.168.1.11/24 and now these systems can talk to each other on this private network!
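
A quick sanity check for the private network, run from inside the container at 192.168.1.10:

ping -c 3 192.168.1.11 # should get replies from the second container over vmbr1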


Conclusion:

Hopefully this is a huge help to everyone aspiring to use Proxmox and figure out the networking piece, particularly the IPv6 bit of it.

Cheers!

References:
https://known.phyks.me/2014/getting-ipv6-to-work-with-a-kimsufi-server
http://askubuntu.com/questions/339973/set-up-permanent-routing-ubuntu-13-04
http://robert.penz.name/582/ipv6-openvz-ves-and-debianproxmox-as-host-system/


37 comments

  1. Pingback: Secure Proxmox Install – Sudo, Firewall with IPv6, and more – How to Configure from Start to Finish « KiloRoot

  2. Gerardo

    Great tutorial!!

    I did the entire tutorial but my LXC containers cannot connect to the internet and cannot be pinged from outside. Containers can ping each other and the host machine, and the host machine can ping containers.

    The host machine can access the internet via IPv6.

    If I run ip -6 neigh on a container I get a FAILED for the IPv6 gateway. I don’t know what’s wrong.

    Help will be fully appreciated.

    Thanks!

    • nbeam

      Are you using any kind of firewall? Are you using IPv4 in addition to IPv6 on the same interface for your container?

      • Gerardo

        Thank you for answering. I have not set up any kind of firewall. I started from a fresh installation of Proxmox 4 from the KimSufi panel and then followed your tutorial step by step. My interface vmbr0 has both IPv4 and IPv6; the containers’ interfaces only have IPv6.

        This is what I get if I run ip -6 neigh in a container

        fe80::225:90ff:fe31:961e dev eth0 lladdr 00:25:90:31:96:1e REACHABLE
        fe80::205:73ff:fea0:1 dev eth0 lladdr 00:05:73:a0:00:01 router STALE
        2001:41d0:1:feXX::1 dev eth0 lladdr 00:25:90:31:96:1e REACHABLE
        fe80::2ff:ffff:feff:fffe dev eth0 FAILED
        2001:41d0:1:feff:ff:ff:ff:ff dev eth0 FAILED

        The host node is reachable but the gateway failed.

        This is what I get if I run ip -6 neigh on the host

        2001:41d0:1:feXX::2 dev vmbr0 lladdr 32:62:63:37:65:35 STALE
        2001:41d0:1:feff:ff:ff:ff:ff dev vmbr0 lladdr 00:05:73:a0:00:01 router STALE
        fe80::205:73ff:fea0:1 dev vmbr0 lladdr 00:05:73:a0:00:01 router STALE
        fe80::2ff:ffff:feff:fffd dev vmbr0 lladdr 00:ff:ff:ff:ff:fd router STALE
        fe80::2ff:ffff:feff:fffe dev vmbr0 lladdr 00:ff:ff:ff:ff:fe router REACHABLE
        fe80::3062:63ff:fe37:6535 dev vmbr0 lladdr 32:62:63:37:65:35 STALE

        And for ip -6 route

        2001:41d0:1:feXX::/64 dev vmbr0 proto kernel metric 256 pref medium
        2001:41d0:1:feff:ff:ff:ff:ff dev vmbr0 metric 1024 pref medium
        fe80::/64 dev vmbr1 proto kernel metric 256 pref medium
        fe80::/64 dev vmbr0 proto kernel metric 256 pref medium
        fe80::/64 dev veth101i0 proto kernel metric 256 pref medium
        default via 2001:41d0:1:feff:ff:ff:ff:ff dev vmbr0 metric 1024 pref medium

        All seems correct to me. If I do a tcpdump it seems that neighbor discovery packets from the guest are not correctly handled

        10:55:41.011224 IP6 2001:41d0:1:fed2::2 > ff02::1:ffff:ff: ICMP6, neighbor solicitation, who has rbx-1-6k.fr.eu, length 32
        10:55:42.011217 IP6 2001:41d0:1:fed2::2 > ff02::1:ffff:ff: ICMP6, neighbor solicitation, who has rbx-1-6k.fr.eu, length 32
        10:55:43.029507 IP6 2001:41d0:1:fed2::2 > ff02::1:ffff:ff: ICMP6, neighbor solicitation, who has rbx-1-6k.fr.eu, length 32
        10:55:44.027224 IP6 2001:41d0:1:fed2::2 > ff02::1:ffff:ff: ICMP6, neighbor solicitation, who has rbx-1-6k.fr.eu, length 32
        10:55:45.027220 IP6 2001:41d0:1:fed2::2 > ff02::1:ffff:ff: ICMP6, neighbor solicitation, who has rbx-1-6k.fr.eu, length 32
        10:55:46.045466 IP6 2001:41d0:1:fed2::2 > ff02::1:ffff:ff: ICMP6, neighbor solicitation, who has rbx-1-6k.fr.eu, length 32
        10:55:47.043222 IP6 2001:41d0:1:fed2::2 > ff02::1:ffff:ff: ICMP6, neighbor solicitation, who has rbx-1-6k.fr.eu, length 32
        10:55:48.043217 IP6 2001:41d0:1:fed2::2 > ff02::1:ffff:ff: ICMP6, neighbor solicitation, who has rbx-1-6k.fr.eu, length 32
        10:55:48.046663 IP6 fe80::2ff:ffff:feff:fffe > fe80::225:90ff:fe31:961e: ICMP6, neighbor solicitation, who has fe80::225:90ff:fe31:961e, length 32
        10:55:48.046692 IP6 fe80::225:90ff:fe31:961e > fe80::2ff:ffff:feff:fffe: ICMP6, neighbor advertisement, tgt is fe80::225:90ff:fe31:961e, length 24
        10:55:49.061555 IP6 2001:41d0:1:fed2::2 > ff02::1:ffff:ff: ICMP6, neighbor solicitation, who has rbx-1-6k.fr.eu, length 32
        10:55:50.059265 IP6 2001:41d0:1:fed2::2 > ff02::1:ffff:ff: ICMP6, neighbor solicitation, who has rbx-1-6k.fr.eu, length 32

        Help would be fully appreciated. Thank you very much

      • nbeam

        Let me try to run some commands on my box and compare. Here is what I keep running into (since writing this article a few days ago): my node’s IPv6 address pretty much always works. My container’s IPv6, however, will work after a reboot, but after a few hours it eventually stops working. I am not sure why, or if the issue is related to what you are dealing with. The IPv6 on the node seems pretty stable but for some reason the containers have been finicky. I have trouble getting the IPv6 to come up on them sometimes as well. All in all it doesn’t give me warm fuzzy feelings about the prospect of hosting any kind of application or site on IPv6 from a container. I will keep digging though and see what I can come back with. I have a feeling our issues are related.

      • nbeam

        Gerardo, can you get on your node and run “cat /etc/network/interfaces” and post the output here? I would be curious to see what you get. I have two servers with KimSufi both running proxmox and one of them was a royal mess in the network config (from proxmox autoconfigure I think). I cleaned it up significantly. I will know later today if this helped with my containers staying IPv6 networked.

  3. nbeam

    Update: So that didn’t fix my issue either; however, I figured out how to get my IPv6 working again without a restart of the container. For some reason (I am still trying to figure out why) the default route that is set for IPv6 to the gateway disappears after the network/container has been up for a while. I was able to manually/forcibly add the route back on my box. I am wondering, in your case, if it simply never got added in your container. Inside your container, try a variation of this command as root (you will need to modify it with the gateway address you figured out above):

    ip -6 r a 2001:41d0:e:11FF:FF:FF:FF:FF dev eth0

    That might come back and say something like “file exists” – fine. Then run:

    ip -6 r a default via 2001:41d0:e:11FF:FF:FF:FF:FF

    See if the system just takes it or barks out something else. After I ran that command the networking for my container started working again. I would like to figure out why the route disappears on its own over time though.

    • Gerardo

      This is what I got when I ran ip -6 route in my container

      2001:41d0:1:fed2::/64 dev eth0 proto kernel metric 256
      2001:41d0:1:feff:ff:ff:ff:ff dev eth0 metric 1024
      2001:41d0:1:fe00::/56 dev eth0 proto kernel metric 256 expires 2591961sec
      fe80::/64 dev eth0 proto kernel metric 256
      default via 2001:41d0:1:feff:ff:ff:ff:ff dev eth0 metric 1024
      default via fe80::205:73ff:fea0:1 dev eth0 proto ra metric 1024 expires 1761sec hoplimit 64

      I tried the two commands and both of them say “file already exists”. The container can’t see the outside world. It can ping other containers and the host, as I stated before. Nothing changed.

      This is /etc/network/interfaces of the container

      auto lo
      iface lo inet loopback

      auto eth0
      iface eth0 inet6 static
      address 2001:41d0:1:fed2::2
      netmask 64
      post-up ip route add 2001:41d0:1:feFF:FF:FF:FF:FF dev eth0
      post-up ip route add default via 2001:41d0:1:feFF:FF:FF:FF:FF
      pre-down ip route del default via 2001:41d0:1:feFF:FF:FF:FF:FF
      pre-down ip route del 2001:41d0:1:feFF:FF:FF:FF:FF dev eth0

      and then this is the /etc/network/interfaces of the host

      auto lo
      iface lo inet loopback

      iface eth0 inet manual

      iface eth1 inet manual

      auto vmbr1
      iface vmbr1 inet manual
      bridge_ports dummy0
      bridge_stp off
      bridge_fd 0
      post-up /etc/pve/kvm-networking.sh

      auto vmbr0
      iface vmbr0 inet static
      address 91.121.209.210
      netmask 255.255.255.0
      gateway 91.121.209.254
      broadcast 91.121.209.255
      bridge_ports eth0
      bridge_stp off
      bridge_fd 0
      network 91.121.209.0

      iface vmbr0 inet6 static
      address 2001:41d0:1:fed2::1
      netmask 64
      gateway 2001:41d0:1:feFF:FF:FF:FF:FF
      post-up /sbin/ip -f inet6 route add 2001:41d0:1:feff:ff:ff:ff:ff dev vmbr0
      post-up /sbin/ip -f inet6 route add default via 2001:41d0:1:feff:ff:ff:ff:ff
      pre-down /sbin/ip -f inet6 route del default via 2001:41d0:1:feff:ff:ff:ff:ff
      pre-down /sbin/ip -f inet6 route del 2001:41d0:1:feff:ff:ff:ff:ff dev vmbr0

      I added the gateway for iface vmbr0 inet6 because it was not added by the default installation. Everything else was there, including the post-up rules that you say to write in a script in the tutorial. I also tried deleting the rules from interfaces and putting them in scripts, with the same results.

      Thank you for your help

    • Gerardo

      UPDATE: I also tried adding more IPs from the range to the host with ip -6 address add, and I can ping those IPs from the outside, so I assume it is a problem with the container networking. I thought at first that KimSufi was MAC filtering the network, but that is now ruled out, as you confirmed that you can use IPv6 on your containers.

      Thanks for your help.

      • nbeam

        So my home workstation has been pinging my container’s IPv6 for like 9 hours now. So manually adding the route to the gateway seems to have fixed the issues I was having with my container dropping its connection. I was thinking KimSufi was MAC filtering as well, but since simply adding the route fixed it, that makes me think not.

        On your container’s interfaces config file: I was comparing it to my (working) container interface file. I noticed that you don’t have an explicitly spelled out loopback interface for IPv6. Let’s try that…

        Underneath, “iface lo inet loopback”

        add:

        iface lo inet6 loopback

        Save, close, restart networking or ifdown && ifup the interface.

        I also noticed something else. You have one additional line in your ip -6 route output compared to my working container’s ip -6 route output… and, surprise, it is a second “default via” route… It looks like a gateway for an internal IPv6 network (I think that is what fe80 networks are; you will have to forgive me but I really suck with IPv6…)

        You could try this:

        ip -6 route del default via fe80::205:73ff:fea0:1

        I think that command should remove the additional route.

        I would bet that is probably the whole of your issue (fingers crossed) because whenever you have multiple default gateways on a system problems can ensue and I am fairly certain that is what “default via” rules pretty much do.

        ——

        If that all fails…

        For further diagnosing can you try to ping a few things from INSIDE your container?

        First trying to ping yourself:

        ping6 2001:41d0:1:fed2::2

        Then try to ping your HOST machine:

        ping6 2001:41d0:1:fed2::1

        Finally, try to ping your gateway:

        ping6 2001:41d0:1:feFF:FF:FF:FF:FF

        Tell me what comes back from that. Because the kicker is, if you can ping yourself then IPv6 is working, if you can ping the host then general local IPv6 networking is working, finally, if you can ping the gateway then that means you can “knock on the door” heading out of the network and most likely do have a routing issue of some kind.

        Because your container has no internet you might not be able to install the traceroute utility (it might be preinstalled depending on your distribution, it isn’t on the Ubuntu container distro I am using for my container).

        But you could try to:

        traceroute 2a00:1450:4007:80b::2004

        Which is Google.com. My first hop when doing so is the gateway.

        Man, good luck with it and tell me how it goes.

      • nbeam

        “Link-local addresses for IPv4 are defined in the address block 169.254.0.0/16, in CIDR notation. In IPv6, they are assigned with the FE80::/64 prefix. ”
        ——–
        Thanks wikipedia… That confirms for me that, yes, you have a second default route going to a link local IPv6 address in your container routing table. I think that might be part of the issue.

      • nbeam

        Because I can’t leave this issue alone… I noticed in your ip -6 route output that the other default gateway entry you have says “proto ra” and also has what appears to be an expiration attached to it. RA = Router Advertisement – this gateway in your routing table came from somewhere else. I am wondering if RAs are what were also periodically knocking my container offline. I will keep digging but I am wondering if Proxmox is sending out some kind of RA packet to the containers…

      • nbeam

        From my previous comment about “Router Advertisements”: I discovered a setting in the firewall (which I realize you aren’t using) that says “allow” NDP – which is https://en.wikipedia.org/wiki/Neighbor_Discovery_Protocol – that article sums it up. Router Advertisements are part of the Neighbor Discovery Protocol. The question is, what is sending the router advertisement packet to your container (and possibly my container)? I could just turn off NDP in my container’s firewall but I would rather know where the traffic is coming from first.

      • nbeam

        Also, I scripted out my post-up rules because, for whatever reason, the post-up rules didn’t seem to be adding the route to the gateway when they were just present in the interfaces file. I don’t know why though. I figured the worst that could happen if scripted is the system would just say they were already there and ignore them 🙂

      • nbeam

        I am starting to get stumped. My sysctl.conf is clean (no options enabled, everything is commented out with #). The fact that you can ping the host machine and get a response means IPv6 is definitely enabled and working on your container. It is odd you can’t ping your gateway though.

        Could you try spinning up a completely fresh Ubuntu 14.04 container and configuring IPv6 on it (if that isn’t what you are already using)? I am trying to get rid of any other outside differences between our setups.

      • nbeam

        One difference I can think of between us is that I am using a firewall and something I ran across when setting that up was a specific requirement for an IPv6 loopback “lo” entry on the host. I don’t know if that makes any difference or not.

      • Gerardo

        I also noticed that a link address is reachable from the container that has the same MAC as the gateway, but it is a link-local IPv6 address:

        on container
        2001:41d0:1:feff:ff:ff:ff:ff dev eth0 FAILED
        fe80::205:73ff:fea0:1 dev eth0 lladdr 00:05:73:a0:00:01 router STALE

        on host
        2001:41d0:1:feff:ff:ff:ff:ff dev vmbr0 lladdr 00:05:73:a0:00:01 router REACHABLE
        fe80::205:73ff:fea0:1 dev vmbr0 lladdr 00:05:73:a0:00:01 router STALE

        I know little about IPv6 and don’t know why this happens. I also tried ip -6 neigh add 2001:41d0:1:feff:ff:ff:ff:ff lladdr 00:05:73:a0:00:01 without success.

        Which firewall are you using? I’ll give it a go and see what happens. Everything else I tried has failed. I also added an IPv6 loopback, commented out sysctl.conf entirely, and created a new fresh container (I was doing that from the beginning).

        Maybe the firewall is the key.

  4. Gerardo

    Hello!!
    ——
    These are the results
    ping6 2001:41d0:1:fed2::2
    —-
    64 bytes from 2001:41d0:1:fed2::2: icmp_seq=1 ttl=64 time=0.014 ms
    —-
    ping6 2001:41d0:1:fed2::1
    —-
    64 bytes from 2001:41d0:1:fed2::1: icmp_seq=1 ttl=64 time=0.108 ms
    —-
    ping6 2001:41d0:1:feFF:FF:FF:FF:FF
    —-
    From 2001:41d0:1:fed2::2 icmp_seq=1 Destination unreachable: Address unreachable
    —-
    I have traceroute installed on my container (it comes with the template I downloaded from the Proxmox configuration web UI). However, when I run it I get no successful results
    —-
    traceroute 2a00:1450:4007:80b::2004
    —-
    80 byte packets
    1 ispconfig.ip-91-121-209.eu (2001:41d0:1:fed2::2) 2999.895 ms !H 2999.882 ms !H 2999.881 ms !H

    I added the lo inet6 interface. I also found that the second default rule is not there from the beginning. It appears after the container has been running for some minutes. I think when the container uses the network to communicate with the host, that rule is received from somewhere.

    I also tried deleting it and pinging the gateway right at the moment the container starts, without results.

    I don’t know what might be causing this problem. Maybe sysctl configuration. Do you have IPv6 forwarding, routing, proxying or anything else enabled in sysctl.conf? I have none of them enabled, as that is how it came by default, but I will try enabling some of them. They are supposedly not needed, but I will give them a go.

    Thanks for your help.

  5. Gerardo

    UPDATE – I tried changing all the values such as proxy_ndp, forwarding, and accept_ra that I found in sysctl.conf, without success.

    This is my sysctl.conf on the host (I have reverted all the modifications I tried and omitted the rest because it is all commented out)

    # Disable IPv6 autoconf
    net.ipv6.conf.all.autoconf = 0
    net.ipv6.conf.default.autoconf = 0
    net.ipv6.conf.vmbr0.autoconf = 0
    net.ipv6.conf.all.accept_ra = 0
    net.ipv6.conf.default.accept_ra = 0
    net.ipv6.conf.vmbr0.accept_ra = 0

    My sysctl.conf in the container is entirely commented out.
    —-
    Any differences?
    Thanks for help.

    • nbeam

      In my Ubuntu 14.04 container my sysctl files are all completely unmodified from how they came.

      Regarding the firewall. You absolutely must not use any kind of firewall that is configured inside the container (no UFW/iptables) or directly configured on the node/host.

      Proxmox has a built-in firewall (that basically manipulates IPtables if I understand correctly) and that is what you should be using. It is completely configurable via the Proxmox Web Panel. There are multiple “tiers” and I have found it to be a bit quirky all around. I wrote an article on it here: http://www.kiloroot.com/secure-proxmox-install-sudo-firewall-with-ipv6-and-more-how-to-configure-from-start-to-finish/

      Here is output from a few commands, not sure if this will be helpful. These were both run from my container:

      ip -6 neigh
      2001:XXXX:X:XXff:ff:ff:ff:ff dev eth0 lladdr 00:05:73:a0:00:00 router REACHABLE
      fe80::1ee6:c7ff:fe52:740 dev eth0 lladdr 1c:e6:c7:52:07:40 router REACHABLE
      2001:XXXX:X:XXXX::1 dev eth0 lladdr ec:a8:6b:f1:af:72 STALE
      fe80::eea8:6bff:fef1:af72 dev eth0 lladdr ec:a8:6b:f1:af:72 STALE
      fe80::12bd:18ff:fee5:ff80 dev eth0 lladdr 10:bd:18:e5:ff:80 router STALE

      ping6 2001:XXXX:X:XXXX::1
      PING 2001:XXXX:X:XXXX::1(2001:41d0:e:11ea::1) 56 data bytes
      64 bytes from 2001:XXXX:X:XXXX::1: icmp_seq=1 ttl=64 time=0.038 ms

      The route to my host/node was marked as “stale” however I was able to ping it. The route to my gateway was marked “reachable”.

      I have two servers with KimSufi, the firewall is active on both. I recently clustered them using a server-to-server VPN connection for the cluster communications which may further complicate things when it comes to comparison.

  6. Gerardo

    I’ve tried everything and didn’t manage to get it working.

    My final shot was to install Proxmox 3.4 and give it a go, and incredibly, it works!!

    Now I know the problem is somewhere in the bridge/routing configuration but I cannot guess what it is. I’ll keep trying.

    Thanks for helping

    • nbeam

      That’s excellent news. If you ever get to the root cause please post an update as I would be curious to know. I am on Proxmox 4.1 and things are working fairly well. Ever since I manually set the default route on my container the IPv6 networking hasn’t gone down. It has been working for several days now.

      • Gerardo

        Hi!!

        I finally discovered the problem. I dug into how Proxmox 3.4 manages OpenVZ networking and found that it creates (internally, as you are not shown this in the interface) a new network adapter. It then adds your second address to your primary adapter and establishes SNAT and DNAT rules with the container.

        Proxmox 4 doesn’t do that anymore. When you create an LXC container, it directly bridges it with your network interface. That’s OK, but KimSufi (at least with my server) has restricted the MAC learning capability of their switches or routers to only one MAC, so all packets from any other MACs are automatically dropped.

        Having found that, I reinstalled Proxmox 4, created a new bridge interface, assigned it a private IPv6 address, and put a container on it.

        Then I set up SNAT and DNAT rules for it and added the public IPv6 I want to use to the main interface. And… it worked!!

        So that finally solved my problem!! Hope it helps anybody else who faces the same problem!!

        And thanks for all your help!!

    • Shigawire

      Hi Gerardo! It would be really awesome if you could give us some screenshots or some walkthrough how you solved the problem!

      I’m stuck as well, and I honestly have no idea how the blog author managed to get it working on Proxmox 4 with this guide.

      Cheers!

      • nbeam

        Hey Shigawire,

        At what point are you getting stuck?

        Have you been able to get your IPv6 gateway figured out and apply an IPv6 public IP to your Proxmox host server? Once applied, are you able to ping it from a remote host (the computer sitting in your home/office for example)?

        That should be the easy part (getting a single IPv6 address assigned). The difficult part is getting the gateway to work from/to your other Proxmox containers.

        I actually ended up dropping IPv6 for my uses and just went back to using NAT with IPv4 to give my containers a shared public IP, which isn’t great for hosting, I realize. I got rid of it for a few different reasons, one being that I found the usefulness of IPv6 to be limited because many clients across the web still can’t hit sites/applications/services that are hosted purely on IPv6. Also, a lot of services and applications still don’t work correctly with IPv6.

        That all being said, I am happy to try to give some feedback towards the point at which you are getting stuck.

    • Chen

      Hi Gerardo, my friend, please tell me the tutorial.

  7. thedanks

    Hey, I am having the exact same problem as Gerardo… Can’t ping the VMs from the outside, nor can they get out to the internet. Can you clarify a little what you did to fix the VMs?

    • thedanks

      some screenshots would be ridiculously helpful 🙂

      • galgamoGerardo

        Hi!! For some unknown reason, I didn’t receive an e-mail letting me know about this posting.

        Of course. I’ll send you all the steps I’ve done to configure the box, provided that after 2 months you haven’t solved it by yourself.

        Where should I send the screenshots?

  8. luisv

    Hello Gerardo, I will be renting a Proxmox server at OVH. Can you tell me how you establish SNAT and DNAT rules with the container? Thanks!

    • nbeam

      Hey Luisv – it is part of the iptables rules on the physical Proxmox host machine. Generally you would declare/set SNAT/DNAT rules when the physical host brings your virtual interface up (basically it runs the commands to set the rules when the interface is enabled and removes them when the interface is disabled). I use a unique interface per container. So an example from my host’s /etc/network/interfaces file would look something like this.

      auto vmbr5
      iface vmbr5 inet static
      address 10.220.220.254
      netmask 255.255.255.0
      bridge_ports none
      bridge_stp off
      bridge_fd 0
      post-up echo 1 > /proc/sys/net/ipv4/ip_forward

      post-up iptables -t nat -A POSTROUTING -s '10.220.220.0/24' -o vmbr0 -j MASQUERADE
      post-down iptables -t nat -D POSTROUTING -s '10.220.220.0/24' -o vmbr0 -j MASQUERADE

      post-up iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 45220 -j DNAT --to 10.220.220.220:5901
      post-down iptables -t nat -D PREROUTING -i vmbr0 -p tcp --dport 45220 -j DNAT --to 10.220.220.220:5901

      post-up iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 45022 -j DNAT --to 10.220.220.220:22
      post-down iptables -t nat -D PREROUTING -i vmbr0 -p tcp --dport 45022 -j DNAT --to 10.220.220.220:22

      There are three sets of commands up there and a standalone command at the top. The standalone command enables forwarding on the host.

      The other sets do a couple of different things…
      The first set, in which you will see the word “POSTROUTING”, says that any outbound connection coming from the container attached to this interface gets NAT’d out on the public IP attached to interface vmbr0. This means traffic from an internal container with an internal IP of 10.220.220.10, for example, would appear to the world like it is coming from the host’s public IP address.

      The second and third sets are inbound NAT. They basically allow me to map a port on the public IP of the host to a port on the internal IP address of my container. So in the very bottom set I am mapping the SSH port: if I connect to my host’s public IP (example, 174.294.29.2) on port 45022, the host machine will direct that traffic to the internal IP address 10.220.220.220 on port 22.

      I hope this helps some. Figuring out IPtables can be a lot of headache as it is a steep learning curve.

      post-up = run this command when the interface is brought up
      post-down = run this command when the interface is taken down

      This entry would exist in my host’s /etc/network/interfaces file.

      • luisv

        Thanks nbeam! I will apply this!

  9. Besfort

    Hey man,

    Thanks for the detailed instructions. I have followed them step by step, but my circumstances are a bit different.
    I am trying to do this on an Online.net Dedibox, where the only difference is that they give out IPv6 /56 blocks instead of /64.

    But other than that, I see no difference in the surface, although I think there may be some.
    My problem is that I cannot manage to ping anything outside the host, even from the host, when using IPv6 (ping6), even though I managed to set up the DHCPv6 and IPv6 gateway.

    Any chance you wanna try a new experience and set up IPv6 on an Online.net Dedibox, and with that, fix my problem and stop my 3-day-long headache 😛 ?

    Thanks,
    Besfort

  10. Dailen

    The only part this feels like it is missing is how to get my KVM guest exposed to the outside via the vswitch. For example, if I just wanted to expose ports 80 and 443 via a device connected to the vswitch, how would I forward that traffic?

  11. Dani_L

    Hey, I did everything like the tutorial with my IPv6 address… but I can’t ping the server. Why?
    Greetings
