Glen Pitt-Pladdy :: Blog

Home Lab Project: Network Bridges for KVM - NAT, Host-only, Isolated

With building a Home Lab system for experimenting and skilling-up on new technologies, one area where comprehensive examples are limited is all the different types of networks you're likely to want for Lab Hypervisors.

Among my scenarios are:

  • Standard Bridged Networks to an Ethernet device ... that's easy!
  • Isolated Bridge with no routing or addresses (including IPv6 Link Layer)
  • Host-Only Bridge where there is no routing other than to the host
  • NAT Bridges where traffic can be routed with NAT via the host
  • Networks connected over VLANs between hosts

One requirement I have is that IPv6 must be fully working in all cases, including NAT on the NAT Bridges. This is important since, as I described before, when simulating a real-world deployment (possibly also mixing in hardware devices), being able to NAT IPv6 to test real IPs is useful.

In my case I'm using Debian Jessie for the host, but the concepts are transferable to other distros.

For this all configuration is done in /etc/network/interfaces unless specified otherwise.

Also note that the names of bridges are arbitrary. They could be "br0" or "FunkyNet", the OS really doesn't care.

Packages Required

You should have the bridge-utils package installed for this, which provides the necessary tools for administering and debugging Linux network bridges.

Additionally, if you have more than one Hypervisor then you will probably want to start connecting bridges up between Hyps. In that case we will be using 802.1Q VLANs to do this, and you will need the vlan package as well.
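On Debian the packages can be installed and the bridge tooling checked with something like the following. This is just a sketch of the admin commands (run as root); exact package/tool names are the Jessie-era ones used in this article:

```shell
# install bridge and VLAN tooling
apt-get install bridge-utils vlan

# make sure the 802.1Q tagging module is loaded (VLAN sub-interfaces need it)
modprobe 8021q

# list existing bridges and their member ports
brctl show
```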

Basic Bridge to Ethernet Device

This is the standard bridge that everyone knows and is very widely used and documented. I shouldn't need to put this here, but it's useful for completeness.

auto br0
iface br0 inet static
        bridge_fd 0
        bridge_waitport 0
        address xxx.yyy.zzz.2
        netmask 255.255.255.0
        network xxx.yyy.zzz.0
        broadcast xxx.yyy.zzz.255
        gateway xxx.yyy.zzz.1
        # dns-* options are implemented by the resolvconf package, if installed
        dns-nameservers aaa.bbb.ccc.ddd
        dns-search example.com
        bridge_ports eth0
        up ifconfig eth0 mtu 9000
iface br0 inet6 static
        address XXXX:YYYY:ZZZZ::2
        gateway XXXX:YYYY:ZZZZ::1
        netmask 64

Devices using this Bridge are on the same network as the port device (eth0 in this example).

Isolated Network Bridge (no contact with anything)

This is the opposite extreme from the basic bridge and here nothing (not even the host) can communicate to this bridge.

auto visol_br0
iface visol_br0 inet manual
        bridge_fd 0
        bridge_waitport 0
        bridge_ports none
        # stop forwarding on this interface
        up sysctl net.ipv4.conf.${IFACE}.forwarding=0
iface visol_br0 inet6 manual
        up sysctl net.ipv6.conf.${IFACE}.disable_ipv6=1
        # stop forwarding on this interface
        up sysctl net.ipv6.conf.${IFACE}.forwarding=0

This is useful where you have another VM doing routing between bridges, networks between application tiers, and the like. Another possible scenario is isolating something nasty (eg. you may be testing security software and don't want any test specimens leaking).
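Once the bridge is up you can sanity-check the isolation; with disable_ipv6 set the bridge shouldn't even have an IPv6 link-local address. A quick verification sketch (output will vary by system):

```shell
# should show no inet or inet6 addresses at all
ip addr show dev visol_br0

# confirm the sysctls took effect (both should be 0 / 1 respectively)
sysctl net.ipv4.conf.visol_br0.forwarding
sysctl net.ipv6.conf.visol_br0.disable_ipv6
```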

Host-Only Network Bridge (no routing)

In this case devices on the bridge are able to communicate with the host, but no further. This can be useful where the host provides some services into the network (eg. DHCP) but you want to minimise the risk of anything leaking as in the Isolated Bridge.

auto vhost_br0
iface vhost_br0 inet static
        bridge_fd 0
        bridge_waitport 0
        bridge_ports none
        address xxx.yyy.zzz.1
        netmask 255.255.255.0
        network xxx.yyy.zzz.0
        broadcast xxx.yyy.zzz.255
        # stop forwarding on this interface
        up sysctl net.ipv4.conf.${IFACE}.forwarding=0
iface vhost_br0 inet6 static
        # fc00::/7 private network
        address fc00:XXXX:YYYY:ZZZZ::1
        netmask 64
        # stop forwarding on this interface
        up sysctl net.ipv6.conf.${IFACE}.forwarding=0

A special scenario here would be if you wanted to expose a small number of services to this network from outside without allowing general forwarding. A simple way of doing this is with rinetd which acts as a port-forwarder. As an example, I want to provide access to my apt-cacher-ng server to isolated hosts and all I need to add to /etc/rinetd.conf is:

# bindaddress   bindport  connectaddress  connectport
HostAddress     3142    CacheAddress    3142

Now the host is re-serving apt-cacher-ng on the Host-Only network.
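After editing /etc/rinetd.conf the daemon needs restarting, and the forward can then be checked from a guest on the Host-Only network. HostAddress here is the same placeholder as in the config above; this is only a sketch:

```shell
# pick up the new forwarding rule (SysV-style init on Jessie)
service rinetd restart

# from a guest: check the forwarded apt-cacher-ng port answers
nc -vz HostAddress 3142
```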

NAT Network Bridge (NAT via host)

This is very similar to Host-Only but we do allow forwarding (routing) of traffic.

auto vnat_br0
iface vnat_br0 inet static
        bridge_fd 0
        bridge_waitport 0
        bridge_ports none
        address xxx.yyy.zzz.1
        netmask 255.255.255.0
        network xxx.yyy.zzz.0
        broadcast xxx.yyy.zzz.255
iface vnat_br0 inet6 static
        # fc00::/7 private network
        address fc00:XXXX:YYYY:ZZZZ::1
        netmask 64

In order to provide NAT for addresses on this network you will need some iptables/ip6tables rules to translate addresses to that of the host. Fortunately, newer versions of Linux can also NAT IPv6 for those special conditions (don't do this normally!) where it is useful:

iptables --append FORWARD --in-interface $srcbridge --out-interface $dstbridge --proto tcp --source $srcipv4 --jump ACCEPT
ip6tables --append FORWARD --in-interface $srcbridge --out-interface $dstbridge --proto tcp --source $srcipv6 --jump ACCEPT
iptables --table nat --append POSTROUTING --out-interface $dstdevice --source $srcipv4 --jump MASQUERADE
ip6tables --table nat --append POSTROUTING --out-interface $dstdevice --source $srcipv6 --jump MASQUERADE

Obviously you will also need suitable flushing of tables and all the normal iptables setup beforehand. I've also taken the precaution of REJECTing port 25 (SMTP) traffic to avoid tests involving mail getting out of the sandbox. Another option would be to forward port 25 traffic to a mailserver (VM?) configured to accept all mail for all domains into a single mailbox, which can be useful for verifying mail sending from applications.
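The port 25 REJECT mentioned above can be done with rules along these lines; they use --insert so they land ahead of the general ACCEPT rules and match first. $srcbridge follows the same placeholder convention as the NAT rules above:

```shell
# reject outbound SMTP from the NAT network before anything ACCEPTs it
iptables --insert FORWARD --in-interface $srcbridge --proto tcp --dport 25 --jump REJECT
ip6tables --insert FORWARD --in-interface $srcbridge --proto tcp --dport 25 --jump REJECT
```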

Bridge Filtering and sysctl

One thing to watch out for is that it is possible to run Netfilter on traffic going through bridges. This can result in unexpected behaviour where some traffic crossing the bridge works and some doesn't, wasting a load of time trying to figure out what is happening.

The key file is /proc/sys/net/bridge/bridge-nf-call-iptables which if set to "1" passes traffic through Netfilter. To avoid these problems I have a /etc/sysctl.d/99-local-libvirt.conf file containing:

net.bridge.bridge-nf-call-iptables=0
net.bridge.bridge-nf-call-ip6tables=0
net.bridge.bridge-nf-call-arptables=0

net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1
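These settings are applied at boot, but can be loaded immediately as well. Note the net.bridge.* keys only exist once the bridge module is loaded (on kernels newer than Jessie's 3.16 they moved to the br_netfilter module), so load that first if sysctl complains about unknown keys:

```shell
# make sure the bridge module (and hence the net.bridge.* sysctls) is present
modprobe bridge

# apply just this file without rebooting
sysctl -p /etc/sysctl.d/99-local-libvirt.conf
```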

Joining up Hyps with VLANs

This will require a network switch that will pass the VLAN tags through. Unmanaged switches can be a bit variable with this - I've heard of them stripping VLAN tags, dropping tagged frames, having MTU problems with (larger) tagged frames, and some just let the frames straight through which is the one scenario that would actually work for this. I don't take chances with this and since very cheap "Smart" (not very, but they understand 802.1Q) switches are now available for marginally more than unmanaged switches, that's the solution I use.

To join bridges using VLANs you just need to change the bridge_ports line in the config; in this example the isolated network is connected to eth1 with VLAN ID 632:

auto visol_br0
iface visol_br0 inet manual
        bridge_fd 0
        bridge_waitport 0
        bridge_ports eth1.632
        # stop forwarding on this interface
        up sysctl net.ipv4.conf.${IFACE}.forwarding=0
iface visol_br0 inet6 manual
        up sysctl net.ipv6.conf.${IFACE}.disable_ipv6=1
        # stop forwarding on this interface
        up sysctl net.ipv6.conf.${IFACE}.forwarding=0

The bridge_ports line only needs to appear in the IPv4 part of the config and it will pass both IPv4 and IPv6. Depending on your switch, you may also have to manually configure it to pass the VLAN between the relevant ports that your Hyps are connected to.

Adding this config doesn't affect the behaviour of the network, but it does link it to a physical network where other Hyps can also link bridges to it.
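With the bridge up you can verify the VLAN sub-interface was created and tagged correctly; /proc/net/vlan/config lists all configured VLAN devices once the 8021q module is loaded. A quick check:

```shell
# show the tagged sub-interface and its parent device
ip -d link show eth1.632

# the 8021q module's view of configured VLAN devices
cat /proc/net/vlan/config

# confirm the sub-interface is attached to the bridge
brctl show visol_br0
```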

Breaking out into the real world

Another possibility is that you may want to mix physical devices (eg. staging a server before putting it into Colo) into your environment, or stretch networks across multiple Hypervisors.

This is something I've done previously simply by using a real "bridge_port" to a physical network device (maybe a VLAN) to connect in the other devices.
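As a sketch of that, a Host-Only-style bridge can simply be given a real port so physical devices join the same layer 2 segment. The device name eth2 here is hypothetical, and could equally be a VLAN sub-interface:

```
auto vlab_br0
iface vlab_br0 inet static
        bridge_fd 0
        bridge_waitport 0
        # hypothetical physical port; could equally be a VLAN like eth2.100
        bridge_ports eth2
        address xxx.yyy.zzz.1
        netmask 255.255.255.0
```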

More fun with Bridges & VLANs

Following this I've done another post taking VLANs with Linux Bridges further with various combinations of Trunked Bridges, Single VLAN Bridges off a Trunk Bridge etc.
