Hardware Setup of a VOLTHA Test Pod


In a testing setup, rather than using a real RG or BNG, emulated ones are deployed on a Linux development server:

  • The RG can be emulated by an lxc container (from now on client)

  • The BNG can be emulated by a Linux server

  • The AggSwitch is mandatory if the OLT is controlled in-band; it is optional if the OLT is controlled out of band, in which case the NNI port of the OLT connects directly to a NIC on the emulated BNG Linux server.

VOLTHA Lab Setup

The image above represents the data plane connections in a LAB setup. It does not include the ``kubernetes`` cluster for simplicity, but the ``dev server`` listed above can be one of your ``kubernetes`` nodes.

What you’ll need to emulate E2E traffic is:

  • 1 x86 server with Ubuntu 16.04 and at least the following interfaces:

    • 1 1G Ethernet port

    • 1 10G Ethernet port (this can be a second 1G interface as long as you have a media converter)

Setting up a client

The first thing you need to do is to install lxd on your server. To do that you can follow this guide

Once lxd is successfully installed you need to initialize it with:

lxd init

We recommend using all the provided default values.

Once lxd is initialized you can create a container and assign a physical Ethernet interface to the container:

lxc launch ubuntu:16.04 <name>
lxc config device add <name> eth1 nic name=eth1 parent=<physical-intf> nictype=physical


  • name is the desired container name. The convention used to identify which RG container is connected to an ONU is to use the ONU serial number as the lxc container name.

  • physical-intf is the name of the interface on the server where the ONU is physically connected
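For example, assuming an ONU with serial number ALPHAe3d1cfde connected through the server interface enp1s0f1 (both names are hypothetical placeholders for your own setup), the two commands above would become:

```shell
# hypothetical values: container named after the ONU serial number,
# physical NIC enp1s0f1 facing the ONU
lxc launch ubuntu:16.04 ALPHAe3d1cfde
lxc config device add ALPHAe3d1cfde eth1 nic name=eth1 parent=enp1s0f1 nictype=physical
```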

Once the container is created, you can check its state with lxc list:

|     NAME      |  STATE  |        IPV4        | IPV6 |    TYPE    | SNAPSHOTS |
| voltha-client | RUNNING | (eth0) |      | PERSISTENT | 0         |

Please make sure the container has an assigned IP, or you won’t be able to log in and install the wpasupplicant tool inside the RG.

Once the container is running you need to enter it for configuration. To access the container run: lxc exec <name> /bin/bash

Once inside:

# activate the interface
ip link set eth1 up
# install the wpasupplicant tool
apt update
apt install wpasupplicant

NOTE: wpasupplicant is a Linux tool to perform 802.1X authentication. wpasupplicant documentation can be found here.

Create a configuration file for wpasupplicant in /etc/wpa_supplicant/wpa_supplicant.conf with the content:


NOTE: The configuration in this file is not really important if you are using the freeradius server provided as part of the VOLTHA helm charts. Do not worry if the certificates do not exist, they won’t affect authentication as that is password based.
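The original file content is not reproduced here; as a sketch, a minimal wired-802.1X configuration could look like the following. The identity and password shown are assumptions and must match whatever credentials your RADIUS server is provisioned with:

```
# hypothetical example configuration; adjust identity/password to your RADIUS server
ctrl_interface=/var/run/wpa_supplicant
eapol_version=1
ap_scan=0
network={
    key_mgmt=IEEE8021X
    eap=MD5
    identity="user"
    password="password"
    eapol_flags=0
}
```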

At this point you’ll be able to kick off the authentication process (by sending EAPOL packets into the system) with the command:

wpa_supplicant -i eth1 -Dwired -c /etc/wpa_supplicant/wpa_supplicant.conf

If everything has been set up correctly, you should see output similar to this in the VOLTHA logs:

cord@node1:~$ kubectl logs -f -n voltha vcore-0 | grep -E "packet_indication|packet-in" | grep 888e
20180912T003237.453 DEBUG    MainThread adapter_agent.send_packet_in {adapter_name: openolt, logical_port_no: 16, logical_device_id: 000100000a5a0097, packet: 0180c200000390e2ba82fa8281000ffb888e01000009020100090175736572000000000000000000000000000000000000000000000000000000000000000000, event: send-packet-in, instance_id: compose_voltha_1_1536712228, vcore_id: 0001}
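As a sanity check, the hex dump in the packet field can be decoded by hand: after the destination MAC, the source MAC, and the 802.1Q tag, the EtherType 888e identifies the frame as EAPOL, and the EAP-Response/Identity payload carries the supplicant identity. A small bash sketch over the exact log line above (offsets assume the single-tagged frame shown):

```shell
# packet hex copied from the VOLTHA log line above
pkt=0180c200000390e2ba82fa8281000ffb888e01000009020100090175736572000000000000000000000000000000000000000000000000000000000000000000
echo "dst MAC:    ${pkt:0:12}"   # 0180c2000003, the 802.1X PAE group address
echo "802.1Q tag: ${pkt:24:8}"   # 8100 TPID + priority/VLAN bits
echo "EtherType:  ${pkt:32:4}"   # 888e = EAPOL
echo "identity:   ${pkt:54:8}"   # 75736572 = "user" in ASCII
```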

Setting up an emulated BNG on Linux

The emulated BNG needs to perform only two operations: DHCP and NAT.

To setup a NAT router on an Ubuntu 16.04 server you can look at this tutorial: http://nairabytes.net/linux/how-to-set-up-a-nat-router-on-ubuntu-server-16-04
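In case the tutorial link is unavailable, the core of a NAT setup boils down to enabling IP forwarding and masquerading traffic out of the upstream interface. A minimal sketch, assuming eth0 is the server's Internet-facing interface (a hypothetical name):

```shell
# enable IPv4 forwarding (persist in /etc/sysctl.conf for reboots)
sysctl -w net.ipv4.ip_forward=1
# masquerade subscriber traffic leaving via the upstream interface eth0
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
```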

To install a DHCP server you can follow this tutorial: http://nairabytes.net/linux/how-to-install-a-dhcp-server-in-ubuntu-server-16-04

Once the DHCP server is installed, you need to configure it.

Create Q-in-Q interfaces

On the interface that connects to the Agg Switch (upstream) you are going to receive double-tagged traffic, so you’ll need to create interfaces to receive it.

Supposing that your subscriber is using s_tag=111, c_tag=222 and the upstream interface name is eth2, you can use these commands to create them:

ip link set eth2 up
ip link add link eth2 name eth2.111 type vlan id 111
ip link set eth2.111 up
ip link add link eth2.111 name eth2.111.222 type vlan id 222
ip link set eth2.111.222 up
ip addr add dev eth2.111.222
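The last command above needs the BNG-side address for the subscriber subnet. As an illustration, assuming the hypothetical subnet 10.11.22.0/24 with the BNG at .254, the final steps would be:

```shell
# hypothetical address; use the subnet your DHCP server will serve
ip addr add 10.11.22.254/24 dev eth2.111.222
# verify the stacked (Q-in-Q) VLAN interface exists and is up
ip -d link show eth2.111.222
```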

Then you’ll need to tell the DHCP server to listen on that interface. You can do that by editing the file /etc/default/isc-dhcp-server so that it looks like:


NOTE: you can list multiple interfaces, separated by spaces, in case you have multiple subscribers in your setup.
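The original file content is not reproduced here; for the single-subscriber example above (s_tag=111, c_tag=222 on eth2), it would plausibly contain:

```
# hypothetical content; newer isc-dhcp-server packages use INTERFACESv4 instead
INTERFACES="eth2.111.222"
```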

In the /etc/dhcp/dhcpd.conf config file, configure the IP address range to assign to the double tagged interface:

subnet <subnet> netmask <netmask> {
  option routers <router-ip>;
  option domain-name-servers <dns-ip>;
}
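As a concrete sketch, assuming the hypothetical 10.11.22.0/24 subnet on the double-tagged interface with the BNG itself at .254, the dhcpd.conf stanza could look like:

```
# hypothetical values; align the subnet with the address assigned to eth2.111.222
subnet 10.11.22.0 netmask 255.255.255.0 {
  range 10.11.22.100 10.11.22.200;
  option routers 10.11.22.254;
  option domain-name-servers 8.8.8.8;
}
```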

Configuration for in-band OLT control

If the OLT is being used in in-band connectivity mode, the referenced document details the configuration required in ONOS and on the aggregation switch to trunk/switch in-band packets from the OLT to the BNG or VOLTHA.

In-band OLT software upgrade

If an OLT with the openolt agent is being used in in-band connectivity mode, we provide the capability to execute software updates of the image present on the device; the README provides the required details.