
[RETIRED] Demo: OpenStack + Cumulus VX 3.1 "Rack-on-a-Laptop" Part I (L2+MLAG)


Important: This demo is not approved for use in a production environment and is for demonstration purposes only.

This demo illustrates the dynamic provisioning of VLANs using a virtual simulation of two Cumulus VX leaf switches and two CentOS 7 (RDO Project) servers; together they comprise an OpenStack environment. For simplicity, the controller node, dashboard node, network node, and compute node have been combined in a single server instance. In some cases, it may be preferable to split them out in a production environment.



The Cumulus Networks Modular Layer 2 (ML2) mechanism driver for OpenStack resides on the OpenStack controller node and provisions VLANs on demand. The ML2 driver sends requests to the HTTP API server residing on the Cumulus Linux switch. As a result, instances (virtual machines) can communicate with each other across multiple switches without the top of rack switches needing to be configured beforehand. Without the ML2 mechanism driver, VLANs would need to be preconfigured on the Cumulus Linux top of rack switches and on the layer 2 inter-switch links. In this case, the VLAN range defined on the top of rack switch mirrors the range defined in the /etc/neutron/plugins/ml2/ml2_conf.ini file on the OpenStack network node.
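
The driver's switch-side provisioning can be pictured as a small HTTP call. The sketch below builds (but does not send) such a request; the endpoint path, port, and JSON payload are assumptions for illustration only, not the documented Cumulus REST API.

```python
# Hypothetical sketch of the kind of HTTP request an ML2 mechanism driver
# could issue to a switch-side REST API to provision a VLAN on a port.
# The URL path and JSON payload are assumed for illustration; they are
# not the documented Cumulus Linux REST API.
import json
from urllib.request import Request

def build_vlan_request(switch_addr, port, vlan_id, interface):
    """Build, but do not send, a PUT request adding vlan_id to interface."""
    url = "http://%s:%d/ml2/v1/bridge/%s/%d" % (switch_addr, port, interface, vlan_id)
    body = json.dumps({"vlan": vlan_id, "port": interface}).encode("utf-8")
    req = Request(url, data=body, method="PUT")
    req.add_header("Content-Type", "application/json")
    return req

req = build_vlan_request("192.168.0.41", 8000, 100, "swp4")
print(req.get_method(), req.full_url)
# PUT http://192.168.0.41:8000/ml2/v1/bridge/swp4/100
```

In a real deployment the driver issues calls like this automatically whenever Neutron binds a port; nothing needs to be sent by hand.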

For more details on how to configure an OpenStack environment with static VLANs, refer to the Cumulus Networks OpenStack Validated Design Guide.


This demo requires basic knowledge of OpenStack. If you are new to OpenStack, read the Cumulus Networks OpenStack Validated Design Guide, which provides an overview of OpenStack as it pertains to Cumulus Linux.

System Requirements

This demo runs on Oracle VM VirtualBox with the following minimum requirements:

Feature            Minimum Requirements
Operating system   Windows/Mac OS/Linux
Oracle VirtualBox  Version 5.0
Hard disk          40GB
CPU                Intel Core i5
Web browser        Chrome, Firefox, Safari

Tip: Shut down all other memory-intensive apps before running the demo.

Preparing the Environment

  1. Download the OpenStack demo OVA file from Cumulus Networks. The image file is approximately 3GB in size.

  2. Launch Oracle VM VirtualBox.

    1. Import the OVA by choosing File > Import Appliance…

      Note: If you are reinstalling the demo, make sure you delete the VMs and all associated files before you import the OVA file.

    2. Accept the default values when prompted during the configuration of the appliance.

      Important: Ensure the following default option is unchecked when prompted: Reinitialize the MAC address of all network cards.

    Note: The import process can take up to 2-3 minutes, depending on the hardware.

  3. Start all four virtual machines imported into VirtualBox.

    Note: The start order does not matter. However, you should wait a couple of minutes for the four VMs to complete the boot process.

    Virtual Machine Information

    VM Name      OS                              Purpose
    RDO Server1  CentOS 7/RDO (Liberty release)  RDO Project network/controller/compute node. Uses the Linux bridge and not the OVS bridge for simplicity.
    RDO Server2  CentOS 7/RDO (Liberty release)  RDO Project compute node.
    CL31_Leaf1   Cumulus VX 3.1.0                Top of rack switch 1.
    CL31_Leaf2   Cumulus VX 3.1.0                Top of rack switch 2.

    Note: For simplicity, the network node, controller, and a compute node have been combined on a single server. These are typically separate in a more realistic environment.
  4. Once all four VMs are running, open the following browser tabs and log into each server and switch using the usernames and passwords provided:

    Tab  URL                    Application                                         Authentication
    1    http://localhost:8080  Horizon Dashboard                                   demo/cumulus
    2    http://localhost:8800  Server1 - Controller / Network Node / Compute Node  cumulus/cumulus
    3    http://localhost:8801  Server2 - Compute Node                              cumulus/cumulus
    4    http://localhost:8802  Cumulus Leaf 1                                      cumulus/cumulus
    5    http://localhost:8803  Cumulus Leaf 2                                      cumulus/cumulus

    Note: These browser tabs are provided for your convenience so that you do not need to use the consoles provided by VirtualBox.

    Important: If you do not log in through the browser, it may time out after 60 seconds.

    Note: Browser access to the switches and Linux compute nodes is not typical, or secure, at a customer site; it is provided here for demo simplicity and convenience. The Horizon Dashboard, however, is a browser-based UI that is used by OpenStack customers.

  5. On the Cumulus leaf switches, confirm the baseline configuration.

  6. Run the following commands on each leaf switch:

    cumulus@leaf1:~$ sudo ifquery -a
    auto lo 
    iface lo inet loopback 
    auto eth0 
    iface eth0 inet dhcp
    auto swp1 
    iface swp1 
     address 192.168.100.X/24 
    auto swp4 
    iface swp4 
    auto bond0 
    iface bond0 
     mstpctl-portnetwork yes 
     bond-miimon 100 
     bond-lacp-rate 1 
     bond-min-links 1 
     bond-slaves glob swp2-3
     bond-xmit-hash-policy layer3+4 
     mstpctl-bpduguard no 
     bond-mode 802.3ad 
    cumulus@leaf1:~$ sudo netshow int
        Name    Speed      MTU  Mode           Summary                                                                                                      
    --  ------  -------  -----  -------------  --------------------------------                                                                             
    UP  lo      N/A      65536  Loopback       IP:, ::1/128                                                                                     
    UP  eth0    1G        1500  Mgmt           IP:                                                                                        
    UP  swp1    1G        1500  Interface/L3   IP:                                                                                         
    UP  swp4    1G        1500  NotConfigured                                                                                                               
    UP  bond0   2G        1500  Bond           Bond Members: swp2(UP), swp3(UP) 
    cumulus@leaf1:~$ sudo netshow lldp
    Local Port    Speed    Mode                 Remote Port        Remote  Host                                                                             
    ------------  -------  -------------  ----  -----------------  --------------                                                                           
    eth0          1G       Mgmt           ====  eth0               leaf2                                                                                    
    swp1          1G       Interface/L3   ====  08:00:27:a6:6b:42  server1                                                                                  
                                          ====  swp1               leaf2                                                                                    
    swp2          1G       BondMember     ====  swp2               leaf2                                                                                    
    swp3          1G       BondMember     ====  swp3               leaf2                                                                                    
    swp4          1G       NotConfigured  ====  enp0s9             server1

    Each switch is connected to another switch via a bond and has one OpenStack server connected to it on swp4.

  7. Confirm the instances on the Horizon Dashboard Web Browser:

    • Click the Instances tab under the System menu to confirm that the instances are running.
    • Click Admin > Networks and Admin > Routers to confirm that no networks or routers are shown.
    • Click System > Hypervisors to confirm that OpenStack can see two hypervisors (server1 and server2).
  8. On the Cumulus leaf switches, start the REST API server to handle the configuration commands from the OpenStack controller.

    1. First, install the REST API using the following commands:
      1. Uncomment the early access repository lines in /etc/apt/sources.list and save the file:
        #deb CumulusLinux-3-early-access cumulus
        #deb-src CumulusLinux-3-early-access cumulus
      2. Run the following commands in a terminal to install the early access packages:
        cumulus@switch:~$ sudo apt-get update
        cumulus@switch:~$ sudo apt-get install python-falcon python-cumulus-restapi
    2. Restart the REST API server, whose configuration is located in /etc/restapi.conf, and verify that it is running:

      cumulus@leaf1:~$ sudo systemctl restart restserver
      root@leaf1:/home/cumulus# ps -aux | grep restserver                                                                                                     
      root      1678  1.4  4.4  61888 19232 ?        Ss   05:32   0:00 /usr/bin/python2.7 /usr/bin/restserver                                                 
      root      1683  0.0  3.4  61888 14980 ?        S    05:32   0:00 /usr/bin/python2.7 /usr/bin/restserver                                                 
      root      1684  0.0  3.4  61888 14980 ?        S    05:32   0:00 /usr/bin/python2.7 /usr/bin/restserver                                                 
      root      1685  0.0  3.4  62024 14984 ?        S    05:32   0:00 /usr/bin/python2.7 /usr/bin/restserver                                                 
      root      1686  0.0  3.4  62032 14984 ?        S    05:32   0:00 /usr/bin/python2.7 /usr/bin/restserver                                                 
      root      1688  0.0  0.5  12728  2236 pts/0    S+   05:32   0:00 grep restserver  
  9. Review the contents of the restapi.conf file:
    cumulus@leaf2$ cat /etc/restapi.conf
    # Root helper application.
    # Where to store log files

    [NETQ]
    #server_address =

    [ML2]
    #local_bind =
    #service_node =
    # Add the list of inter switch links that
    # need to have the vlan included on it by default
    # Not needed if doing Hierarchical port binding
    trunk_interfaces = bond0
  10. On the network node (server1), you should see the following Cumulus ML2 mechanism driver configuration in the ml2_conf.ini file:

    [cumulus@server1 ~]$ sudo grep -A 2 cumulus /etc/neutron/plugins/ml2/ml2_conf.ini 
    mechanism_drivers = linuxbridge,cumulus 
    # Example: mechanism_drivers = openvswitch,mlnx 
    # Example: mechanism_drivers = arista

    You should also see the following configuration in ml2_conf_cumulus.ini:
    [root@server1 cumulus]# cat /etc/neutron/plugins/ml2/ml2_conf_cumulus.ini                                                                               
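
Separately from the output above, the relationship between the switch-side VLAN range and Neutron's configuration can be illustrated with a minimal ml2_conf.ini fragment. The option names below are standard Neutron ML2/Linux bridge settings, but the physnet name, VLAN range, and interface are assumed values for illustration, not necessarily this demo's actual settings:

```ini
# Illustrative fragment of /etc/neutron/plugins/ml2/ml2_conf.ini.
# Values (physnet1, 100:200, enp0s9) are assumptions for illustration.
[ml2]
type_drivers = vlan
tenant_network_types = vlan
mechanism_drivers = linuxbridge,cumulus

[ml2_type_vlan]
# VLANs Neutron may allocate; the top of rack switches must accept this range
network_vlan_ranges = physnet1:100:200

[linux_bridge]
# Map the physical network name to the switch-facing interface on each server
physical_interface_mappings = physnet1:enp0s9
```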

Running the Demo

Demo One: Single Tenant, One Network

In this scenario, OpenStack Heat is used to create one broadcast domain that spans two compute nodes, with one OpenStack VM from each compute node in the broadcast domain. OpenStack creates the broadcast domain by picking a VLAN number from the range provided to Neutron. On the compute nodes (server1 and server2), a new bridge is created that contains the interface to the OpenStack VM and a VLAN subinterface of the switch-facing port.
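
The upstream Neutron Linux bridge agent names these objects deterministically: the per-network bridge is "brq" plus the first 11 characters of the network UUID, and the switch-facing member is a VLAN subinterface named <interface>.<vlan_id>. A small sketch of that convention (the example UUID is made up):

```python
# Naming convention used by the upstream Neutron Linux bridge agent:
# the per-network bridge is "brq" + the first 11 characters of the network
# UUID, and the switch-facing bridge member is "<interface>.<vlan_id>".

def bridge_name(network_id):
    """Per-network Linux bridge name, e.g. brq1b7e0cf0-8c."""
    return "brq" + network_id[:11]

def subinterface_name(physical_interface, vlan_id):
    """VLAN subinterface enslaved to the bridge, e.g. enp0s9.100."""
    return "%s.%d" % (physical_interface, vlan_id)

# Example with a made-up network UUID and VLAN:
print(bridge_name("1b7e0cf0-8c84-468e-b408-111111111111"))  # brq1b7e0cf0-8c
print(subinterface_name("enp0s9", 100))                     # enp0s9.100
```

Knowing this convention makes it easy to spot the new bridges with `brctl show` or `ip link` while the demo runs.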

On each Cumulus Linux switch, a corresponding VLAN interface is automatically created on the inter-switch link (bond0) and the OpenStack server-facing port (swp4).

The following diagram illustrates what is occurring in the configuration:

To run this demo:

  1. Log into RDO Server1 (http://localhost:8800).

  2. Follow the instructions in the Message of the Day, which is displayed from the /etc/motd file.

    [cumulus@server1 ~]$ cd $HOME/cumulus_demo
    [cumulus@server1 ~]$ ./

Using the provided demo script, you can start, verify, and destroy tenant networks. The demo script walks through the steps you need to take to view the demo in action:

  1. Log into the Horizon Dashboard and console into the VMs after they are created.
  2. Ping the VMs.
  3. Watch the bridge changes on the switch using the Linux watch command together with netshow.


The following screenshots from the OpenStack Horizon Dashboard and from a leaf switch show what happens after provisioning a single subnet in a single tenant.

Demo Two: Single Tenant, Two Networks

This demo is similar to the previous one, but instead of creating only one broadcast domain, it creates two subnets in a single tenant and places two OpenStack VMs on each subnet. It also uses OpenStack Heat to perform this task.

The following diagram illustrates what is occurring in the configuration:

To run this demo:

  1. Log into RDO Server1 (http://localhost:8800).

  2. Follow the instructions in the Message of the Day, which is displayed from the /etc/motd file.

    cd $HOME/cumulus_demo

Using the demo script, you can start, verify, and destroy tenants and subnets. The demo script walks through the steps required:

  1. Log into the Horizon Dashboard and console into the VMs after they are created.
  2. Ping the VMs.
  3. Watch the bridge changes on the switch using the Linux watch command together with netshow.


The following screenshots are from the OpenStack Horizon Dashboard. They show the results after provisioning two networks in a single tenant:


Caveats and Known Issues

  • This demo is for demonstration purposes only, not for production use. There are known issues with the Cumulus Networks ML2 mechanism driver based on current testing by Cumulus Networks associates.

  • Do not restart the REST server daemon running on each Cumulus VX instance, or the Cumulus VX instances themselves. If either is restarted, re-run the demo by selecting the Start option (press 1) from the demo menu.

  • This demo covers single-attached servers only; it does not demonstrate MLAG-attached servers.

  • The Cumulus Networks ML2 mechanism driver depends on LLDP to discover the switch ports connected to an OpenStack server. Bond interfaces do not carry LLDP information, and the driver does not include logic to determine whether a discovered interface (matched by ifName) is part of a bond.

  • LLDP is used to determine which switch port needs a VLAN added or removed. Because configuring LLDP on CentOS/Red Hat can be complicated, Cumulus Networks recommends setting up PTM with a topology file to confirm that LLDP is working correctly on all ports facing the OpenStack servers.

  • The demo supports bridges in traditional mode only, as the Cumulus Networks ML2 mechanism driver does not support VLAN-aware bridge configurations.

  • The Cumulus Networks ML2 mechanism driver stores the state in RAM. Rebooting either the REST API server on the Cumulus VX instance or the instance itself requires restarting the demo. Run the demo again using the Start option in the Demo menu. The script destroys the previous OpenStack environment and then recreates it.
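
As noted above, PTM verifies cabling against a prescribed topology file and can confirm that LLDP is working on the server-facing ports. A minimal, illustrative /etc/ptm.d/topology.dot for this demo's cabling is sketched below; the leaf1-to-server1 link matches the LLDP output shown earlier, while the server2 interface name is an assumption:

```dot
graph G {
    "leaf1":"swp4" -- "server1":"enp0s9";
    "leaf2":"swp4" -- "server2":"enp0s9";
}
```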


FAQ

Q: How can I get help running this demo?
A: Although this demo is unsupported, feel free to join the discussion on the Open Networking Community.

Q: How do I run this in my test / production OpenStack environment?

A: The Cumulus Networks ML2 mechanism driver is still in development, and is not production ready at this time.

Q: Where can I download the Cumulus ML2 mechanism driver components?

A: The driver is an open source project available from the Cumulus Networks GitHub site. The driver runs on the OpenStack network node.

Q: In the Single Tenant, Two Network Demo, when I run netshow interface or ip addr show, the default gateways of the OpenStack instances (VM) are not present. Where are the default gateways of the subnet?

A: The default gateways of the subnets are located in an IP namespace located on the network node (server1). To view the router configuration, run the following command:

[cumulus@server1 ~]$ source $HOME/keystonerc_demo
[cumulus@server1 cumulus_demo(keystone_demo)]$ sudo ip netns exec `ip netns | grep qrouter` netshow int
To view the legend, rerun "netshow" cmd with the "--legend" option
    Name            Speed    MTU    Mode          Summary
--  --------------  -------  -----  ------------  ------------------------
UP  lo              N/A      65536  Loopback      IP:, ::1/128
UP  qr-4a01ecf9-f4  10G      1500   Interface/L3  IP:
UP  qr-be2c64c5-f1  10G      1500   Interface/L3  IP:

Q: When I reboot the OpenStack server using the reboot or shutdown commands, the VM appears to hang. What do I do?

A: Force a reload via the VirtualBox Manager.

Q: Is the REST API server running on the Cumulus Linux switch secure?

A: No, it is not secure, because:

  • Anyone with knowledge of the switch IP address and API server port number can send API calls to the switch and add bridges and VLANs.
  • The REST API server runs as root, which is a security risk.
  • The communication from the OpenStack controller (Neutron component) is unencrypted and unauthenticated.
  • The Neutron server sends HTTP, not HTTPS, requests to the API server on the Cumulus Linux switch.
  • There is no password or public key authentication.

Q: May I re-distribute this demo OVA file to customers and prospects?

A: Yes. The OVA contains language stating that it is provided on an as-is basis, with no warranty or support.

