
[DEPRECATED] Demo: Implementing the OpenStack Design Guide in the Cumulus Workbench


IMPORTANT! This article has been deprecated. For current information about using Cumulus Linux with OpenStack, please read our OpenStack Solutions page.



This article discusses how you can easily implement the configuration described in the OpenStack and Cumulus Networks Validated Design Guide in the Cumulus Workbench. The demo illustrates deploying OpenStack alongside network switches running Cumulus Linux, and emphasizes the use of CLAG (Chassis Link Aggregation) in a Layer 2 environment.

This demo package uses Puppet to run through almost all of the manual configuration steps listed in the design guide. It automates steps 1 through 9 of the design guide; you only need to start the VMs using the OpenStack Horizon Web interface.


Features Demonstrated

Source Code

Network Diagrams

The topology consists of two spine switches and two leaf switches connected to two server hosts:
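PTM validates this cabling against a graphviz topology file (listed under Files and Descriptions below). A minimal sketch of such a file is shown here; the port assignments are hypothetical and would need to match the workbench's actual wiring:

```dot
graph G {
  // Each line declares one expected cable: "host":"port" -- "host":"port".
  // Port numbers below are illustrative, not the workbench's real wiring.

  // Spine-to-leaf uplinks
  "spine1":"swp1" -- "leaf1":"swp51";
  "spine1":"swp2" -- "leaf2":"swp51";
  "spine2":"swp1" -- "leaf1":"swp52";
  "spine2":"swp2" -- "leaf2":"swp52";

  // Leaf-to-leaf peer link for CLAG
  "leaf1":"swp49" -- "leaf2":"swp49";

  // Host-facing ports
  "leaf1":"swp1" -- "server1":"eth1";
  "leaf2":"swp1" -- "server2":"eth1";
}
```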

Files and Descriptions

File                            Description
/var/www/                       ZTP automation script that installs SSH authorized keys, then installs and triggers Puppet
/var/www/                       Network topology file for PTM to validate against
/etc/puppet/modules/openstack/  Puppet configuration files for OpenStack
/etc/puppet/manifests/site.pp   Node-specific manifests for Puppet
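The site.pp manifest maps each node to its role. The node and class names below are a hypothetical illustration of how such node-specific manifests are typically laid out, not the demo package's actual contents:

```puppet
# Hypothetical node definitions; the demo package's real site.pp
# assigns its own class names from the openstack module.
node 'leaf1', 'leaf2' {
  include openstack::leaf       # CLAG peering and host-facing bonds
}
node 'spine1', 'spine2' {
  include openstack::spine
}
node 'server1' {
  include openstack::controller # Horizon and the other OpenStack services
}
node 'server2' {
  include openstack::compute
}
```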

Running the Demo

Preparing the Environment

Once you have logged into the workbench and accepted the end user license agreement, you can install the demo's Puppet code.

  1. Install the demo package on your workbench:
    root@wbench:~# apt-get update
    root@wbench:~# apt-get install cldemo-wbench-openstack-2lt22s-puppet
  2. Set up and run OpenVPN on your workbench.

Giving the Demo

  1. Restart all of the switches so that Puppet installs and configures CLAG on them. From the workbench, run:
    root@wbench:~# cw-swpower -a -o reset
  2. Use Puppet to install Ubuntu Server and OpenStack Horizon on each server host from the workbench. This step takes about 25 minutes:
    root@wbench:~# cw-pxehelper -s server1 -o ubuntuserver-trusty-puppet -n
    root@wbench:~# cw-pxehelper -s server2 -o ubuntuserver-trusty-puppet -n

    This step installs and configures all of the OpenStack services.

  3. Follow step 10 in the design guide to connect to the OpenStack Horizon Web UI and start the virtual machines. You connect to the Horizon Web interface via OpenVPN, which you set up earlier. The username is admin and the password is adminpw.
  4. Log in, and under the Project section, select Instances.
  5. Click the Launch Instance button.
  6. Name your instance. Under "Instance Boot Source," select "Boot from Image." For "Image Name," select Cirros. Then click "Launch."

    Sometimes the interfaces on the Ubuntu hosts do not come up properly; running 'puppet agent --test' on the Ubuntu servers can fix this:

    root@server1:~# puppet agent --test

    root@server2:~# puppet agent --test

  7. To connect to your new instance, you can either SSH from server1 to the IP address listed (log in as the 'cirros' user with the default password 'cubswin:)' and use 'sudo' for root access), or click through to the instance and use the Web console.
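After the switches come back up, you can also check from the workbench that CLAG formed correctly before launching instances. clagctl is the standard Cumulus Linux status command; the host name below assumes the switch naming used elsewhere in this demo:

```shell
# Show CLAG status on a leaf; the output should indicate that the
# CLAG peer is alive (exact fields vary by Cumulus Linux release).
ssh cumulus@leaf1 clagctl

# Verify that the host-facing bonds and the peer link are up.
ssh cumulus@leaf1 ip link show
```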

Using SSH port forwarding

Using OpenVPN is the recommended solution; however, you can also use SSH port forwarding to access the instances. To set up the forward, replace NNN with your workbench ID (for example, 406):

ssh -L 8080: -p 7NNN

Then, in your browser, replace all instances of the Horizon address with http://localhost:8080/horizon
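As a concrete sketch, with a hypothetical workbench ID of 406 the forwarded session would be built like this (the workbench hostname and Horizon address are placeholders to fill in from your own workbench details):

```shell
# Hypothetical workbench ID; substitute your own.
WBENCH_ID=406

# The workbench SSH port is the digit 7 followed by the workbench ID.
SSH_PORT="7${WBENCH_ID}"
echo "${SSH_PORT}"   # prints 7406

# Forward local port 8080 to Horizon through the workbench
# (placeholders, not real addresses):
#   ssh -L 8080:<horizon-address>:80 root@<workbench-host> -p "${SSH_PORT}"
```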

Repeating the Demo

To give the demo more than once, you must clear the overlay file system, clear the servers, and reboot the devices. The simplest way to do this is to just request a reprovisioning of your workbench; however, you may do this manually.

  1. Log in to each switch, clear the overlay, then reboot:
    cumulus@leaf1$ sudo /usr/cumulus/bin/cl-img-clear-overlay -f 1
    cumulus@leaf1$ sudo reboot
  2. Install the PXE eraser image used to clear the server operating systems:
    root@wbench:/# apt-get install cldemo-wbench-osinstaller-erasembr
  3. Erase server1 with:
    root@wbench:/# cw-pxehelper -s server1 -o erasembr -n
  4. Repeat the previous step for server2.
  5. When the switches reboot, their host SSH keys change, so you must remove the known_hosts file on the workbench VM:
    root@wbench:/# rm /root/.ssh/known_hosts
  6. Revoke the Puppet certificates on the workbench:
    root@wbench:/# puppet cert clean --all

The environment is now in a state where you can repeat the demo.
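The cleanup steps above can be collected into a single sketch of a script, run from the workbench. It assumes passwordless SSH to the switches as the cumulus user and the switch and server names used elsewhere in this demo:

```shell
#!/bin/bash
# Sketch: reset the workbench so the demo can be repeated.
set -e

# 1. Clear the overlay filesystem on each switch and reboot it.
#    The reboot may drop the SSH session before it returns cleanly,
#    hence the "|| true".
for sw in leaf1 leaf2 spine1 spine2; do
  ssh "cumulus@${sw}" \
    "sudo /usr/cumulus/bin/cl-img-clear-overlay -f 1 && sudo reboot" || true
done

# 2-4. Install the eraser image and wipe both servers.
apt-get install -y cldemo-wbench-osinstaller-erasembr
cw-pxehelper -s server1 -o erasembr -n
cw-pxehelper -s server2 -o erasembr -n

# 5. Remove stale switch host keys.
rm -f /root/.ssh/known_hosts

# 6. Revoke all Puppet certificates.
puppet cert clean --all
```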

