This knowledge base has moved to the documentation site; please visit it there for the most up-to-date content. This site is no longer maintained.

Cumulus Workbench User Guide


The Cumulus Workbench is a pool of Cumulus Linux-compatible physical switches, cabled into various topologies and accessed remotely through a management virtual machine, for short-term training and testing.

Each workbench is dynamically provisioned, providing a clean, consistent user experience, and includes a number of demos for trialling various scenarios.


Available Configurations

The following network topologies are available on the workbench:

  • 2-switch
  • 4-switch, 2-server

2-Switch Topology


  • Two connected switches, one management system
  • SSH (in-band) and serial console (out-of-band) switch access

Possible use cases:

  • Single-switch uses + Prescriptive Topology Manager (PTM)
  • L2 protocol testing
  • Simple L3 protocol testing

4-Switch, 2-Server Topology (Leaf-Spine)


  • Four switches, connected as a 2-leaf, 2-spine network (folded Clos topology); leafs can be Trident II (T2) or non-T2
  • Two servers
  • SSH (in-band) and serial console (out-of-band) switch access

Possible use cases:

  • 2-switch uses + complex L3 protocol testing

Getting Started

Reserve a Workbench

To reserve a workbench, contact your Cumulus Networks account manager. A Cumulus Networks customer solutions engineer (CSE) will review your requirements with you, assist in developing test plans (if required), and reserve a lab at a suitable time.

Once the workbench is reserved, you will receive a confirmation email with a link to your full reservation details. In addition, your account manager and CSE can help answer any questions that arise during your workbench time.

Connect to the Workbench

The introductory email points you to the reservation landing page that contains your login credentials, including your workbench ID and password. Once you’ve received it, you can access the workbench.

Note: You must agree to the EULA before you can see the reservation landing page.

  1. In a terminal, SSH into the workbench, using the login credentials provided in the email:

  2. Accept the EULA to continue.

    Note: The EULA will appear the first time you log into the workbench.

Once the EULA has been accepted, the workbench is ready for use.

Review the Available Resources

The workbench resources are documented in a JSON file, /var/www/wbench.json. The file lists the connected switches and servers, along with relevant details such as hostnames, ports, IP addresses, and passwords.

An example wbench.json file for a 4-switch workbench is attached below: cw_json.txt.
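The file's exact schema isn't reproduced here, so the sketch below parses an inline stand-in sample; the "switches" and "hostname" field names are assumptions for illustration and may not match your wbench.json. On the workbench, pretty-print the real file first with python3 -m json.tool /var/www/wbench.json to see its actual layout.

```shell
# Parse a stand-in sample of wbench.json (assumed schema: a "switches"
# list with "hostname" keys). On the bench, load /var/www/wbench.json
# instead of the inline sample.
hostnames=$(python3 - <<'EOF'
import json

sample = '{"switches": [{"hostname": "leaf1"}, {"hostname": "spine1"}]}'
data = json.loads(sample)
print(",".join(sw["hostname"] for sw in data["switches"]))
EOF
)
echo "$hostnames"
```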

Install a License

The Cumulus Workbench switches are unlicensed when first provisioned, so a license must be installed before continuing. The license file is served from the workbench at /var/www/license.lic.

  1. In a terminal, run the following command on each switch to install the license:

    cumulus@switch:~$ sudo cl-license -i http://wbench/license.lic
  2. Restart the system:

    cumulus@switch:~$ sudo reboot

After the switch reboots, all front panel ports will be active. The front panel ports are identified as switch ports, and show up as swp1, swp2, and so forth.

Workbench Management

Navigating the Workbench

You navigate between the workbench systems over console connections, either by connecting directly to a host or switch serial console, or by using cwng-mux.

Connecting to a Serial Console

To connect to a console, run the following command on the workbench, replacing console-number with the relevant console number:

cumulus@wbench:~$ telnet console-number

The table below lists how to connect to each device:

Device    Command
leaf1     telnet 1001
leaf2     telnet 1002
spine1    telnet 2001
spine2    telnet 2002
server1   telnet 3001
server2   telnet 3002
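The console numbers follow a simple pattern: leaves in the 1000s, spines in the 2000s, servers in the 3000s, each ending in the device number. As a convenience sketch (this helper is not part of the workbench tooling), the number can be derived from the device name:

```shell
# Hypothetical helper: map a device name to its console number using
# the pattern from the table above.
console_for() {
  case "$1" in
    leaf*)   echo "100${1#leaf}" ;;
    spine*)  echo "200${1#spine}" ;;
    server*) echo "300${1#server}" ;;
    *)       echo "unknown device: $1" >&2; return 1 ;;
  esac
}

console_for leaf1    # 1001
console_for server2  # 3002
```

With the helper defined, telnet "$(console_for spine1)" opens spine1's console.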

Navigating the Workbench Using cwng-mux

The Cumulus Workbench can be navigated with cwng-mux, a wrapper for tmux that provides easy SSH and telnet access to the consoles on the workbench switches and servers.

To use cwng-mux, run the cwng-mux command in a terminal:

cumulus@wbench:~$ cwng-mux

Note: The terminal becomes a multi-window display once cwng-mux is running, with four windows displayed by default, or five if the workbench contains servers.

Window         Content
0: Overview    ASCII topology diagram and key binding reminder.
1: Workbench   Shell (bash) on the local workbench.
2: Switches    Split-pane window (1, 2, or 4 panes), connected via SSH to the serial console line of each switch.
3: Servers     Optional split-pane window (1 or 2 panes), connected via SSH to the server console lines.
4: Apache Log  Log messages from tailing /var/log/apache/access.log. Useful for viewing ONIE install logging data.

At the bottom of the terminal window, a status bar displays the workbench number, topology type, reservation ID, and when the lab reservation expires. The blue background signifies which window has the current focus.

The default tmux commands are unchanged:

Sequence   Action
Ctrl-b n   Changes to the next window (0 to 1, 1 to 2, and so on).
Ctrl-b o   Changes to the next pane within a window.
Ctrl-b d   Detaches the tmux session, returning you to the wbench prompt.
Ctrl-b p   Changes to the previous window.
Ctrl-b ?   Displays tmux key binding help.

Two custom cwng-mux shortcuts allow you to quickly enter the username and password at the command prompt on a switch:

  • When prompted for the username, type Ctrl-b then Ctrl-c in quick succession to enter cumulus as the username.
  • When prompted for the password, type Ctrl-b then Ctrl-p in quick succession to enter CumulusLinux! as the password.

Power Cycling the Switches

Workbench switches can be power cycled with the cwng-swpower command.

Note: cwng-swpower includes a reinstall operation. When used, the switch is rebooted and the ONIE install mode is triggered.

cumulus@wbench:~$ cwng-swpower -h
Usage: cwng-swpower [options]

  -h, --help            show this help message and exit
  -o OPERATION, --operation=OPERATION
  -s SWITCHES, --switch=SWITCHES
                        Comma separated switch names, like leaf1,spine1
  -a, --all             All switches
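Only the reinstall operation is named in this guide, so the sketch below uses it; because reinstall reboots the switch into ONIE install mode, the command is built into a variable and echoed as a dry run so the full invocation can be inspected before actually executing it on the workbench.

```shell
# Dry run: build the command, then print it rather than executing it.
# Run $CMD on the workbench to trigger the (destructive) reinstall.
SWITCHES="leaf1,spine1"
CMD="cwng-swpower --operation=reinstall --switch=$SWITCHES"
echo "$CMD"
```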

GateOne Setup

  1. Open GateOne in a browser.

  2. Enter the username and password from the lab sheet when prompted.

  3. Click the terminal icon to launch your first terminal.

  4. Press Return to accept the default values for Host/IP and Port, then use the username and password to log in.

  5. To open a second window without entering the password again, use the “Duplicate Session” option under the terminal icon in the right-hand toolbar.

Workbench Setup and Configuration

Installing an Operating System (4-Switch, 2-Server Topologies Only)

If a 4-switch, 2-server topology has been reserved, an operating system should be installed on each host in the workbench. The following operating system installers are available to install on the workbench hosts:

  • Ubuntu Server 12.04 x64: cldemo-wbench-osinstaller-ubuntuserver-precise
  • Ubuntu Server 14.04 x64: cldemo-wbench-osinstaller-ubuntuserver-trusty
  • VMware ESXi 5.5u1: cldemo-wbench-osinstaller-vmwareesxi55

Note: Multiple operating systems can be installed on a single workbench, as different hosts can run different operating systems.


Important: The OS installer package depends on cldemo-wbench-tftpd. This package sets up PXELINUX; you can see the PXELINUX files in the listing below.

root@wbench:~# ls -l /var/lib/tftpboot/pxe/
total 100
-rwxr-xr-x 1 root root 14884 Apr  1  2010 chain.c32
-rwxr-xr-x 1 root root 54964 Apr  1  2010 menu.c32
-rwxr-xr-x 1 root root 16794 Apr  1  2010 pxelinux.0
drwxr-xr-x 2 1086 3000  4096 Jul 29 12:43 pxelinux.cfg
drwxr-xr-x 3 root root  4096 Jul 29 12:43 ubuntu-installer-trusty

The pxelinux.cfg directory contains the menu files for the install:

root@wbench:~# ls -l /var/lib/tftpboot/pxe/pxelinux.cfg/
total 12
-rw-r--r-- 1 1086 3000 184 Jul 22 11:11 cfg-localhdd
-rw-r--r-- 1 1086 3000 463 Jul 22 11:11 cfg-ubuntuserver-trusty
-rw-r--r-- 1 root root 184 Jul 29 12:43 default

The PXELINUX menus are shown above. For each installed operating system, a menu is added; in this case, cfg-ubuntuserver-trusty. A similar directory structure is present for ESXi 5.5.
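The contents of these menu files aren't reproduced above; as a sketch based on standard PXELINUX syntax (not the actual cfg-localhdd shipped on the workbench), a local-boot entry typically chainloads the first disk via the chain.c32 module seen in the earlier listing:

```
# Hypothetical cfg-localhdd-style entry (standard PXELINUX syntax)
DEFAULT localhdd

LABEL localhdd
  KERNEL chain.c32
  APPEND hd0
```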

Downloading the Server OS

  1. Connect to the workbench with SSH.
  2. Log in as the root user:
    cumulus@wbench:~$ sudo -i
  3. Update the host:
    root@wbench:~# apt-get -q -y update
  4. Download the operating system installer:
    • For Ubuntu Server 12.04 x64:
      root@wbench:~# apt-get install cldemo-wbench-osinstaller-ubuntuserver-precise
    • For Ubuntu Server 14.04 x64:
      root@wbench:~# apt-get install cldemo-wbench-osinstaller-ubuntuserver-trusty
    • For VMware ESXi 5.5u1:
      root@wbench:~# apt-get install cldemo-wbench-osinstaller-vmwareesxi55

Installing the Operating System

The cwng-pxehelper command, which ships with the tftpd package in the cldemo repository, simplifies the installation process. The hosts are configured to boot from PXE first by default, followed by local disks. Using TFTP and PXE booting, the operating system can be installed on a host from the workbench.

cwng-pxehelper triggers a host to install an OS unattended, and after installation, configures the host to boot locally, rather than from PXE.

The installer packages download the OS either from the Internet or an internal mirror (in the case of ESXi), and extract it to the TFTP server on the workbench (in /var/lib/tftpboot).

To install the operating system, run the following command, replacing hostname and osname with the host and operating system names:

root@wbench:~# cwng-pxehelper -s hostname -o osname -n
  • Example: Ubuntu Server 14.04 x64:

    root@wbench:~# cwng-pxehelper -s server1 -o ubuntuserver-trusty -n
    * Copied PXELinux config for server1 / 01-00-25-90-2c-bd-30
    * Attempting IPMI (PXE first and reboot)
    * Power is on, setting PXE boot and power cycling
  • Example: VMware ESXi 5.5u1:

    root@wbench:~# cwng-pxehelper -s server2 -o esxi55 -n
    * Copied PXELinux config for server2 / 01-00-25-90-2c-aa-ff
    * Attempting IPMI (PXE first and reboot)
    * Power is on, setting PXE boot and power cycling

An unattended installation will now start on the host. The cwng-pxehelper script looks up the IPMI details and PXE MAC from the workbench JSON, then copies the relevant OS PXELinux config.

In the example output below, the cfg-ubuntuserver-trusty PXELinux config has been copied to the MAC of the server so that it runs when the server boots:

root@wbench:~# ls -l /var/lib/tftpboot/pxe/pxelinux.cfg/
total 16
-rw-r--r-- 1 root root 463 Jul 29 12:48 01-00-25-90-2c-bd-30
-rw-r--r-- 1 1086 3000 184 Jul 22 11:11 cfg-localhdd
-rw-r--r-- 1 1086 3000 463 Jul 22 11:11 cfg-ubuntuserver-trusty
-rw-r--r-- 1 root root 184 Jul 29 12:43 default
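The per-host filename in these listings (01-00-25-90-2c-bd-30) follows PXELINUX's standard MAC-based naming: the hardware type 01 (Ethernet), then the MAC address in lowercase with dashes. A small sketch of that conversion (the helper name is made up for illustration):

```shell
# Hypothetical helper: derive the PXELinux per-host config filename
# from a MAC address (01- Ethernet prefix, colons -> dashes, lowercase).
mac_to_pxe_cfg() {
  echo "01-$(echo "$1" | tr ':' '-' | tr 'A-F' 'a-f')"
}

mac_to_pxe_cfg 00:25:90:2C:BD:30   # 01-00-25-90-2c-bd-30
```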

Logging in to the Host OS (4-Switch, 2-Server Topologies Only)

Each host OS uses the same credentials as a Cumulus Linux switch:

  • username: cumulus
  • password: CumulusLinux!

Securely Wiping a Host (4-Switch, 2-Server Topologies Only)

An additional OS installer image, cldemo-wbench-osinstaller-erasembr, is available to erase the master boot record on a host, to ensure any proprietary data used on the host is deleted.

  1. Log in as the root user:
    cumulus@wbench:~$ sudo -i
  2. Update the host:
    root@wbench:~# apt-get -q -y update
  3. Download the installer:
    root@wbench:~# apt-get install cldemo-wbench-osinstaller-erasembr
  4. Run cwng-pxehelper to erase the master boot record:
    root@wbench:~# cwng-pxehelper -s server1 -o erasembr
    * Removed existing config for server1 / 01-c8-1f-66-b8-dc-12
    * Copied PXELinux config for server1 / 01-c8-1f-66-b8-dc-12

Transferring Files To/From the Workbench

Files can be transferred between the workbench and an external server using the scp, or secure copy, command.

Note: The SSH port number for the workbench must be specified with the -P option when using scp.

scp -P 7420 /remote/location/of/the/file
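A complete scp invocation also needs a source and destination; the hostname below is a placeholder (substitute the details from your reservation), and the command is built and echoed as a dry run so it can be checked before copying anything:

```shell
# Placeholders: use the port and hostname from your reservation details.
WB_PORT=7420
WB_HOST=wbench.example.com
CMD="scp -P $WB_PORT cumulus@$WB_HOST:/remote/location/of/the/file ."
echo "$CMD"   # dry run; when run, copies the remote file to the current directory
```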

