
[RETIRED] Demo: iBGP Using Ansible in the Cumulus Workbench


Important! This article has been retired. Check out our GitHub site for the latest demos.

Using Zero Touch Provisioning and Ansible, a set of nodes can be automatically configured to establish iBGP in Quagga and to use PTM to verify the topology. Ansible collects the active state from the nodes and reports it on the management workstation.


Demonstrated Features

Supported Topologies

  • 2-Switch
  • 2-Spine + 2-Leaf (T2)


Source Code

  • cldemo-wbench-ibgp-ansible

Network Diagrams

2-Spine + 2-Leaf (T2):
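PTM validates the cabling against a topology file written in Graphviz DOT format (the topology file listed under File Descriptions below). A hypothetical sketch for the 2-spine/2-leaf topology follows; the port assignments are assumptions, not taken from the demo source:

```dot
/* Hypothetical PTM topology for the 2-spine/2-leaf workbench. */
/* The switch ports actually used by the demo may differ.      */
graph "2spine-2leaf" {
    "spine1":"swp1" -- "leaf1":"swp1";
    "spine1":"swp2" -- "leaf1":"swp2";
    "spine1":"swp3" -- "leaf2":"swp1";
    "spine1":"swp4" -- "leaf2":"swp2";
    "spine2":"swp1" -- "leaf1":"swp3";
    "spine2":"swp2" -- "leaf1":"swp4";
    "spine2":"swp3" -- "leaf2":"swp3";
    "spine2":"swp4" -- "leaf2":"swp4";
}
```

With a wiring like this, each leaf has two links to each spine, which matches the four active links that ptmctl reports later in the demo.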

File Descriptions

File Description
/var/www/ansible_authorized_keys  Public key authorizing the root user to manage the nodes from the workbench host.
/var/www/  ZTP automation script that installs the root user's authorized key file.
/var/www/  Network topology file for PTM to validate against.
  List of the nodes Ansible will manage.
/home/cumulus/example-ibgp-ansible/handlers/main.yml  The main Ansible handlers file, defining how to accomplish the required tasks.
/home/cumulus/example-ibgp-ansible/roles/ibgp/tasks/main.yml  The main Ansible tasks file, setting out what needs to be done on the nodes.
  The directory holding the templates Ansible uses to build the configuration files for the nodes.
  The primary file Ansible uses to apply the overall policy to the site.
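The inventory ("List of nodes Ansible will manage" above) is typically a short INI-style hosts file. A hypothetical sketch for the 2-spine/2-leaf topology follows; the group and host names are assumptions based on the node names used later in this article:

```ini
# Hypothetical Ansible inventory sketch; actual workbench hostnames may differ.
[spine]
spine1
spine2

[leaf]
leaf1
leaf2
```

Grouping hosts like this is what allows a later command such as `ansible leaf -i hosts ...` to target only the leaf nodes.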

Running the Demo

Preparing the Environment

Ansible, along with the iBGP demo configuration, is installed from the workbench. To install it:

  1. Log into the Cumulus workbench and accept the End User License Agreement

  2. As the root user, update the workbench

    root@wbench:~# apt-get update
  3. Install the Ansible package for iBGP

    root@wbench:~# apt-get install cldemo-wbench-ibgp-ansible
  4. Navigate into the Ansible example directory

    root@wbench:~# cd /home/cumulus/example-ibgp-ansible/
  5. Restart each switch so that the autoprovisioning script that allows Ansible to manage it can run

    root@wbench:/home/cumulus/example-ibgp-ansible# cwng-swpower -a -o reset
  6. Ping each node with Ansible, and follow the prompts to accept the SSH keys

    root@wbench:/home/cumulus/example-ibgp-ansible# ansible all -i hosts -m ping

Confirming the Environment is Ready

To confirm the environment is ready for the demo, you can:

  • Follow along via the console for each switch:

    root@wbench:/home/cumulus/example-ibgp-ansible# tail -f /var/log/autoprovision
  • Check the status of the environment:

    root@wbench:/home/cumulus/example-ibgp-ansible# grep complete /var/lib/cumulus/autoprovision.conf

Giving the Demo

To start Ansible, and provision the nodes:

  1. Run the following command on the workbench:

    root@wbench:/home/cumulus/example-ibgp-ansible# ansible-playbook -i hosts site-ibgp.yml

    Ansible will now run through the set of actions that it has been configured to perform. These include:

    • Setting the MOTD of the nodes
    • Performing all of the steps necessary to install the Cumulus Linux license and ensure that switchd restarts
    • Setting the configuration of all of the interfaces
    • Providing ptmd with a network topology file
    • Enabling quagga with BGP turned on
    • Restarting all of the services that have been provided with new configurations
  2. Once the services have restarted, log into a switch:

    root@wbench:/home/cumulus/example-ibgp-ansible# ssh leaf1
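The task list in step 1 is driven by roles/ibgp/tasks/main.yml together with handlers/main.yml. A hypothetical sketch of what such a tasks file might look like follows; the module arguments, template names, and destination paths are assumptions, not taken from the demo source:

```yaml
# Hypothetical sketch of roles/ibgp/tasks/main.yml; the real file may differ.
- name: Set the MOTD
  template: src=motd.j2 dest=/etc/motd

- name: Configure the interfaces
  template: src=interfaces.j2 dest=/etc/network/interfaces
  notify: restart networking

- name: Install the PTM topology file
  copy: src=topology.dot dest=/etc/ptm.d/topology.dot
  notify: restart ptmd

- name: Enable BGP in Quagga
  template: src=Quagga.conf.j2 dest=/etc/quagga/Quagga.conf
  notify: restart quagga
```

Each `notify` refers to a handler defined in handlers/main.yml, which is how only the services whose configuration actually changed get restarted.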

The following commands can now be run to verify the PTM and LLDP neighbors and to observe the resulting BGP and routing state:

  • Verify the neighbors:

    root@leaf1:~# ptmctl

    Four active links should be present; two to each spine.

  • Display the BGP summary, showing four BGP peers, two to each spine:

    root@leaf1:~# cl-bgp summary
  • Check the state of all switches using Ansible:

    • Display the ptmctl output for each node, to show the four active links, two to each neighbor:

      root@wbench:/home/cumulus/example-ibgp-ansible# ansible all -i hosts -m shell -a "ptmctl"
    • Display the BGP summary for each of the four nodes, showing four active peers, two to each partner node:

      root@wbench:/home/cumulus/example-ibgp-ansible# ansible all -i hosts -m shell -a "cl-bgp summary"
    • Display the route table output for each node, to show all local and remote links on all of the other nodes:

      root@wbench:/home/cumulus/example-ibgp-ansible# ansible all -i hosts -m shell -a "ip route show"

In the 2-spine/2-leaf (T2) topology, Ansible can also run commands on a smaller subset of nodes. The following example command displays the bridge table output for the two leaf nodes, to show the two VLANs that are defined:

root@wbench:/home/cumulus/example-ibgp-ansible# ansible leaf -i hosts -m shell -a "brctl show"

Note: Other commands can be run on each switch to output additional information, by replacing the command string in the quotes of the commands listed above.

Repeating the Demo

In order to run the demo more than once, Ansible can clear the overlay filesystem and reboot the devices:

  1. Remove the .license.txt file from each node:

    root@wbench:/home/cumulus/example-ibgp-ansible# ansible all -i hosts -m shell -a "rm /mnt/persist/etc/cumulus/.license.txt"
  2. Clear the overlay and reboot the devices:

    root@wbench:/home/cumulus/example-ibgp-ansible# ansible all -i hosts -m shell -a "/usr/cumulus/bin/cl-img-clear-overlay -f 1; reboot"
  3. When the switches have rebooted, remove the known_hosts file on the workbench VM, to clear the host SSH keys:

    root@wbench:/home/cumulus/example-ibgp-ansible# rm /root/.ssh/known_hosts
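Removing the whole known_hosts file also discards keys for hosts unrelated to the demo. A narrower alternative (an assumption, not part of the original demo) is to delete only the demo nodes' entries with OpenSSH's ssh-keygen -R:

```shell
# Remove only the demo nodes' stale host keys, leaving other entries intact.
# KNOWN_HOSTS defaults to root's file as used in the demo; override for testing.
KNOWN_HOSTS="${KNOWN_HOSTS:-/root/.ssh/known_hosts}"
for host in spine1 spine2 leaf1 leaf2; do
    # -R deletes every key belonging to the host; -f names the file to edit.
    ssh-keygen -R "$host" -f "$KNOWN_HOSTS" >/dev/null 2>&1 || true
done
```

ssh-keygen keeps a backup of the edited file with a .old suffix, so the removed entries can be recovered if needed.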

The environment is now ready to run the demo again.

Command Reference Cheat Sheet

Description Topology Command
Switch to root user All
sudo -i
Ensure that the apt cache is updated All
apt-get update
Install the demo files All
apt-get install cldemo-wbench-ibgp-ansible
Change into the Ansible demo directory All
cd /home/cumulus/example-ibgp-ansible/
Reset the nodes All
cwng-swpower -a -o reset
Ping the nodes to get the SSH keys All
ansible all -i hosts -m ping
Launch the primary Ansible deployment All
ansible-playbook -i hosts site-ibgp.yml
Connect to leaf1 All
ssh leaf1
Check PTM All
ptmctl
Check BGP peers All
cl-bgp summary
Exit to return to the MGMT host All
exit
Check PTM on all of the nodes All
ansible all -i hosts -m shell -a "ptmctl"
Check BGP peers on all of the nodes All
ansible all -i hosts -m shell -a "cl-bgp summary"
Check the routing table of the nodes All
ansible all -i hosts -m shell -a "ip route show"
Check a leaf's VLAN setup 2-Spine + 2-Leaf (T2)
ansible leaf -i hosts -m shell -a "brctl show"
Remove the license key All
ansible all -i hosts -m shell -a "rm /mnt/persist/etc/cumulus/.license.txt"
Clear overlay and reboot the nodes All
ansible all -i hosts -m shell -a "/usr/cumulus/bin/cl-img-clear-overlay -f 1; reboot"
Remove your SSH known hosts All
rm /root/.ssh/known_hosts



