
[RETIRED] Demo: iBGP Using Puppet in the Cumulus Workbench


Important! This article has been retired. Check out our GitHub site for the latest demos.


Using Zero Touch Provisioning (ZTP) and Puppet, a set of switches can be automatically configured to establish iBGP in Quagga and to leverage Prescriptive Topology Manager (PTM) to verify the physical topology.


Demonstrated Features

Supported Topologies

  • 2-Switch
  • 2-Spine + 2-Leaf(T2)

Packages

  • cldemo-wbench-ibgp-puppet

Network Diagrams

2-Switch:

2-Spine + 2-Leaf(T2):

File Descriptions

File                                  Description
/var/www/provision-puppet.sh          ZTP automation script to update Puppet
/var/www/topology.dot                 Network topology file for PTM to validate against
/home/cumulus/example-ibgp-puppet     Puppet configuration files
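
The topology.dot file uses the graphviz DOT format that ptmd reads, with one edge per physical cable between two switch ports. The snippet below is only a sketch of that syntax; the hostnames and ports are placeholders, and the file shipped with the package matches the chosen topology:

    graph G {
        // placeholder cabling for illustration only
        "spine1":"swp1" -- "leaf1":"swp1";
        "spine2":"swp1" -- "leaf1":"swp2";
    }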

Running the Demo

Preparing the environment

Puppet, together with the iBGP demo configuration, is installed from the workbench. To install Puppet:

  1. Log into the Cumulus workbench and accept the End User License Agreement.

  2. As the root user, update the package lists on the workbench:

    root@wbench:~# apt-get update
  3. Install the Puppet package for the iBGP demo:

    root@wbench:~# apt-get install cldemo-wbench-ibgp-puppet
  4. Restart each switch so that the autoprovisioning script, which installs Puppet and lets it manage the switches, can run (a sketch of such a script follows these steps):

    root@wbench:~# cwng-swpower -a -o reset
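
For context, a Cumulus Linux ZTP script is an executable that contains the CUMULUS-AUTOPROVISIONING marker and runs once when an unprovisioned switch boots. The following is a minimal sketch of what a script like provision-puppet.sh might do; it is illustrative only and does not reproduce the packaged script:

    #!/bin/bash
    # CUMULUS-AUTOPROVISIONING
    # Illustrative sketch only; the packaged provision-puppet.sh differs.
    apt-get update
    apt-get install -y puppet    # install the Puppet agent on the switch
    puppet agent --test          # run the agent once to apply its catalog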

Confirming the Environment is Ready

Important: Restarting the switches takes about ten minutes.

To confirm the environment is ready for running the demo, you can view Puppet's progress via /var/log/syslog, follow along via the console of each switch, or check the status of the autoprovisioning script.

  • To follow progress via the console:

    root@wbench:~# tail -f /var/log/autoprovision
  • To check the status and confirm the environment is ready:

    root@wbench:~# grep complete /var/lib/cumulus/autoprovision.conf

To confirm Puppet has run as part of the autoprovisioning script:

  • Display the ZTP log on a switch:

    root@leaf1:~# tail /var/log/autoprovision

    If Puppet has not run:

    • Log into each switch and run the Puppet agent:

      root@leaf1:~# puppet agent --test

Giving the Demo

This demo will:

  • Set the MOTD of the nodes
  • Perform all of the steps necessary to install the Cumulus Linux license and ensure that switchd restarts
  • Set the configuration of all of the interfaces
  • Provide ptmd with a topology.dot
  • Enable Quagga with iBGP turned on (a sketch of the general configuration shape follows this list)
  • Restart all of the services that have been provided with new configurations
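
In an iBGP design the switches all peer within a single autonomous system, so each neighbor statement uses the same AS number as the local router. As a rough illustration only, with a placeholder AS number and placeholder neighbor addresses rather than the demo's actual values, the Quagga bgpd configuration that Puppet applies has this general shape:

    ! placeholder values for illustration; matching local and remote AS numbers make these iBGP sessions
    router bgp 65000
     bgp router-id 10.0.0.11
     neighbor 10.0.0.21 remote-as 65000
     neighbor 10.0.0.22 remote-as 65000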

The following commands can now be run on a leaf switch to verify the PTM and LLDP neighbors and to inspect the resulting BGP sessions and routes:

  • Verify the neighbors:

    root@leaf1:~# ptmctl

    Four active links should be present; two to each spine.

  • Display the BGP summary, showing four BGP neighbors, two to each spine:

    root@leaf1:~# cl-bgp summary
  • Display the installed routes:

    root@leaf1:~# ip route show

Repeating the Demo

To reset the demo:

  1. Log into each switch via SSH.

  2. On each switch, clear the overlay and reboot the device:

    root@leaf1:~# cl-img-clear-overlay -f 1; reboot

The environment is now reset for the demo to be run again.

Command Reference Cheat Sheet

Description                                            Topology   Command
Switch to root user                                    All        sudo -i
Ensure all of the repositories have the latest files   All        apt-get update
Install the demo files                                 All        apt-get install cldemo-wbench-ibgp-puppet
Restart the switches to have them install Puppet      All        cwng-swpower -a -o reset
Connect to leaf1                                       All        ssh leaf1
Check PTM                                              All        ptmctl
Check BGP peers                                        All        cl-bgp summary
Exit to return to the MGMT host                        All        exit
