[RETIRED] Demo: OpenStack + Cumulus VX "Rack-on-a-Laptop" Part I (L2+MLAG)


Important: This demo has been retired. Please consult the updated version.

This demo illustrates the dynamic provisioning of VLANs using a virtual simulation of two Cumulus VX Linux leaf switches and two CentOS 7 (RDO Project) servers that together comprise an OpenStack environment. For simplicity, the Controller Node, Dashboard Node, Network Node, and Compute Node have been combined on a single server instance; in a production environment, it is often preferable to split them out onto separate servers.

{{table_of_contents}}

Overview

The Cumulus Modular Layer 2 (ML2) Mechanism Driver for OpenStack resides on the OpenStack Controller Node and provisions VLANs on demand. The ML2 driver sends requests to the HTTP API server residing on the Cumulus Linux switch. As a result, instances (virtual machines) can communicate with each other across multiple switches without the top of rack switch having to be configured beforehand. Without the Cumulus ML2 Mechanism Driver, VLANs would need to be preconfigured on the Cumulus top of rack switches and on the Layer 2 inter-switch links; in that case, the VLAN range defined on the top of rack switch must mirror the range defined in the /etc/neutron/plugins/ml2/ml2_conf.ini file on the OpenStack Network Node.
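
For illustration, in the static case, the VLAN range that Neutron allocates from appears under the [ml2_type_vlan] section of that file; the values below are examples rather than the settings shipped with this demo image:

    [cumulus@server1 ~]$ sudo grep -A 1 ml2_type_vlan /etc/neutron/plugins/ml2/ml2_conf.ini
    [ml2_type_vlan]
    network_vlan_ranges = physnet1:100:200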

For more details on how to configure an OpenStack environment with static VLANs, refer to the Cumulus Networks OpenStack Validated Design Guide.

Prerequisites

This demo requires a basic level of OpenStack knowledge. If you are new to OpenStack, read the Cumulus Networks OpenStack Validated Design Guide, which provides an overview of OpenStack as it pertains to Cumulus Linux.

System Requirements

This demo runs on Oracle VM VirtualBox with the following minimum requirements:

Feature            Minimum Requirements
Operating system   Windows/Mac OS/Linux
Oracle VirtualBox  Version 5.0
RAM                8GB
Hard disk          40GB
CPU                Intel Core i5
Web browser        Chrome, Firefox, Safari

Tip: Shut down all other memory-intensive apps before running the demo.

Preparing the Environment

  1. Download the OpenStack demo OVA file from the Cumulus Networks Box.com site. The image file is approximately 3GB in size.

  2. Launch Oracle VM VirtualBox.

    1. Import the OVA by selecting File -> Import Appliance…

      Note: If you are reinstalling the demo, make sure you previously deleted the VMs and all associated files.

    2. Accept the default values when prompted during the configuration of the appliance.

      Important: Ensure the following default option is unchecked when prompted: Reinitialize the MAC address of all network cards.

    Note: The import process can take 2 to 3 minutes, depending on the hardware.

  3. Start all four virtual machines imported into VirtualBox.

    Note: The start order does not matter; however, wait a couple of minutes for all four VMs to complete the boot process. A command-line alternative to the GUI is sketched after the table below.

    Virtual Machine Information

    VM Name       OS                               Purpose
    RDO Server1   CentOS 7/RDO (Liberty release)   RDO Project Network/Controller/Compute Node. Uses a Linux bridge rather than an OVS bridge for simplicity.
    RDO Server2   CentOS 7/RDO (Liberty release)   RDO Project Compute Node
    CumulusLeaf1  Cumulus VX 2.5.3                 Top of rack switch 1
    CumulusLeaf2  Cumulus VX 2.5.3                 Top of rack switch 2

    Note: For simplicity, the network node, controller, and a compute node have been combined on a single server. These are typically separate in a more realistic environment.
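
    If you prefer the command line, the VirtualBox VBoxManage tool can import the appliance and start the VMs headless. The OVA filename below is a placeholder for the file you downloaded; --options keepallmacs mirrors the GUI instruction to not reinitialize MAC addresses:

    $ VBoxManage import openstack-demo.ova --options keepallmacs
    $ VBoxManage startvm "RDO Server1" --type headless
    $ VBoxManage startvm "RDO Server2" --type headless
    $ VBoxManage startvm "CumulusLeaf1" --type headless
    $ VBoxManage startvm "CumulusLeaf2" --type headless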
  4. Once all four VMs are running, open the following browser tabs to log into each server and switch using the usernames and passwords provided:

    URL                     Application                                      Authentication
    http://localhost:8080   Horizon Dashboard                                demo/cumulus
    http://localhost:8800   Server1 - Controller/Network Node/Compute Node   cumulus/cumulus
    http://localhost:8801   Server2 - Compute Node                           cumulus/cumulus
    http://localhost:8802   Cumulus Leaf 1                                   cumulus/cumulus
    http://localhost:8803   Cumulus Leaf 2                                   cumulus/cumulus

    Note: These browser tabs are provided for your convenience so that you do not need to use the consoles provided by VirtualBox.

    Important: If you do not log in promptly, the browser session may time out after 60 seconds.

    Note: Browser access to the switches and Linux compute nodes is neither typical nor secure at a customer site; it is provided here for demo simplicity and convenience. The Horizon Dashboard, however, is a browser-based UI that OpenStack customers do use.

  5. Run the following commands on the Cumulus leaf switches to confirm the baseline configuration:

    $ ifquery -a
    auto lo 
    iface lo inet loopback 
     
    auto eth0 
    iface eth0 
     address 192.168.100.X/24 
     post-up ip route add 0.0.0.0/0 via 192.168.100.1 
     
    auto swp32s0 
    iface swp32s0 
     
    auto bond0 
    iface bond0 
     mstpctl-portnetwork yes 
     bond-miimon 100 
     bond-lacp-rate 1 
     bond-min-links 1 
     bond-slaves glob swp17-18 
     bond-xmit-hash-policy layer3+4 
     mstpctl-bpduguard no 
     bond-mode 802.3ad 
    
    $ netshow int
    --------------------------------------------------------------------
        Name     Speed        MTU  Mode            Summary
    --  -------  ---------  -----  --------------  ----------------------------------
    UP  bond0    2G          1500  Bond            Bond Members: swp17(UP), swp18(UP)
    UP  eth0     1G          1500  Mgmt            IP: 192.168.100.4/24
    UP  lo       N/A        16436  Mgmt            IP: 127.0.0.1/8, ::1/128
    UP  swp17    1G          1500  BondMember      Master: bond0(UP)
    UP  swp18    1G          1500  BondMember      Master: bond0(UP)
    UP  swp32s0  1G(4x10G)   1500  IntTypeUnknown

    $ netshow lldp
    --------------------------------------------------------------------
    Local Port    Speed      Mode                  Remote Port        Remote Host
    ------------  ---------  --------------  ----  -----------------  --------------
    eth0          1G         Mgmt            ====  08:00:27:a6:6b:42  server1
                                             ====  eth0               leaf1
    swp17         1G         BondMember      ====  swp17              leaf1
    swp18         1G         BondMember      ====  swp18              leaf1
    swp32s0       1G(4x10G)  IntTypeUnknown  ====  enp0s9             server2

    Each switch is connected to the other switch via a bond (bond0) and has one OpenStack server connected to it on swp32s0.
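
    To dig further into the LACP state of the inter-switch bond, you can also read the kernel's bonding state file; this is standard Linux bonding behavior, not specific to this demo:

    $ cat /proc/net/bonding/bond0    # shows the 802.3ad (LACP) state and per-slave link status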

  6. Confirm the baseline state in the Horizon Dashboard browser tab (CLI equivalents are sketched after this list):

    • Click the Instances tab under the System menu to confirm that no instances are running yet.
    • Click Admin -> Networks and Admin -> Routers to confirm that no networks or routers are shown.
    • Click System -> Hypervisors to confirm that OpenStack sees two hypervisors (server1, server2).
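
    If you prefer the command line, roughly equivalent checks can be run from server1 using the Liberty-era clients. The admin credentials file name below is an assumption; only keystonerc_demo is confirmed by this demo:

    [cumulus@server1 ~]$ source $HOME/keystonerc_admin   # assumed admin credentials file
    [cumulus@server1 ~]$ nova list                       # instances
    [cumulus@server1 ~]$ neutron net-list                # networks
    [cumulus@server1 ~]$ neutron router-list             # routers
    [cumulus@server1 ~]$ nova hypervisor-list            # hypervisors (server1, server2)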
  7. Confirm that the altocumulus process is running. Its configuration is located in /etc/altocumulus/config.yaml (a port check is sketched at the end of this step):

    cumulus@leaf2$ ps -ef | grep altocumulus 
    
    root 2067 1 0 Nov13 ? 00:00:01 /usr/bin/python /usr/bin/altocumulus-api --config-file /etc/altocumulus/config.yaml 
    
    cumulus@leaf2$ cat /etc/altocumulus/config.yaml 
    
    bind: 0.0.0.0 
    port: 8140 
    debug: False 
    trunk_interfaces: bond0 

    On the network node (server1), you should see the following Cumulus ML2 Mechanism Driver configuration in the ml2_conf.ini file:

    [cumulus@server1 ~]$ sudo grep -A 2 cumulus /etc/neutron/plugins/ml2/ml2_conf.ini 
    mechanism_drivers = linuxbridge,cumulus 
    # Example: mechanism_drivers = openvswitch,mlnx 
    # Example: mechanism_drivers = arista
    -- 
    [ml2_cumulus] 
    switches=leaf1,leaf2
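
    As a quick sanity check, you can confirm that the API server is listening on the configured port (8140) on each leaf. The ss utility is standard on Cumulus Linux, though this exact check is not part of the demo script:

    cumulus@leaf2$ sudo ss -tlnp | grep 8140    # expect a LISTEN socket owned by the altocumulus-api process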

Running the Demo

Demo One: Single Tenant, One Network

In this scenario, OpenStack Heat is used to create one broadcast domain that spans two compute nodes, with one OpenStack VM per compute node in that broadcast domain. OpenStack creates the broadcast domain by picking a VLAN number from a range provided by Neutron. On each compute node (server1 and server2), a new bridge is created that contains the interface to the OpenStack VM and an interface on the switch-facing port.

On each Cumulus switch, a corresponding VLAN interface is automatically created on the inter-switch link (bond0) and the OpenStack server-facing port (swp32s0).
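
On a compute node, the result looks roughly like the following. The bridge name, tap interface, VLAN ID, and bridge ID shown here are illustrative, since Neutron derives them from the network UUID and the VLAN it allocates:

    [cumulus@server2 ~]$ brctl show
    bridge name      bridge id           STP enabled   interfaces
    brq1a2b3c4d-5e   8000.080027a66b42   no            enp0s9.100
                                                       tap9f8e7d6c-5b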

Below is a diagram showing what is done:

To run this demo:

  1. Log into RDO Server1 (http://localhost:8800).

  2. Follow the instructions in the Message of the Day that is displayed via the /etc/MOTD file.

    [cumulus@server1 ~]$ cd $HOME/cumulus_demo
    [cumulus@server1 cumulus_demo]$ ./one_tenant_subnet_demo.sh

Using the provided demo script, you can start, verify, and destroy tenant networks. The demo script walks you through the steps needed to see the demo in action:

  1. Log into the Horizon Dashboard and console into the VMs after they are created.
  2. Ping the VMs.
  3. Watch the bridge changes on the switch using the Linux watch command together with netshow; an example is sketched below.
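
For example, from a leaf switch console, the following refreshes the view every second (illustrative; the exact commands are not prescribed by the demo script):

    $ watch -n 1 netshow interface   # interface and bridge summary
    $ watch -n 1 brctl show          # raw bridge membership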

Verification

The following screenshots from the OpenStack Horizon Dashboard and from a leaf switch show what happens after provisioning a single subnet in a single tenant.

Demo Two: Single Tenant, Two Networks

This demo is similar to the previous one, but instead of creating only one broadcast domain, it creates two subnets in a single tenant and places two OpenStack VMs on each subnet. It also uses OpenStack Heat to perform this task.
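
Because Heat drives the provisioning, you can also list the stacks the script creates from server1 using the Liberty-era Heat client:

    [cumulus@server1 ~]$ source $HOME/keystonerc_demo
    [cumulus@server1 ~]$ heat stack-list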

Below is a diagram showing what is being done:

To run this demo:

  1. Log into RDO Server1 (http://localhost:8800).

  2. Follow the instructions in the Message of the Day that is displayed via the /etc/MOTD file.

    [cumulus@server1 ~]$ cd $HOME/cumulus_demo
    [cumulus@server1 cumulus_demo]$ ./two_tenant_subnets_demo.sh

Using the demo script, you can start, verify, and destroy tenants and subnets. The demo script walks through the steps required:

  1. Log into the Horizon Dashboard and console into the VMs after they are created.
  2. Ping the VMs.
  3. Watch the bridge changes on the switch using the Linux watch command together with netshow, as in the first demo.

Verification

The following screenshots from the OpenStack Horizon Dashboard show the results after provisioning two networks in a single tenant:

Caveats

  • This demo is for demonstration purposes only and not for production use cases. There are known issues with the Cumulus ML2 Mechanism Driver based on current testing by Cumulus Networks associates.

  • Do not restart the altocumulus-api daemon running on each Cumulus VX instance, and do not reboot the Cumulus VX instances themselves. If this occurs, re-run the demo by selecting the Start option (press 1) from the demo menu.

  • This demo is for single-attached servers only; it does not cover MLAG.

  • The Cumulus ML2 Mechanism Driver depends on LLDP to discover the switch ports connected to an OpenStack server. Bond interfaces do not carry LLDP information, and the driver does not include logic to inspect the LLDP switch information and determine whether the interface with the matching ifName is part of a bond.

  • LLDP is used to determine which switch port needs a VLAN added or removed. We recommend setting up PTM with a topology.dot file to confirm that LLDP is correctly set on all ports facing the OpenStack servers, as configuring LLDP on CentOS/Red Hat can be complicated (a sample topology.dot is sketched after this list).

  • The demo supports classic bridging only, as the Cumulus ML2 Mechanism Driver does not support VLAN-aware configurations.

  • The Cumulus ML2 Mechanism Driver stores its state in RAM. Rebooting either the API server on a Cumulus VX instance or the instance itself requires the demo to be restarted. Run the demo again using the Start option in the demo menu; the script destroys the previous OpenStack environment and then recreates it.
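
A minimal topology.dot for PTM on leaf2, matching the LLDP output shown earlier, might look like the following sketch; /etc/ptm.d/topology.dot is the standard PTM location on Cumulus Linux:

    graph G {
        "leaf2":"swp32s0" -- "server2":"enp0s9";
    }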

FAQs

Q: How can I get help on running this demo?
A: Although this demo is unsupported, feel free to join the discussion on the Open Networking Community post at: https://community.cumulusnetworks.com/cumulus/topics/demo-openstack-cumulus-vx-rack-on-a-laptop

Q: How do I run this in my test / production OpenStack environment?

A: The Cumulus ML2 Mechanism Driver is still in development and not yet production-ready.

Q: Where can I download the Cumulus ML2 Mechanism Driver components?

A: The driver is an open source project, available from the Cumulus Networks GitHub site. The driver comes in two parts: the first runs on the OpenStack network node, while the second runs on the Cumulus switch.

Note: For the network node software, a working .deb and RPM have not yet been validated and so are not available at this time.

Q: After a Cumulus VX instance boots up, it reports an error message stating that DBUS is not installed. Is this an error of concern?

A: No. This is a benign error and can be ignored.

Q: In the Single Tenant, Two Networks demo, when I run netshow interface or ip addr show, the default gateways of the OpenStack instances (VMs) are not present. Where are the default gateways of the subnets?

A: The default gateways of the subnets reside in an IP namespace on the network node (server1). To view the router configuration, run the following commands:

[cumulus@server1 ~]$ source $HOME/keystonerc_demo
[cumulus@server1 cumulus_demo(keystone_demo)]$ sudo ip netns exec `ip netns | grep qrouter` netshow int
--------------------------------------------------------------------
To view the legend, rerun "netshow" cmd with the "--legend" option
--------------------------------------------------------------------
    Name            Speed    MTU    Mode          Summary
--  --------------  -------  -----  ------------  ------------------------
UP  lo              N/A      65536  Loopback      IP: 127.0.0.1/8, ::1/128
UP  qr-4a01ecf9-f4  10G      1500   Interface/L3  IP: 10.100.1.1/24
UP  qr-be2c64c5-f1  10G      1500   Interface/L3  IP: 10.100.2.1/24
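
To verify reachability from the router namespace, a ping like the following should work once the demo is running; the instance address 10.100.1.10 is hypothetical:

[cumulus@server1 ~]$ sudo ip netns exec `ip netns | grep qrouter` ping -c 3 10.100.1.10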

Q: When I reboot the OpenStack server using the reboot or shutdown commands, the VM appears to hang. What do I do?

A: Force a reload via the VirtualBox Manager.

Q: Is the API Server running on the Cumulus switch secure?

A: No, it is not:

  • Anyone with knowledge of the switch IP address and the API server port number can send API calls to the switch and add bridges and VLANs.
  • The API server runs as root, which is a security risk.
  • The communication from the OpenStack controller (Neutron component) is unencrypted and unauthenticated.
  • The Neutron server sends HTTP, not HTTPS, requests to the API server on the Cumulus switch.
  • There is no password or public key authentication.

Q: May I re-distribute this demo OVA file to customers and prospects?

A: Yes. The OVA contains language stating that it is provided on an as-is basis, with no warranty or support.
