This knowledge base has moved to the documentation site, which has the most up-to-date content. This site is no longer maintained.

Cumulus RMP 3.4.3 Release Notes



These release notes support Cumulus RMP 3.4.3 and describe currently available features and known issues.

Cumulus RMP 3.4.3 is available for the Penguin Computing Arctica 4804IP-RMP out-of-band switch.

Stay Up to Date 

  • Please sign in and click Follow above so you can receive a notification when we update these release notes.
  • Subscribe to our product bulletin mailing list to receive important announcements and updates about issues that arise in our products.
  • Subscribe to our security announcement mailing list to receive alerts whenever we update our software for security issues.


What's New in Cumulus RMP 3.4.3

Cumulus RMP 3.4.3 contains critical bug fixes only. To see the list of new platforms, features and improvements in Cumulus RMP 3.4.x, read the Cumulus RMP 3.4.0 release notes.

Note: The early access (EA) version of NetQ is not supported with Cumulus RMP 3.4.3.

Installing Version 3.4.3

If you are upgrading from version 3.0.0 or later, use apt-get to update the software.

Cumulus Networks recommends you use the -E option with sudo whenever you run any apt-get command. This option preserves your environment variables — such as HTTP proxies — before you install new packages or upgrade your distribution.

  1. Run apt-get update.
  2. Run apt-get upgrade.
  3. Reboot the switch.
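The three steps above, using the recommended -E option, can be sketched as follows:

```shell
# Preserve environment variables such as HTTP proxies with -E,
# per the recommendation above.
sudo -E apt-get update
sudo -E apt-get upgrade

# Reboot the switch to complete the upgrade.
sudo reboot
```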

New Install or Upgrading from Versions Older than 3.0.0

If you are upgrading from a version older than 3.0.0, or installing Cumulus RMP for the first time, download the Cumulus RMP 3.4.3 installer for Broadcom switches from the Cumulus Networks website, then use ONIE to perform a complete install, following the instructions in the user guide.

Note: This method is destructive; configuration files on the switch are not preserved, so copy them to a different server before upgrading via ONIE.
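Because an ONIE install wipes the switch, you can back up configuration files first. The sketch below is only an example; the backup host and the exact file list are assumptions, so adjust both for your environment:

```shell
# Hypothetical backup destination; replace with your own server and path.
BACKUP=admin@backup.example.com:/backups/switch1/

# Copy commonly customized configuration files off the switch.
# Extend this list with any other files you have modified.
scp /etc/network/interfaces \
    /etc/quagga/Quagga.conf \
    /etc/snmp/snmpd.conf \
    "$BACKUP"
```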

Important! After you install, run apt-get update and then apt-get upgrade on the switch to ensure Cumulus RMP includes all important package updates.


For more information, read the Cumulus RMP technical documentation.

Issues Fixed in Cumulus RMP 3.4.3

The following is a list of issues fixed in Cumulus RMP 3.4.3 from earlier versions of Cumulus RMP.


RN-680 (CM-18029) 
NCLU `net del` followed by `net add` doesn't apply configuration correctly 

This issue occurs only when deleting a whole interface rather than specific content. For example, if you want to change an IP address on an interface, you would first delete the address by running net del interface swp1 ip address, then add the new one by running net add interface swp1 ip address.

However, if you instead removed the entire interface by running net del interface swp1, then added it back with the different IP address by running net add interface swp1 ip address, you would encounter this issue.
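The safe sequence described above can be sketched as follows; the addresses shown are hypothetical examples:

```shell
# Delete only the specific address, not the whole interface.
# 10.0.0.1/24 and 10.0.0.2/24 are example addresses.
net del interface swp1 ip address 10.0.0.1/24
net add interface swp1 ip address 10.0.0.2/24
net commit
```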

This issue has been fixed in Cumulus RMP 3.4.3.

RN-681 (CM-18217) 
NCLU `net show configuration commands` returns traceback/RuntimeError 

One reason this issue could occur is if you used NCLU to add multiple bonds whose names did not end in a digit (such as "leaf6cd") and that had a non-default MTU specified.

This issue has been fixed in Cumulus RMP 3.4.3.

RN-684 (CM-17698) 
Default RMP configuration is not compatible with NCLU due to presence of a glob 

If you use NCLU to update a switch port configuration in Cumulus RMP, you cannot commit the change, and errors like the following are returned:

ERROR: 'ifreload -a' failed due to:
warning: bridge: error parsing glob expression 'swp1' (supported glob syntax: swp1-10.300 or swp[1-10].300  or swp[1-10]sub[0-4].300
error: cmd 'ip link set dev swp1-48 master bridge' failed: returned 1 (Cannot find device "swp1-48"
error: bridge: bridge port swp1-48 does not exist

This is due to the default Cumulus RMP configuration, which uses a glob when assigning the switch ports to the bridge. NCLU did not support globs in Cumulus RMP 3.4.2 or earlier.

As of Cumulus RMP 3.4.3, globs are supported in NCLU.
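With glob support, the port assignment in the default configuration can be expressed directly in NCLU. A minimal sketch, assuming the bridge spans ports swp1 through swp48:

```shell
# Assign a range of switch ports to the default bridge using a glob.
net add bridge bridge ports swp1-48
net commit
```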

RN-686 (CM-18461) 
In NCLU, the SNMP module was not enabled by default 

As of Cumulus RMP 3.4.3, the SNMP module is enabled by default unless explicitly disabled by the user.
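To confirm the state of the SNMP agent on a running switch, you can use a generic systemd check (not Cumulus-specific):

```shell
# Verify that the SNMP agent is enabled and running.
sudo systemctl is-enabled snmpd
sudo systemctl status snmpd
```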

Known Issues in Cumulus RMP 3.4.3

Issues are categorized for easy review. Some issues have been fixed, but the fixes will be available in a later release.


RN-56 (CM-343)
IPv4/IPv6 forwarding disabled mode not recognized

If either of the following is configured:

net.ipv4.ip_forward = 0

net.ipv6.conf.all.forwarding = 0

the hardware still forwards packets if there is a neighbor table entry pointing to the destination.
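You can inspect the current values of these kernel settings with sysctl; note that, per this issue, the hardware may still forward packets even when both report 0:

```shell
# Check the current kernel forwarding settings.
sysctl net.ipv4.ip_forward
sysctl net.ipv6.conf.all.forwarding
```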

RN-120 (CM-477)
ethtool LED blinking does not work with switch ports

Linux uses ethtool -p to identify the physical port backing an interface, or to identify the switch itself. Usually this identification is done by blinking the port LED until ethtool -p is stopped.

This feature does not apply to switch ports (swpX) in Cumulus RMP.
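On interfaces where the feature is supported (for example the management port, but not swpX ports), port identification looks like this:

```shell
# Blink the LED on the management port for 10 seconds.
# This does not work on swpX switch ports in Cumulus RMP.
sudo ethtool -p eth0 10
```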

RN-121 (CM-2123)
ptmd: When a physical interface is in a PTM FAIL state, its subinterface still exchanges information

If a subinterface is configured on a physical interface, and the physical interface is incorrectly marked as being in a PTM FAIL state while the Zebra interface is enabled, routes on the physical interface are not processed in Quagga (the PIF BGP sessions do not establish routes), but the subinterface on top of it does establish routes.

Steps to reproduce:
cumulus@switch:$ sudo vtysh -c 'show int swp8' 
Interface swp8 is up, line protocol is up 
PTM status: fail
index 10 metric 1 mtu 1500 
 HWaddr: 44:38:39:00:03:88 
 inet broadcast 
 inet6 2001:cafe:0:38::1/64 
 inet6 fe80::4638:39ff:fe00:388/64 
cumulus@switch:$ ip addr show | grep swp8 
  mtu 1500 qdisc pfifo_fast state UP qlen 500 
  inet brd scope global swp8 
 104: swp8.2049@swp8: <BROADCAST,MULTICAST,UP,LOWER_UP> 
  mtu 1500 qdisc noqueue state UP 
  inet brd scope global swp8.2049 
 105: swp8.2050@swp8: <BROADCAST,MULTICAST,UP,LOWER_UP> 
  mtu 1500 qdisc noqueue state UP 
  inet brd scope global swp8.2050 
 106: swp8.2051@swp8: <BROADCAST,MULTICAST,UP,LOWER_UP> 
  mtu 1500 qdisc noqueue state UP 
  inet brd scope global swp8.2051 
 107: swp8.2052@swp8: <BROADCAST,MULTICAST,UP,LOWER_UP> 
  mtu 1500 qdisc noqueue state UP 
  inet brd scope global swp8.2052 
 108: swp8.2053@swp8: <BROADCAST,MULTICAST,UP,LOWER_UP>
  mtu 1500 qdisc noqueue state UP 
  inet brd scope global swp8.2053 
 109: swp8.2054@swp8: <BROADCAST,MULTICAST,UP,LOWER_UP> 
  mtu 1500 qdisc noqueue state UP 
  inet brd scope global swp8.2054
 110: swp8.2055@swp8: <BROADCAST,MULTICAST,UP,LOWER_UP>
  mtu 1500 qdisc noqueue state UP 
  inet brd scope global swp8.2055
cumulus@switch:$ bgp sessions:
 ,4 ,64057 , 958 , 1036 , 0 , 0 , 0 ,15:55:42,   0, 10472
 ,4 ,64058 , 958 , 1016 , 0 , 0 , 0 ,15:55:46, 187, 10285
 ,4 ,64059 , 958 , 1049 , 0 , 0 , 0 ,15:55:40, 187, 10285
 ,4 ,64060 , 958 , 1039 , 0 , 0 , 0 ,15:55:45, 187, 10285
 ,4 ,64061 , 958 , 1014 , 0 , 0 , 0 ,15:55:46, 187, 10285
 ,4 ,64062 , 958 , 1016 , 0 , 0 , 0 ,15:55:46, 187, 10285
 ,4 ,64063 , 958 , 1029 , 0 , 0 , 0 ,15:55:43, 187, 10285
 ,4 ,64064 , 958 , 1036 , 0 , 0 , 0 ,15:55:44, 187, 10285

RN-398 (CM-10379)
While upgrading Cumulus RMP, a prompt to configure grub-pc appears

While upgrading to the latest version of Cumulus RMP from version 2.5.5 or earlier, a prompt appears, asking you to choose the partitions onto which to install the GRUB boot loader. 


  1. /dev/mmcblk0 (3783 MB; ???)
  2. /dev/mmcblk0p3 (268 MB; /boot)
  3. /dev/dm-2 (1610 MB; CUMULUS-SYSROOT1)
  4. none of the above

(Enter the items you want to select, separated by spaces.)

GRUB install devices:


This prompt should not appear, and the issue will be fixed in a future release.

In the meantime, to work around this issue, choose option 1 (/dev/mmcblk0) and continue the upgrade.

RN-597 (CM-15705)
sFlow doesn't generate flow samples to sflowd on Tomahawk-based switches

At this time, sFlow is not supported on switches with Tomahawk ASICs. This is a known issue. 

RN-599 (CM-15949)
DHCRELAY automatically binds to eth0 when not specified in the configuration

dhcrelay listens on all interfaces that have an IP address, even interfaces it is not configured to listen on. This causes dhcrelay to bind to unspecified interfaces.

This behavior is expected, due to upstream configuration. The packet is dropped later in the process, as it is not coming from a configured port.

RN-602 (CM-)
sFlow ifSpeed incorrect in counter samples

Counter samples for an 80G bond (2 x 40G) exported from the switch show an interface speed (ifSpeed) of 14.464Gbps.

This issue is currently being investigated.

RN-605 (CM-15515)
Unable to change the bond-modes using ifup or ifreload

When the bond mode is changed from 802.3ad to balance-xor, or vice versa, using ifup bondx or ifreload -a, the bond mode does not change, and the following error is produced:
2017-03-23 21:39:37,495:  DEBUG:      autolib.netobjects: [cumulus@] sudo: ('ifup bond1',)
2017-03-23 21:39:37,926:  DEBUG:      autolib.netobjects: warning: error writing to file /sys/class/net/bond1/bonding/mode([Errno 39] Directory not empty)

This issue is being addressed in a later release.

RN-642 (CM-17107)
No reply to SNMP request (silently dropped) if the request is received on multiple interfaces

When an SNMP server has multiple network paths to the switch, it is expected that an SNMP request can arrive on any of several interfaces. However, Cumulus RMP silently drops SNMP requests that arrive on an interface that does not match the RIB lookup for the return path.

This issue occurs only if asymmetric routing is configured, where an SNMP request is received on multiple interfaces of the polled object.

To work around this issue, avoid asymmetric routing to the interfaces in question and configure a static route to pin the traffic to one link/bundle.
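The static-route workaround can be sketched in NCLU as follows; the prefix and next hop shown are hypothetical:

```shell
# Pin return traffic for the SNMP server (192.0.2.10 here, a
# hypothetical address) to a single next hop so that requests and
# replies traverse the same interface.
net add routing route 192.0.2.10/32 10.1.1.1
net commit
```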


