These release notes support Cumulus NetQ 1.1.0 and describe currently available features and known issues.
Stay Up to Date
- Please sign in and click Follow above so you can receive a notification when we update these release notes.
- Subscribe to our product bulletin mailing list to receive important announcements and updates about issues that arise in our products.
- Subscribe to our security announcement mailing list to receive alerts whenever we update our software for security issues.
What's New in Cumulus NetQ 1.1.0
Cumulus NetQ 1.1.0 includes the following features:
- Container visibility with Host Pack: Integration with Docker Swarm provides service- and application-level insights in containerized environments.
- EVPN: Fabric-wide configuration checks and traces enhancements so you can verify that your overlay control plane is configured correctly.
- Layer 1 cables and optics: Monitor your physical layer: check inventory status and peer connections, and investigate link mismatches and flapping.
- NetQ Notifier: Filter alerts and notifications based on severity, device and service, and redirect to the desired PagerDuty and Slack channels. Also, exporting NetQ data to ELK and Splunk is now generally available.
- Telemetry server: The telemetry server is now available as a KVM-based virtual machine, which runs on Ubuntu 16.04 and Red Hat Enterprise Linux 7.
- Standalone repository: NetQ packages are in their own separate repository.
Early Access Support
NetQ 1.1 also includes these early access features:
- Support for NetQ in the Facebook Backpack chassis: NetQ now runs across all the nodes in a Facebook Backpack chassis.
- Extending NetQ with custom commands: Create your own NetQ commands and codify your playbooks.
Cumulus NetQ 1.1 adds support for EVPN through the addition of the `netq check evpn` command, and by enhancing existing commands to verify that address families are configured correctly throughout the network fabric.
Caveats and Errata
- Cumulus NetQ 1.1 EVPN support can be used only with Cumulus Linux 3.4 and later releases, because `netq show bgp evpn vni` requires JSON output support; otherwise, `import-rt` field values are not listed.
- NetQ 1.1 uses FRRouting as its routing suite; NetQ 1.1 does not work with Quagga or Cumulus Routing on the Host.
- NetQ 1.1 does not provide functionality to measure HER rate. This will be added in a later release.
- MTU checking to verify that jumbo frames are configured consistently is not provided.
- `export-rt` checking is not enforced.
- Cumulus NetQ 1.1 does not support measuring excessive MAC moves.
You can read the technical documentation here.
Issues Fixed in Cumulus NetQ 1.1.0
The following is a list of issues fixed in Cumulus NetQ 1.1.0.
**Rotten nodes are not highlighted in `netq show` commands**

If a node is in the rotten state, it is not highlighted in the output of any `netq show` command:

```
[email protected]:~$ netq show ip route 22.214.171.124
Route info about prefix 126.96.36.199 on host *

Origin Table    IP        Node            Nexthops  Last Changed
------ -------- --------- --------------- --------- ------------
0      DataVrf1 0.0.0.0/0 msp1            Blackhole 47m ago
0      DataVrf1 0.0.0.0/0 msp2            Blackhole 47m ago   >> From rotten node
0      DataVrf1 0.0.0.0/0 superm-redxp-01 Blackhole 46m ago   >> From rotten node
0      DataVrf1 0.0.0.0/0 torb1           Blackhole 46m ago
```

This issue is fixed in Cumulus NetQ 1.1.0.
Known Issues in Cumulus NetQ 1.1.0
Issues are categorized for easy review. Some of these issues have already been fixed, but the fixes will not be available until a later release.
**Some platforms display "N/A" when `netq show inventory asic` is run**

NetQ does not return ASIC inventory data for certain platforms when you run `netq show inventory asic`:

```
[email protected]:~$ netq show inventory asic
Node               Vendor   Model    Model ID   Core B/W   Ports
------------------ -------- -------- ---------- ---------- -----------------------------
act-7412-03        N/A      N/A      N/A        N/A        N/A
dell-s4000-04      Broadcom Trident2 BCM56854   720G       48 x 10G-SFP+ & 6 x 40G-QSFP+
dell-s4000-05      Broadcom Trident2 BCM56854   720G       48 x 10G-SFP+ & 6 x 40G-QSFP+
dell-z9100-03      Broadcom Tomahawk BCM56960   2.0T       32 x 100G-QSFP28
mlx-2700b-01       N/A      N/A      N/A        N/A        N/A
qct-ix1-08         N/A      N/A      N/A        N/A        N/A
qct-ly9rangeley-06 Broadcom Trident2 BCM56854   720G       48 x 10G-T & 6 x 40G-QSFP+
qct-ly9rangeley-07 Broadcom Trident2 BCM56854   720G       48 x 10G-T & 6 x 40G-QSFP+
```
**`netq NODE show stp topology` shows wrong information when the node's bridge link is down**

When the node's bridge link is brought down, running `netq NODE show stp topology` returns:

```
No bridges found
```
**Stopping and starting mstpd causes `netq show services changes` to display two events**

The NetQ Agent reports two events because one reports the service in the "error" state and the other reports it as "failed". In both cases, however, the service is actually stopped, so the NetQ CLI shows the status as "n/a", making them look like two identical events.

There is no workaround at this time. While the issue is benign, it can potentially trigger an alarm event even when the service did not actually change state.

```
[email protected]:~$ sudo systemctl stop mstpd ; sudo systemctl start mstpd
[email protected]:~$ netq mlx-2700-03 show services mstpd changes
Matching services records are:
Node        Service   PID   VRF     Enabled Active Monitored Status Up Time DbState Last Changed
----------- --------- ----- ------- ------- ------ --------- ------ ------- ------- ------------
mlx-2700-03 mstpd     10917 default yes     yes    yes       ok     14s ago Add     2s ago
mlx-2700-03 mstpd     0     default yes     no     yes       n/a    22m ago Add     14s ago
mlx-2700-03 mstpd     0     default yes     no     yes       n/a    22m ago Add     17s ago
mlx-2700-03 mstpd     9211  default yes     yes    yes       ok     22m ago Add     22m ago
mlx-2700-03 mstpd     0     default yes     no     yes       n/a    24m ago Add     22m ago
mlx-2700-03 mstpd     0     default yes     no     yes       n/a    24m ago Add     22m ago
mlx-2700-03 mstpd     9015  default yes     yes    yes       error  24m ago Add     22m ago
mlx-2700-03 mstpd     9015  default yes     yes    yes       ok     24m ago Add     24m ago
mlx-2700-03 mstpd     0     default yes     no     yes       n/a    56m ago Add     24m ago
mlx-2700-03 mstpd     0     default yes     no     yes       n/a    56m ago Add     24m ago
mlx-2700-03 mstpd     429   default yes     yes    yes       ok     56m ago Add     37m ago
mlx-2700-03 mstpd     429   default yes     yes    no        ok     56m ago Add     37m ago
mlx-2700-03 mstpd     429   default yes     yes    yes       ok     56m ago Add     50m ago
mlx-2700-03 mstpd     429   default yes     yes    no        ok     56m ago Add     50m ago
```
**The regular expression for hostname does not work with the `netq NODE show agents` command**

The regular expression syntaxes of Python and the backend database differ, which makes supporting more complex regular expressions difficult at this time.

To work around this issue, use tab completion to get the full hostname and specify it directly.

The following patterns are usable:

- `*`: matches zero or more of any character, so `leaf*` matches leaf01, leaf02, and so on
- `?`: matches exactly zero or one character
- `[0-9]`: matches exactly one character that is a number between 0 and 9
- `[a-zA-Z]`: matches exactly one character that is a letter between a-z or A-Z
- `[-1-2]*`: matches zero or more characters that are either `-`, `1`, or `2`
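These patterns are glob-style rather than full regular expressions. As an illustration (not part of NetQ itself), Python's standard `fnmatch` module implements very similar glob semantics and can be used to preview what a pattern matches (note that fnmatch's `?` matches exactly one character, slightly different from the `?` described above):

```python
# Illustration only: fnmatch implements glob-style matching close to the
# hostname patterns described above (it is not part of NetQ).
from fnmatch import fnmatch

# '*' matches zero or more characters
assert fnmatch("leaf01", "leaf*")
assert fnmatch("leaf02", "leaf*")

# '[0-9]' matches exactly one digit
assert fnmatch("leaf01", "leaf[0-9][0-9]")
assert not fnmatch("leafXY", "leaf[0-9][0-9]")

# '[-1-2]*' matches zero or more of '-', '1', or '2'
assert fnmatch("spine-12", "spine[-1-2]*")

print("all patterns behave as described")
```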
**NetQ does not support traditional mode bridges**

Support for traditional mode bridges should come in a future release of NetQ.
**You cannot pipe a `netq` command to another `netq` command**

Attempting to pipe the output of one `netq` command into another `netq` command hangs until you interrupt it:

```
[email protected]:~$ netq show ip routes |netq resolve
^CTraceback (most recent call last):
  File "/usr/bin/netq", line 316, in <module>
    reply = rx_reply(sock)
  File "/usr/bin/netq", line 131, in rx_reply
    ready = select([sock], [], [], timeout)
KeyboardInterrupt
```
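The traceback shows the second `netq` process blocked in `select()`, waiting for a reply on its socket that never arrives. A minimal sketch of that blocking pattern, using a throwaway socket pair rather than NetQ's actual socket (hypothetical illustration, not NetQ code):

```python
# Sketch of the blocking read seen in the traceback: select() waits on a
# socket until data arrives or the timeout expires. If no peer ever writes,
# the readable list comes back empty, and a loop around this call hangs.
import socket
from select import select

a, b = socket.socketpair()   # connected pair; nothing is written to 'b'
timeout = 0.5                # seconds

readable, writable, errored = select([a], [], [], timeout)
print(readable)              # [] -> nothing to read; retrying forever = hang
```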
**NetQ does not support high availability (HA) mode**

HA support should be available in a future release of NetQ.
**Resizing the NetQ Service Console window causes display issues**

The NetQ Service Console does not handle resizing of the browser window well, which can cause text in the console to wrap.

To work around this issue, do not resize the browser window after you launch the console.
**Users added via the service console are not synchronized with telemetry server users**

Users created in the NetQ Service Console are not synchronized with the users on the NetQ Telemetry Server.

To work around this issue, you must manually sync the user IDs between the telemetry server and the service console.
**The NetQ CLI and NetQ Notifier are supported on x86 platforms only**

At this time, the NetQ command line interface and NetQ Notifier run on x86 switches and hosts only.
**When a node is rotten, various sensors commands fail with the error "Value s_input(0) should be of type(s) <type 'float'> but is of type <type 'int'>"**

If NetQ identifies a node as rotten, sensors commands for that node fail with this message.

To work around this issue, view sensors on individual hosts that are not rotten. For example:

```
[email protected]:~$ netq <hostname> show sensors all
```

This issue will be fixed in a future release of Cumulus NetQ.