Cumulus NetQ 1.4.0 Release Notes

Overview

These release notes support Cumulus NetQ 1.4.0, and describe currently available features and known issues.

Stay up to Date 

  • Please sign in and click Follow above so you can receive a notification when we update these release notes.
  • Subscribe to our product bulletin mailing list to receive important announcements and updates about issues that arise in our products.
  • Subscribe to our security announcement mailing list to receive alerts whenever we update our software for security issues.

{{table_of_contents}}

What's New in Cumulus NetQ 1.4.0

Cumulus NetQ 1.4.0 includes the following new features and improvements:

  • Added
    • support for monitoring up to 200 Cumulus Linux nodes
    • validation of symmetric VXLAN routes through the CLI
    • validation of forward error correction (FEC) operation through the NetQ Query Language (NetQL)
  • Updated
    • color cues for netq show services command to more easily view status of services at a glance
    • CLI syntax for creating NetQ Notifier filters to improve usability and operation
    • trace functionality to improve usability and operation

Early Access Support

NetQ 1.4.0 also includes these early access features:

  • NetQ Image and Provisioning Management: Manage the software images and provisioning scripts used by Cumulus Linux and NetQ.
  • Customize NetQ Commands: Codify automation scripts and extend NetQ with custom commands for use cases specific to your network.
  • NetQ Query Language: Search for even more NetQ data using the SQL-like NetQ Query Language (NetQL). Run your own custom analyses or simply extend NetQ functionality for your specific environment.
  • Collecting interface statistics: Collect statistics for network interfaces and display in third-party applications.

Licensing

Cumulus NetQ is licensed on a per-switch basis. For hosts, one license is required per rack. You should have received a license key from Cumulus Networks or an authorized reseller. 

Installing and Upgrading NetQ

To install or upgrade NetQ, read the deployment guide.

Documentation

You can read the technical documentation here.

Issues Fixed in Cumulus NetQ 1.4.0

The following issues from earlier Cumulus NetQ releases are fixed in Cumulus NetQ 1.4.0.

RN-694 (CM-18381, CM-18449, CM-18852)
The `netq show vlan` command does not return the status of an SVI

NetQ correctly showed that an SVI is present on a switch, but the netq show vlan command did not return its status. This issue is fixed in Cumulus NetQ 1.4.0.


RN-855 (CM-20057)
The `netq check evpn` command does not provide correct results

The netq check evpn command did not correctly validate the layer 3 EVPN configuration. The VNI membership check for symmetric routing did not take into account the type of VNI (layer 2 versus layer 3).

This issue is fixed in Cumulus NetQ 1.4.0.


RN-858 (CM-20383)
Docker connectivity and adjacency information is not shown

NetQ did not show connectivity or adjacency information for a given Docker node in a Swarm cluster.

This issue is fixed in Cumulus NetQ 1.4.0.


RN-859 (CM-20425)
The Telemetry Server GUI Portainer does not support user names with a dash (-)

If a username contains a dash, an error is seen when logging in through the Portainer GUI.

This issue is fixed in Cumulus NetQ 1.4.0.


RN-1118 (CM-22214)
Invalid VNI Consistency Check for LNV

The netq check lnv command indicated inconsistencies were present due to an apparently non-uniform VNI configuration across VTEPs. In fact, VNIs are added only to the VTEPs that require them, which is not a configuration inconsistency. The command no longer reports an inconsistency when the associated VNIs differ between VTEPs.

This issue is fixed in Cumulus NetQ 1.4.0.


RN-1119 (CM-21084, CM-21839)
NetQ Telemetry Server Running Out of Memory due to Excessive DB updates

Format errors that made the netq.yml configuration file unreadable could cause the netq-stats-pushd daemon to crash, filling the database and causing NetQ Agents to be marked as rotten (stale from a communication perspective). Configuration file read failures are now captured, and a warning message is issued in the netq-stats-pushd log indicating the configuration issue.

This issue is fixed in Cumulus NetQ 1.4.0.


RN-1120 (CM-21797)
Negative ASN shown in NetQ BGP data when ASN is larger than 2 billion

When an ASN was larger than two billion, it was displayed as a negative number in the output for BGP commands.
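The negative values were an artifact of reading a 4-byte ASN as a signed 32-bit integer. As a rough illustration (the ASN value is hypothetical and the arithmetic below is illustrative, not NetQ code):

```shell
# An ASN above 2^31 - 1 (2147483647) wraps to a negative number when
# interpreted as a signed 32-bit integer.
asn=4200000000                       # hypothetical 4-byte ASN
if [ "$asn" -gt 2147483647 ]; then
  echo $(( asn - 4294967296 ))       # signed 32-bit view: prints -94967296
else
  echo "$asn"
fi
```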

This issue is fixed in Cumulus NetQ 1.4.0.


RN-1121 (CM-21743, CM-21810, CM-21888)
Link or protocol flapping causes rapid DB growth

When links or routing protocols flap, a large number of changes (to routes and prefixes) are recorded in the database in a short period of time, causing the database to become full. A service on the Telemetry Server now monitors and trims these changes to avoid filling the database.

This issue is fixed in Cumulus NetQ 1.4.0.


RN-1122 (CM-21551, CM-21659)
`netq-stats-pushd` daemon traceback when older data is pushed into InfluxDB

InfluxDB retains data for one week. When data older than one week was sent to the database from the interface statistics daemon (netq-stats-pushd), InfluxDB refused the request, which could cause the daemon to crash.

This issue is fixed in Cumulus NetQ 1.4.0.


RN-1123 (CM-21166)
NetQ cannot read HTTP/HTTPS proxy settings in `/etc/environment`

The NetQ Notifier did not read HTTP or HTTPS proxy settings from /etc/environment, which are needed to integrate with third-party notification applications, such as Slack. Support for the http_proxy and https_proxy variables has been added. When a proxy is configured, a Notifier log message similar to the following is shown:

2018-06-08T22:09:20.641354+00:00 cumulus netq-notifier[14498]: INFO: netq-notifier: S1: Proxy used: {'http': 'http://root:<password>@192.168.50.30:3128/', 'https': 'https://root:<password>@192.168.50.30:3128/'}
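With the fix, the proxy variables can be set in /etc/environment. A sketch using the same proxy address as the log message above (the address and credentials are placeholders):

```
# /etc/environment (placeholder values)
http_proxy="http://root:<password>@192.168.50.30:3128/"
https_proxy="https://root:<password>@192.168.50.30:3128/"
```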

This issue is fixed in Cumulus NetQ 1.4.0.

Known Issues in Cumulus NetQ 1.4.0

The following is a list of known issues in the Cumulus NetQ 1.4.0 release.

RN-622 (CM-14421)
NetQ does not support traditional mode bridges

Traditional mode bridges are currently unsupported. To work around this issue, use a VLAN-aware bridge configuration.

This is a known issue that is currently being investigated.


RN-692 (CM-18623)
When running NetQ on a chassis, `netq show inventory` does not show ASIC and port information

When NetQ is running on a Facebook chassis, the netq show inventory brief command displays "N/A" in the ASIC and Ports columns.

cumulus@10-0-2-32:~$ netq show inventory brief
Matching inventory records:
Hostname        Switch    OS             CPU     ASIC    Ports
--------------  --------  -------------  ------  ------  -------
cel-bs02-fc1    FAB       Cumulus Linux  x86_64  N/A     N/A
cel-bs02-fc2    FAB       Cumulus Linux  x86_64  N/A     N/A
cel-bs02-lc101  LC        Cumulus Linux  x86_64  N/A     N/A
cel-bs02-lc102  LC        Cumulus Linux  x86_64  N/A     N/A
cel-bs02-lc201  LC        Cumulus Linux  x86_64  N/A     N/A
hosts-11        N/A       Ubuntu         x86_64  N/A     N/A
hosts-12        N/A       CentOS Linux   x86_64  N/A     N/A
hosts-13        N/A       Ubuntu         x86_64  N/A     N/A
hosts-14        N/A       CentOS Linux   x86_64  N/A     N/A
hosts-21        N/A       Ubuntu         x86_64  N/A     N/A
hosts-22        N/A       CentOS Linux   x86_64  N/A     N/A
hosts-23        N/A       Ubuntu         x86_64  N/A     N/A
hosts-24        N/A       CentOS Linux   x86_64  N/A     N/A

This is a known issue that is currently being investigated.


RN-693 (CM-18794)
When NetQ detects an auto-edge interface, traversing stops, resulting in an incomplete STP topology tree

Running netq show stp topology displays an incomplete STP topology tree. When NetQ detects an interface as auto-edge, it stops traversing before verifying with the other side whether or not it is an edge. In the example below, the topology tree from spine2 to edge1 and edge2 is incomplete:

cumulus@switch:~$ netq spine1 show stp topology
Root(spine1) -- spine1:sw_clag200 -- leaf1:EdgeIntf(sng_hst2) -- hsleaf11
                                 -- leaf1:EdgeIntf(dual_host2) -- hdleaf2
                                 -- leaf1:EdgeIntf(dual_host1) -- hdleaf1
                                 -- leaf1:ClagIsl(peer-bond1) -- leaf2
                                 -- leaf2:EdgeIntf(sng_hst2) -- hsleaf21
                                 -- leaf2:EdgeIntf(dual_host2) -- hdleaf2
                                 -- leaf2:EdgeIntf(dual_host1) -- hdleaf1
                                 -- leaf2:ClagIsl(peer-bond1) -- leaf1
            -- spine1:ClagIsl(peer-bond1) -- spine2
            -- spine1:sw_clag300 -- edge2:EdgeIntf(sng_hst2) -- hsedge21
                                 -- edge2:EdgeIntf(dual_host2) -- hdedge2
                                 -- edge2:EdgeIntf(dual_host1) -- hdedge1
                                 -- edge2:ClagIsl(peer-bond1) -- edge1
                                 -- edge1:EdgeIntf(sng_hst2) -- hsedge11
                                 -- edge1:EdgeIntf(dual_host2) -- hdedge2
                                 -- edge1:EdgeIntf(dual_host1) -- hdedge1
                                 -- edge1:ClagIsl(peer-bond1) -- edge2
Root(spine2) -- spine2:sw_clag200 -- leaf2:EdgeIntf(sng_hst2) -- hsleaf21
                                 -- leaf2:EdgeIntf(dual_host2) -- hdleaf2
                                 -- leaf2:EdgeIntf(dual_host1) -- hdleaf1
                                 -- leaf2:ClagIsl(peer-bond1) -- leaf1
                                 -- leaf1:EdgeIntf(sng_hst2) -- hsleaf11
                                 -- leaf1:EdgeIntf(dual_host2) -- hdleaf2
                                 -- leaf1:EdgeIntf(dual_host1) -- hdleaf1
                                 -- leaf1:ClagIsl(peer-bond1) -- leaf2
            -- spine2:ClagIsl(peer-bond1) -- spine1
            -- spine2:EdgeIntf(sw_clag300) -- edge1
            -- spine2:EdgeIntf(sw_clag300) -- edge2

This is a known issue that is currently being investigated.


RN-856 (CM-18940)
The NTP agent state does not sync when management VRF is enabled

The NTP Agent State column does not change to Yes after you move the NetQ Agent from the default VRF to the management VRF.

This is a known issue that is currently being investigated.


RN-857 (CM-18325)
No support for custom sentinel ports in HA mode

There is currently no support for custom sentinel ports in a high availability configuration; only the default port, 26379, is supported.

This is a known issue that is currently being investigated.


RN-1124 (CM-22562)
The `netq show services [active|monitored]` command does not parse correctly

When you run netq show services active or netq show services monitored, the command shows no active or monitored services, even when such services exist.

This is a known issue that is currently being investigated.


RN-1126 (CM-22567)
Layer 2 trace command requires VLAN keyword-value pair to operate

The syntax for the layer 2 trace command shows the VLAN keyword-value pair as optional, but it is required to run the command.

netq trace <mac> [vlan <1-4096>] from (<src-hostname>|<ip-src>) [vrf <vrf>] [around <text-time>] [bidir] [json|detail|pretty] [debug]

To work around this issue, provide the VLAN information when performing a layer 2 trace.
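For example, a layer 2 trace that includes the VLAN keyword-value pair, following the syntax above (the MAC address, VLAN ID, and hostname are hypothetical):

```
cumulus@switch:~$ netq trace 00:02:00:00:00:02 vlan 13 from leaf01
```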

This is a known issue that is currently being investigated.
