Thursday, 27 August 2020

Cisco DNA Spaces Indoor IoT Services with Wi-Fi 6 – Delivering Business Outcomes at Scale

Organizations today are facing unprecedented times, and the need to digitize physical spaces has never been more important.

To adapt to these new challenges, enterprises must shift toward a new, open and unified ecosystem that both (1) supports delivering outcomes at scale and (2) continues to provide the enterprise with control of their infrastructure and solution stack.

Cisco’s wireless infrastructure with Cisco DNA Spaces is a powerful framework to meet this new requirement. Wireless access points have evolved from pure connectivity devices into sensors that enable location services – and Cisco’s Wi-Fi 6 Certified Catalyst 9100 access points, powered by Cisco’s Catalyst 9800 controllers, can now serve as a powerful gateway not just for Wi-Fi devices but also for BLE asset tags, beacons, and other IoT end devices.

With Cisco DNA Spaces Indoor IoT Services, customers can take their wireless network beyond connectivity, digitize their physical spaces, and gain insights into the behavior of people – and now things. Currently supporting at least 500 million mobile devices, processing over 1 trillion location updates, and live across over 1 million access points, Cisco DNA Spaces continues to scale as it digitizes enterprises across industries.

Enabling Multiple Use Cases through an Open, Unified Platform


Location services solutions today face major challenges with complexity and limited ability to scale. The market is fragmented across proprietary solutions in which each new application requires its own disparate hardware and software, limiting flexibility and reusability.

Vendor-specific apps and hardware mean that there are separate touchpoints for monitoring and support, resulting in disjointed support models. As customers discover more use cases and deploy more IoT devices, they run into management pains with vendor lock-in and limited scalability.

To overcome this complexity and cost, we are excited to announce Cisco DNA Spaces Indoor IoT Services – an open and unified platform for ordering IoT devices, onboarding and configuring them, and connecting to industry-specific applications to enable business outcomes.

This offering will help customers deploy their applications rapidly, at scale, and at a significantly lower total cost of ownership (TCO).  This empowers enterprises to deploy multiple use cases such as asset management, room finding, space utilization, environmental monitoring, employee safety, and more, all enabled through a single middleware layer.

With Cisco DNA Spaces Indoor IoT Services, customers can deploy a broader spectrum of end devices, all without having to deploy separate gateways.

The IoT Device Marketplace features a broad spectrum of supported end devices ready for customers to order and deploy, with a wide choice of specialized beacons, tags, wristbands, badges, sensors, and more.

They can choose these devices based on their required use case, technology, form factor, and price. The device vendors are validated and integrated into the Cisco DNA Spaces end-to-end support model.

Customers can discover and order end devices through the IoT Device Marketplace.

Cisco DNA Spaces also has an ecosystem of partner applications that are easy to activate. The Cisco DNA Spaces App Center features industry-specific partner applications that leverage the location data from Cisco DNA Spaces, delivered over the Firehose API, to drive business outcomes across healthcare, workspaces, retail, hospitality, education, and manufacturing.
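As a rough illustration of what consuming that stream can look like, here is a minimal Python sketch that reads newline-delimited JSON events from a Firehose-style endpoint. The URL, header name, and event fields below are placeholders rather than the documented API; consult the Cisco DNA Spaces partner documentation for the actual contract.

import json

import requests

# Placeholder endpoint, header, and field names for illustration only.
FIREHOSE_URL = "https://partners.dnaspaces.io/api/partners/v1/firehose/events"
API_KEY = "your-partner-api-key"

# The Firehose is a long-lived HTTP stream; read it line by line.
with requests.get(FIREHOSE_URL, headers={"X-API-Key": API_KEY}, stream=True) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines():
        if not line:
            continue  # skip keep-alive blank lines
        event = json.loads(line)
        if event.get("eventType") == "DEVICE_LOCATION_UPDATE":
            print(event)  # hand off to your partner application here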

Discover vertical specific partner applications on the Cisco DNA Spaces App Center.

Wi-Fi 6 Access Points with Dynamic Gateways


Cisco’s Wi-Fi 6 certified Catalyst 9100 access points can now host the Cisco DNA Spaces Advanced Gateway, deployed through Indoor IoT Services. This gateway enables management of BLE beacons and asset tags. The access points also come standard with a BLE radio, allowing them to scan, detect telemetry, transmit, and receive location information from various BLE end devices.

This decouples devices and applications, meaning customers can enable multiple applications with a wide range of devices without having to worry about vendor compatibility. It also removes the need for overlay networks, so customers won’t have to deploy separate gateways.

End-to-End as a Service


Cisco DNA Spaces Indoor IoT Services is an end-to-end, as-a-service offering that greatly simplifies the activation, configuration, monitoring, and management of IoT end devices from different vendors. You can discover devices from your network, activate them, and group them by asset, use case, and device type.

Device management is made simple with the ability to apply policy-based configurations to the device groupings.

Apply policies to device groups, based on use case or asset.

End-to-end monitoring and support capabilities are also being expanded to include the end devices, in addition to Cisco DNA Spaces location data, access points, and partner applications. Monitoring now includes device battery level, last-heard time, and firmware version to ensure that your end devices are working optimally.

Monitor devices through the dashboard and get proactive alerts on which devices require attention.

Tuesday, 25 August 2020

Multi-Site Data Center Networking with Secure VXLAN EVPN and CloudSec

Transcending Data Center Physical Needs


Maslow’s Hierarchy of Needs illustrates that humans must fulfill base physiological needs—food, water, warmth, rest—in order to pursue higher levels of growth. When it comes to the data center and Data Center Networking (DCN), meeting physical infrastructure needs is the condition on which the next higher-level capabilities—safety and security—are constructed.

Satisfying the physical needs of a data center can be achieved through the concepts of Disaster Avoidance (DA) and Disaster Recovery (DR).

◉ Disaster Avoidance (DA) can be built on a redundant Data Center configuration, where each data center is its own Network Fault Domain, also called an Availability Zone (AZ).

◉ Building redundancy between multiple Availability Zones creates a Region.

◉ Building redundant data centers across multiple Regions provides a foundation for Disaster Recovery (DR).

Availability Zones within a Region

Availability Zones (AZ) are made possible with a modern data center network fabric built on VXLAN BGP EVPN. The interconnect technology, Multi-Site, is capable of securely extending data center operation within and between Regions. A Region can consist of connected and geographically dispersed on-premises data centers and the public cloud. If you are interested in more details about DA and DR concepts, watch the Cisco Live session recording “Multicloud Networking for ACI and NX-OS Enabled Data Center Fabrics“.

With the primary basic need for availability through the existence of DA and DR in regions achieved, we can investigate data center Safety needs as we climb the pyramid of Maslow’s hierarchy.

Safety and Security: The Second Essential Need


The data center is, of course, where your data and applications reside—email, databases, websites, and critical business processes. With connectivity between Availability Zones and Regions in place, data is at risk of exposure once it moves outside the confines of on-premises or colocation facilities, because transfers between Availability Zones and Regions generally have to travel over public infrastructure. The need for such transfers is driven by the requirement for highly available applications backed by redundant data centers. As data leaves the confines of the data center via an interconnect, safety measures must ensure the Confidentiality and Integrity of these transfers to reduce the exposure to threats. Let’s examine the protocols that make secure data center interconnects possible.

DC Interconnect Evolves from IPSec to MACSec to CloudSec


About a decade ago, MACSec (IEEE 802.1AE) became the preferred method of addressing Confidentiality and Integrity for high-speed Data Center Interconnects (DCI). It superseded IPSec because it was natively embedded into the data center switch silicon (CloudScale ASICs). This enabled encryption at line rate with minimal added latency or packet size overhead. While these advantages were an advancement over IPSec, MACSec has a shortcoming: it can only be deployed between two adjacent devices. When Dark Fiber or xWDM is available among data centers this is not a problem. But often such a fully transparent and secure service is too costly or simply not available. In these cases, the choice was to revert to the more resource-consuming IPSec approach.

The virtue of MACSec paired with the requirements of Confidentiality, Integrity, and Availability (CIA) results in CloudSec. In essence, CloudSec is MACSec-in-UDP using Transport Mode, similar to ESP-in-UDP in Transport Mode as described in RFC 3948. In addition to transporting MACSec-encrypted data over IP networks, CloudSec carries a UDP header for entropy as well as an encrypted payload for Network Virtualization use cases.
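To visualize the layering, here is a short Python sketch using scapy to build a plain VXLAN-in-UDP packet between two Border Gateways (addresses and VNI are made up for illustration). CloudSec keeps this outer IP/UDP transport but inserts a MACSec-style security header after the UDP header and encrypts everything from there inward, i.e. the VXLAN header and the inner frame.

from scapy.all import Ether, IP, UDP
from scapy.layers.vxlan import VXLAN

# Inner tenant frame carried by the overlay (placeholder addresses).
inner = Ether(src="00:00:00:aa:aa:aa", dst="00:00:00:bb:bb:bb") / IP(src="192.168.10.1", dst="192.168.20.1")

# Plain VXLAN encapsulation between two site BGWs. With CloudSec, a
# MACSec-in-UDP security header would follow the UDP header, and the
# VXLAN header plus inner frame would be encrypted.
frame = (
    IP(src="10.1.1.1", dst="10.2.2.2")
    / UDP(sport=12345, dport=4789)  # UDP source port provides entropy
    / VXLAN(vni=30000)
    / inner
)
frame.show()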

CloudSec carries an encrypted payload for network virtualization.

Other less efficient attempts were made to achieve similar results using, for example, MACSec over VXLAN or VXLAN over IPSec. While secure, these approaches just stack encapsulations and incur higher resource consumption. CloudSec is an efficient and secure transport encapsulation for carrying VXLAN.

Secure VXLAN EVPN Multi-Site using CloudSec


VXLAN EVPN Multi-Site provides a scalable interconnectivity solution among Data Center Networks (DCN). CloudSec provides transport and encryption. The signaling and key exchange that Secure EVPN provides is the final piece needed for a complete solution.

Secure EVPN, as documented in the IETF draft “draft-sajassi-bess-secure-evpn”, describes a method of leveraging the EVPN address-family of Multi-Protocol BGP (MP-BGP). Secure EVPN provides a similar level of privacy, integrity, and authentication as Internet Key Exchange version 2 (IKEv2). BGP provides a point-to-multipoint control-plane for signaling encryption keys and policy exchange between the Multi-Site Border Gateways (BGW), creating pair-wise Security Associations for the CloudSec encryption. While there are established methods for signaling the creation of Security Associations, as with IKE in IPSec, these methods are generally based on point-to-point signaling, requiring the operator to configure pair-wise associations.

A VXLAN EVPN Multi-Site environment enables any-to-any communication between sites. This full-mesh communication pattern requires the pre-creation of the Security Associations for CloudSec encryption. Leveraging BGP and a point-to-multipoint signaling method is more efficient, given that the Security Associations stay pair-wise.
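The scaling pressure behind this design is easy to see: pair-wise Security Associations across a full mesh grow quadratically with the number of Border Gateways. A quick back-of-the-envelope illustration in Python:

from itertools import combinations

# Pair-wise Security Associations needed for a full mesh of CloudSec BGWs.
bgws = ["BGW-1", "BGW-2", "BGW-3", "BGW-4"]
sa_pairs = list(combinations(bgws, 2))
print(len(sa_pairs), sa_pairs)  # 6 pairs for 4 BGWs, i.e. n*(n-1)/2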

Secure VXLAN EVPN Multi-Site using CloudSec provides state-of-the art Data Center Interconnect (DCI) with Confidentiality, Integrity, and Availability (CIA). The solution builds on VXLAN EVPN Multi-Site, which has been available on Cisco Nexus 9000 with NX-OS for many years.

Secure VXLAN EVPN Multi-Site is designed to be used in existing Multi-Site deployments. Border Gateways (BGW) using CloudSec-capable hardware can provide the encrypted service to communicate among peers while continuing to provide the Multi-Site functionality without encryption to the non-CloudSec BGWs. As part of the Secure EVPN Multi-Site solution, the configurable policy enables enforcement of encryption with a “must secure” option, while a relaxed mode is present for backwards compatibility with non-encryption capable sites.

Secure VXLAN EVPN Multi-Site using CloudSec is available on the Cisco Nexus 9300-FX2 as of NX-OS 9.3(5). All other Multi-Site BGW-capable Cisco Nexus 9000s are able to interoperate when running Cisco NX-OS 9.3(5).

Configure, Manage, and Operate Multi-Sites with Cisco DCNM


Cisco Data Center Network Manager (DCNM), starting with version 11.4(1), supports the setup of Secure EVPN Multi-Site using CloudSec. The authentication and encryption policy can be set in DCNM’s Fabric Builder workflow so that the necessary configuration settings are applied to the BGWs that are part of a respective Multi-Site Domain (MSD). Since DCNM is backward compatible with non-CloudSec capable BGWs, they can be included with one click in DCNM’s web-based management console. Enabling Secure EVPN Multi-Site with CloudSec is just a couple of clicks away.

Monday, 24 August 2020

Simplify IoT Edge-to-Multi-Cloud Data Flow with Cisco Edge Intelligence

DevNet is always looking for ways to help you do business smarter. And with our new IoT Edge Intelligence tools, you can now get your data directly from the network edge to the cloud or to your own data center. Read on to learn how.

Connect assets at the edge to multi-cloud application destinations


Cisco recently made its brand-new IoT data orchestration software – Edge Intelligence – publicly available. Edge Intelligence (EI) connects assets at the edge to multi-cloud application destinations securely, reliably, and consistently.

The software integrates nicely with Cisco’s industrial networking and compute devices, which means that it already runs on some IOx-capable devices (IR829, IR809, IC3000, and more to come very soon!). But today, you can get EI as a SaaS, where the user can manage assets, data policies, and data destinations via a centralized UI that enables remote deployment at scale.

Here at DevNet, we wanted to make EI fun and easy. So, you can now test, learn, and get hands-on with EI through our new Learning Lab and DevNet Sandbox.

How it works


Edge Intelligence is built on four pillars:

1. Data Extraction: You can automatically ingest data from any edge sensor using built-in, industry-standard connectors residing on Cisco network equipment. Supported sensor protocols include OPC-UA, Modbus (TCP/IP and Serial RTU), and MQTT (see the MQTT sketch after this list).

2. Data Transformation: You can create intelligent, business-ready tasks using policies to filter, compress, or analyze data using real-time computing. Edge Intelligence supports creating these data logic scripts using industry-standard IDE tools (e.g., Microsoft VSCode).

3. Data Governance: You can create a central point of control with the authority and security to determine who has access to data and where that data may be accessed. Edge Intelligence allows for policy control at the device and attribute level on raw or transformed data.

4. Data Delivery: You can choose which data is sent to which analytics destinations, with seamless integration with cloud providers, including Azure IoT Hub, and standard MQTT-based destinations such as Quantela and Software AG.
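To give a flavor of the kind of MQTT telemetry Edge Intelligence can ingest, here is a minimal standalone subscriber using the paho-mqtt library (1.x-style API). The broker address, topic, and payload format are placeholders; this illustrates the protocol itself, not the EI connector configuration.

import json

import paho.mqtt.client as mqtt

BROKER = "broker.example.com"        # placeholder broker
TOPIC = "factory/line1/+/telemetry"  # placeholder topic

def on_message(client, userdata, msg):
    # Assume sensors publish JSON payloads on the telemetry topics.
    reading = json.loads(msg.payload)
    print(f"{msg.topic}: {reading}")

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TOPIC)
client.loop_forever()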

Here’s an easy way to find out more about how Edge Intelligence works. In my August 26th webinar, we will show you how you can create your asset types, asset inventory, and data policies within just a few minutes, and send data from the edge to your MQTT broker or preferred cloud hosting service. We will also showcase creating data logic scripts for data transformation using the industry-standard IDE tool Visual Studio Code.

Source: cisco.com

Saturday, 22 August 2020

Top 10 Challenges Solved by SAN Analytics

Delivering high levels of performance for enterprise storage environments is a key objective for CIOs and CTOs, often measured in millions of Input/Output Operations Per Second (IOPS) and microsecond response times.

Production storage environments are extremely complicated, however, so optimizing performance is akin to solving a multidimensional equation with variables that constantly interact with each other. Without visibility into those interactions, you’re left with best-effort optimization. The likely consequence is neither extracting maximum value from the storage investment nor delivering maximum performance to the business.

The solution? SAN analytics. IBM c-type SAN Analytics is the industry’s first and only integrated-by-design architecture, providing deep visibility into SCSI and NVMe traffic at scale. Below, I’ve listed the top 10 challenges that can be solved with IBM c-type SAN Analytics.

#1: Find the slowest storage and host ports in the entire network fabric.


Proactively identify storage and host devices (or ports) causing bottlenecks and affecting application performance. Storage admins often look for ports in the path of slow IO transactions, defined as longer IO or Exchange Completion Time (ECT), which is the time to complete read or write transactions.

#2: Identify the busiest storage and host ports in the entire network fabric.


You can monitor busy devices and proactively plan capacity expansion to address high-usage ports before they affect application performance. Note that knowing a device is busy solves a different problem than knowing it is slow.

#3: Discover if poor application performance is due to storage access issues.


If ECT increases, slow storage access may be the cause for application performance degradation. If ECT does not change, you can safely rule out storage access issues and focus your troubleshooting on other infrastructure components.

#4: Determine the cause of storage access issues: storage array, SAN, or host.


If you determine that application performance issues are due to slow storage access, IBM c-type SAN Analytics can pinpoint where this slowdown occurs: within the storage array, in the host, or due to congestion (or slow drain) within the SAN.

#5: Verify multipath (MPIO).


To help optimize storage usage and avoid unplanned downtime, end-to-end paths between a host and storage are proactively monitored to prevent potential multipath issues. IBM c-type SAN Analytics can help you detect when not all paths are active, or when their utilization is not uniform.

#6: Establish higher levels of visibility between Cisco® UCS Servers and vHBA traffic.


Expedite issue remediation and improve SLAs by getting end-to-end visibility between blade servers, SAN, and storage LUNs. IBM c-type SAN Analytics provides visibility into the vHBA traffic of the Cisco UCS servers by inspecting the frame headers that carry FCID of the server vHBA. Finally, DCNM SAN Insights correlates the initiator FCID to the WWPN of the server vHBA and the host enclosure.

#7: Stop the guesswork by implementing a data-driven storage vMotion.


Use the throughput, IOPS, and other information from IBM c-type SAN Analytics to optimize an application’s underlying infrastructure and make informed, data-driven decisions on moving storage-hungry VMs to lesser-used physical hosts or paths. This helps improve SLAs while reducing overall infrastructure costs.

#8: Verify and optimize the usage of storage array ports.


Confirm uniform usage and obtain detailed metrics at a LUN level to help make informed corrective actions. For example, if one port of a storage array is 70% utilized, while another port is only 30% utilized, you can move the LUN association to better balance the load.

#9: Enable change management and verification.


Hardware and software changes are ongoing in data centers (e.g., replacing faulty SFPs and cables, upgrading HBAs, performing software upgrades, and applying patches). Proper verification of changes can be challenging, due to a lack of end-to-end visibility. End-to-end monitoring via DCNM SAN Insights, run both before and after a change, can be used to prevent unplanned downtime, increasing customer satisfaction.

#10: Obtain automatic baseline and deviations.


Use advanced analytics, end-to-end correlations, and long-term trending to help make operations more proactive and productive. One of the biggest challenges facing storage admins is understanding the difference between good and good enough. As an example, just knowing the absolute value of ECT is of little help. Instead, you can use DCNM SAN Insights to learn the ECT of an IO flow, establish baselines, and calculate deviations automatically from the baseline.
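Conceptually, the baseline-and-deviation logic looks like the following Python sketch, an illustration of the idea rather than DCNM’s implementation:

import statistics

# ECT samples for one IO flow, in microseconds (made-up values).
ect_samples = [410, 395, 420, 405, 398, 415, 402, 1250]

# Learn a baseline from the earlier samples, then flag outliers beyond
# three standard deviations.
history = ect_samples[:-1]
baseline = statistics.mean(history)
sigma = statistics.stdev(history)

for ect in ect_samples:
    if abs(ect - baseline) > 3 * sigma:
        print(f"ECT {ect} us deviates from baseline {baseline:.0f} us")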

Friday, 21 August 2020

T-Mobile’s 5G Hype is Real. And Cisco is at the Heart of It

Seems like we see news about 5G rollouts every week, with networks being “lit up” left and right. But on August 4, there was a really big announcement from T-Mobile about their launch of the world’s first nationwide standalone (SA) 5G network. So, why all the hype with this one?

Simple. T-Mobile’s 5G SA is pure 5G throughout the network, providing the high bandwidth, blazing speeds, and low latency that is the transformational promise of 5G. And the numbers speak for themselves. This new network expands its 5G coverage by nearly 30%, covering almost 250 million people in more than 7,500 cities and towns across 1.3 million square miles. T-Mobile’s deployment of 5G SA is truly orders of magnitude larger than any other. According to Ookla’s latest report, T-Mobile has the largest 5G footprint in the U.S., with 14 times more 5G sites than AT&T and 140 times more than Verizon. Those are some pretty impressive statistics. 5G SA is the future and Cisco is proud to partner with T-Mobile to deliver this next-gen connectivity to the entire nation.

What’s the difference between 5G Standalone and 5G Non-Standalone?


Any consumer considering their next mobile purchase or upgrade options should be asking this question. And, so should businesses looking for a mobile operator to partner with on their own 5G digital transformation. Simply put, with 5G Standalone, the whole network is 5G – the radio, the core, all of the speeds and benefits. 5G Non-Standalone (NSA) uses 5G radio over the existing 4G network. And while 5G NSA can provide improved speeds, it’s generally accepted that 5G SA is where the real transformation happens.

It’s not just a big deal for T-Mobile. For Cisco, being one of the key partners in this new network is huge. We’re proud to be supplying the critical core cloud-native functions that differentiate between a 5G bridge (NSA) and pure 5G (SA). Let’s take a closer look at the technology Cisco is providing in the world’s first nationwide 5G SA network.

•  User Plane Function (UPF): Positioned between the radio network and the core, this is arguably the most strategic control point (in-line service control) in the 5G SA network. UPF is responsible for packet routing and forwarding, packet inspection, QoS handling, and the external PDU session point of interconnect to the Data Network (DN) in the 5G architecture. UPF also opens the GTP encapsulation, exposing the IP packets so that your mobile traffic can be properly routed and your quality expectations (QoS) understood and met (see the sketch after this list). UPF is essential for MEC and peering in the mobile network.

•  Session Management Function (SMF): This function provides the mobile core gateways. All mobile data sessions – like video calls and streaming – are managed via the SMF. If the mobile core is the heart of the 5G network, then the SMF is the heart of the mobile core. Both the 5G UPF and SMF are deployed as an evolution of the control/user plane separation (CUPS) capability that Cisco has made available to our customers since 2016.

•  Policy Control Function (PCF): As the name suggests, PCF is the network policy control point. This function supports the unified policy framework, governing network behavior. It provides policy rules to control and enforce plane functions.
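To make the UPF’s decapsulation role concrete, here is a small Python sketch using scapy’s GTP contrib module to build a GTP-U encapsulated user packet (addresses and TEID are placeholders). The UPF strips the outer IP/UDP/GTP-U headers to reach the subscriber’s IP packet, which is what lets it route, inspect, and apply QoS to that traffic.

from scapy.all import IP, UDP
from scapy.contrib.gtp import GTP_U_Header

# The subscriber's actual IP packet (placeholder addresses).
user_packet = IP(src="10.60.0.5", dst="203.0.113.10")

# GTP-U tunnel between the radio network and the UPF; 2152 is the GTP-U port.
tunneled = (
    IP(src="192.0.2.1", dst="192.0.2.2")
    / UDP(dport=2152)
    / GTP_U_Header(teid=0x1234)
    / user_packet
)
tunneled.show()  # the UPF removes the first three layers to expose user_packet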

Why Cisco for 5G SA?


The list is long…

•  Cisco is a leading technology driver for 5G. We’ve committed $5 billion in 5G funding to help service providers evolve to this next generation of mobility.

•  We’re active in all of the key mobile standards development and recommendations.

•  We introduced the first cloud-to-client software-defined 5G architecture.

•  We’re opening the last proprietary segment of the mobile network with Open vRAN.

•  Our 5G mobility products are cloud-native and designed to monetize 5G service, reduce costs, and mitigate risks.

•  Our mobile core products leverage Vector Packet Processing (VPP) technology, which delivers the fastest processing of traffic and network functions without requiring third-party plug-ins or add-ons.

Another key reason to choose Cisco is that we continuously improve our software to increase performance and drive value. This means we can move mobile traffic between client and services as efficiently as possible. Whether it’s video conferencing (like WebEx), transmitting vehicular analytics, video streaming, or voice, it’s all data plane traffic. This is the traffic that Cisco’s 5G cloud-native packet core will provision and deliver across the T-Mobile 5G SA network. Our focus here ensures that we build best-in-class products and solutions that deliver on efficiency, customer support, and scale.

Building a 5G SA network is a major undertaking and investment. Service providers can feel confident that Cisco’s cloud-native 5G architecture drives the greatest value from the major investment that operators must make in spectrum and radio. With Cisco’s 5G architecture, the network is defined by the applications and services, not by the access technology.

Cisco solutions are designed to meet your customers’ needs and create new revenue, and we work hand in hand with you and your customers to make sure you get the greatest return on your investment. At the end of the day, that’s really the ultimate outcome – maximizing return, profitability, and value from your 5G investment. We join T-Mobile in celebrating the world’s first nationwide SA 5G network and look forward to seeing the impact it’s sure to make on millions of people.

Thursday, 20 August 2020

Network Automation and the Ingenuity of Data Models

Network automation has evolved on Cisco switches through various features and protocols over the years. Network architects typically take a multi-pronged approach to network automation which includes aspects of network provisioning and configuration that can be automated using scripts and tools. In addition, telemetry data and operational data from devices can be used to further automate tasks and close the loop on intent-based networking. Having a suite of network automation capabilities in the enterprise is critical for innovation and continues to be a powerful investment going forward.

“We will also accelerate our investments in the following areas: cloud security; cloud collaboration; key enhancements for education, healthcare, and other industries; increased automation in the enterprise; the future of work; and application insights and analytics.”

The move to network automation, quite like the move from manual transmission to automatic transmission in automobiles, can be met with strong preferences for one way versus the other! While there are several applications for which we may prefer to use manual CLI methods, it is important to understand the value and capabilities of network automation on our switches. Many modern automation tools also use CLI on a bash shell for scripting and execution, with well-defined templates being integrated into a GUI. Once we get a handle on how to automate functions for network deployment in an efficient, predictable and consistent way, it becomes easy to apply these methods where they are most relevant. With that, let’s shift into drive and get started!

Network Automation with Open NX-OS


The introduction of model-based network programmability on our switches in recent years can be considered trailblazing in how we automate network functions. The paradigm shift to data models on our switches makes network automation a reality with the use of managed objects and their associated constructs using different toolchains. Cisco NX-OS now has new capabilities with OpenConfig and gRPC Network Management Interface (gNMI) support to provide an open and model-driven facility to automate data center networks. Open NX-OS also offers different methods of API abstractions that allow us to automate key network functions with simple Python scripts.

In this article, we are going to cover two new frameworks for network automation using Open NX-OS methods, based on Python 3:

1. PyDME: provides Python abstraction using Cisco DME and REST API methods
2. cisco-gnmi: wraps gNMI implementation using OpenConfig and gNMI/gRPC methods

We will illustrate the use of these tools with a simple example using the IEEE protocol LLDP (Link Layer Discovery Protocol). We would like to detect Linux hosts connected to a switch and automatically configure the associated ports using a pre-defined template. In this scenario, we parse through the LLDP neighbors of a switch. If we find a Linux host attached to an ethernet port, we configure that port as a trunk. With the use of data models, we illustrate how this can be done with just a few lines of code, using the object structure to extract and manipulate the very specific attributes of configuration as desired. This basic example can be extrapolated to more complex deployment scenarios.

Both PyDME and cisco-gnmi consist of libraries that are installed off-box. They completely abstract the methods used to access NX-OS switches, retrieve data, and apply configuration. While PyDME accesses the NX-OS managed objects through the Data Management Engine (DME) using REST API methods, the cisco-gnmi tool leverages OpenConfig and device YANG data models with gNMI/gRPC methods. It is not really fair to compare the two methods; however, I will highlight how each of them can be used to solve the same task. I’ll provide pointers to the actual code that implements this example and illustrate the value of the two toolchains in this article. All code is written in Python 3.

Note: These are not officially supported Cisco products, but can be used to streamline implementation with released Cisco NX-OS features such as REST API and gNMI.

Method 1: Open NX-OS Automation with PyDME

PyDME is a tool that provides a Python abstraction over the REST API, using Cisco DME to access managed objects. It provides API constructs to access the switch and configure it. The library is available at the repository linked here.

To use it, install the library onto a host that has connectivity to your switches. Then set up a simple script in Python to perform the required task. The script runs on the host and uses the PyDME library to configure your switches and retrieve configuration and operational data from them using REST methods.

Switch Configuration

The example we use includes a Nexus 9000 with NX-OS Release 9.3(5). We have “feature nxapi” enabled on the switch. For our specific example, we also have “feature lldp” enabled, but this configuration can be included within the automation script.

Installation on the Host

PyDME can be installed on your host using a Docker install or a pip install (pip3 where appropriate). The required packages are installed, and you can optionally also install the associated utils to retrieve information about the managed object tree.

Code Constructs

To code your script, it’s helpful to understand the different constructs that PyDME uses.  These API constructs achieve tasks that would otherwise be done via REST API.

Node: To begin with, we define a node, which abstracts the switch we are about to access. The node is specified using the REST URL for the IP address of the switch. There are two associated methods used to access the switch: Login and LoginRefresh. Login is used to access the switch using a POST() method. LoginRefresh uses a GET() operation to prevent the session from timing out. Once we establish access to the switch using the username and password, we can then begin to apply REST API calls.

my_switch = Node(host_url)
result = my_switch.methods.Login(username, password).POST()

Managed Objects: We instantiate DME managed objects locally from the node using the “mit”, which represents the Managed Information Tree (MIT). PyDME requires a thorough understanding of DME and its data models. It is important to note that each time the “mit” property is invoked on a node, it generates a different Managed Information Tree, which the PyDME script uses as a local cache. Once this is done, we can use GET/POST/DELETE methods to retrieve, post, and delete data, respectively, using their corresponding REST operations.

Here is a snippet of the code for a GET operation; a POST example follows below. The GET() method illustrated here queries the lldpAdjEp model and all its children, which include information about the switch’s LLDP neighbors.

mit = my_switch.mit
lldp_neighbors = mit.GET(**options.subtreeClass('lldpAdjEp'))

If we stop here and look at the structure of data in lldp_neighbors, this is what it looks like:

"lldpAdjEp": {
"attributes": {
"capability": "bridge,router,station,wlan",
"chassisIdT": "mac",
"chassisIdV": "0050.56b4.4bf0",
"childAction": "",
"dn": "sys/lldp/inst/if-[eth1/3]/adj-1",
<--- snip --->
"sysDesc": "Ubuntu 18.04.4 LTS Linux 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64",
"sysName": "dirao-lnx1.aci.local"
}
}

We will iterate through all the lldpAdjEp instances and extract the interface ID from the “dn” attribute when the sysDesc attribute matches the string ‘Linux’.
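In sketch form, the iteration can look like this (assuming, for illustration, that each returned instance exposes its attributes as Python properties; the exact accessors depend on PyDME’s object model):

import re

linux_ifs = []
for adj in lldp_neighbors:
    # sysDesc identifies the neighbor OS; dn encodes the local interface,
    # e.g. "sys/lldp/inst/if-[eth1/3]/adj-1" yields "eth1/3".
    if "Linux" in adj.sysDesc:
        match = re.search(r"if-\[(.+?)\]", adj.dn)
        if match:
            linux_ifs.append(match.group(1))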

If you’re wondering how to know which model to use, the DME model reference is a good resource. In parallel, the PyDME repository includes a util called buildMoTree.py that allows us to find the model and attributes we desire.

ciscoprep@Ubuntu-host:~/pydme/utils$ python3 buildMoTree.py ../archive/dme-9.3.5-meta.json lldpAdjEp | grep -A 3 properties
properties of lldpAdjEp:
['capability', 'chassisIdT', 'chassisIdV', 'childAction', 'dn', 'enCap', 'id', 'mgmtId', 'mgmtIp', 'mgmtPortMac', 'modTs', 'monPolDn', 'name', 'persistentOnReload', 'portDesc', 'portIdT', 'portIdV', 'portVlan', 'rn', 'stQual', 'status', 'sysDesc', 'sysName', 'ttl']

Once we detect a Linux neighbor, we will set the configuration of the associated port as a trunk using POST(). To do this, we will need to re-initialize “mit” since we access “InterfaceEntity”, which is in a different branch of the Managed Information Tree. Here, we can modify the required attributes as part of the POST() method.

if_status = mit.topSystem().interfaceEntity().l1PhysIf(lldp_if)
if_status.mode = 'trunk'
if_status.trunkVlans = '1 - 512'
result_config = if_status.POST()

Now, we’re going to execute the script on our host and then verify the switch configuration for interface eth1/3.

ciscoprep@Ubuntu-host:~$ python3 pyDME-neighbor-trunk.py
We will set eth1/3
Ubuntu 18.04.4 LTS Linux 4.15.0-101-generic #102-Ubuntu SMP Mon May 11 10:07:26 UTC 2020 x86_64
eth1/3 has been configured as a trunk

Method 2: Open NX-OS Automation with gNMI and OpenConfig

My previous blog post referenced the gNMI support we have on Nexus 9000 switches with OpenConfig and YANG. We discussed the different gRPC operations supported with gNMI, namely, CapabilitiesRequest, GetRequest, SetRequest and SubscribeRequest. We then illustrated the process of using gNMI Subscribe to subscribe to telemetry data on the switch and stream it to an open source collector, Telegraf. In this article, we will describe a tool that abstracts Capabilities, Get, Set and Subscribe using gNMI and OpenConfig. We are going to illustrate the same example above with LLDP and use this tool to “Get” and “Set” our data using gNMI.

The repository for this tool can be found on GitHub here. The “cisco-gnmi-python” library wraps the gNMI implementation to facilitate using Python programs with different Cisco implementations (IOS-XE, IOS-XR, and NX-OS). It also includes a CLI form of the tool, which can be used to exercise gNMI functionality without writing a Python script.

We will briefly go over the two methods here and leave you with a reference to the complete code.

Switch Configuration

In order to set up the switch for gNMI, we will need to follow the same steps as we did in the gNMI Subscribe example (refer to Steps 1 and 3). This includes installing the RPM packages for OpenConfig and configuring gRPC on the switch. Once we have the gRPC certificates installed on the switch, we also need to copy the certificate file (public key) onto our host where we will be installing the Cisco gNMI tool. In addition, since our example is based on LLDP, we enable “feature lldp”.

Installation on the host

Installation of the library on your host can be done with a pip or pip3 install like we did for PyDME. Once we have finished installing all the packages, we will be able to use cisco-gnmi both as a library with Python scripts as well as with the CLI tool.

pip3 install cisco-gnmi
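For the library form, connection setup follows the package’s ClientBuilder pattern. The sketch below mirrors the CLI parameters used in the examples that follow; method names can vary between releases of cisco-gnmi, so treat it as indicative rather than definitive.

from cisco_gnmi import ClientBuilder

# Placeholder address, certificate, and credentials matching the CLI
# examples below; adjust for your environment.
builder = ClientBuilder("172.25.74.84:50051")
builder.set_os("NX-OS")
builder.set_secure_from_file("./gnmi.pem")    # gRPC certificate from the switch
builder.set_ssl_target_override("ciscoprep")  # must match the certificate CN
builder.set_call_authentication("admin", "password")
client = builder.construct()

# gNMI Capabilities: models, encodings, and gNMI version from the switch.
print(client.capabilities())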

gNMI CLI

Here is an example of how we can retrieve gNMI Capabilities using cisco-gnmi. This gives us information from the switch about the gNMI versions it uses and the data models and encodings it supports. The -ssl_target_override parameter overrides the hostname used for certificate validation. We also specify the credentials to access the switch, including the certificate and the gRPC port number configured on the switch.

ciscoprep@Ubuntu-host:~$ cisco-gnmi capabilities -os NX-OS -root_certificates ./gnmi.pem -ssl_target_override dirao 172.25.74.84:50051
Username: admin
Password:
<--- snip --->
supported_models {
    name: "openconfig-lldp"
    organization: "OpenConfig working group"
    version: "0.2.1"
}
<--- snip --->

The following CLI can be used to retrieve data from the switch using gNMI Get, for example.

ciscoprep@Ubuntu-host:~$ cisco-gnmi get -encoding JSON -data_type STATE -os NX-OS -root_certificates ./gnmi.pem -ssl_target_override ciscoprep -xpath "/interfaces/interface[name='eth1/1']" 172.25.74.84:50051

It specifies the path, type and encoding for which data is requested. As we can see with the xpath definition, we are using OpenConfig as the underlying data model to retrieve the state of an interface (the tool also supports device YANG as the data model which is specific to NX-OS). The type of information being retrieved could be config, state or all in NX-OS. In this example, we specify JSON as the encoding since it’s what is currently supported on NX-OS.

Use the example below to try a gNMI Set operation to update, replace or delete configuration on switches.

ciscoprep@Ubuntu-host:~$ cisco-gnmi set 172.25.74.84:50051 -os NX-OS -root_certificates ./gnmi.pem -ssl_target_override ciscoprep -update_json_config ./int_trunk.json

We specify the configuration to be applied in a JSON file called int_trunk.json. The SetRequest operation typically includes a path similar to the above example, along with a value, which is the data to be applied on the switch.
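To give an idea of what such a file could contain, here is a hypothetical int_trunk.json using the OpenConfig switched-vlan model to set a trunk on eth1/3. The exact structure and VLAN-range syntax should be validated against the models your switch reports in Capabilities.

{
    "openconfig-interfaces:interfaces": {
        "interface": [
            {
                "name": "eth1/3",
                "openconfig-if-ethernet:ethernet": {
                    "openconfig-vlan:switched-vlan": {
                        "config": {
                            "interface-mode": "TRUNK",
                            "trunk-vlans": ["1..512"]
                        }
                    }
                }
            }
        ]
    }
}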

Similarly, the cisco-gnmi CLI can also be used to do the gNMI SubscribeRequest operation.

Python script to automate configuration using LLDP and gNMI

Now that we’ve established that the cisco-gnmi library is installed and we are able to perform the different gRPC operations on our switch, we are already halfway there! This shows us that our switch configuration, OpenConfig RPM packages, and certificates are all working correctly. It also shows us that we can do a gNMI Capabilities, Get and Set successfully.

With all of this established, how do we write a Python script to do our original task? The script first applies a gNMI Capabilities method to check whether the OpenConfig model for LLDP is supported on the switch. Next, it performs a gNMI Get to retrieve the state of LLDP neighbors on the switch. With this information, we can extract the interfaces where a Linux host is detected and do a gNMI Set to configure those interfaces with “switchport mode trunk”. And that’s it! We now have a working Python 3 script that automates our task, using a completely open model with gNMI!

The complete code can be found on GitHub, but I would like to point out a few things here.

The code defines a new class called ConfigFunctions(). Within that class, the following functions are defined: Init, lldp_capability, get_lldp_ifs and set_trunk_host to perform our required operations. We also have a helper function called get_gnmi_json_val to convert our data from protobuf to JSON and decode the base64 string to UTF-8, so we can easily parse through it.
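For reference, the decoding helper could look roughly like the sketch below. It follows the standard gNMI GetResponse message structure; the actual repository code may handle more cases.

import base64
import json

def get_gnmi_json_val(get_response):
    # gNMI Get returns the value in the json_val field of the first
    # update of the first notification; NX-OS base64-encodes the JSON.
    val = get_response.notification[0].update[0].val.json_val
    decoded = base64.b64decode(val).decode("utf-8")
    return json.loads(decoded)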

Now with all of this in place, let’s fire this script up!

ciscoprep@Ubuntu-host:~$ python3 lldp-gnmi-getpython.py
Overriding SSL option from certificate could increase MITM susceptibility!
openconfig-lldp model supported on device
Setting up Interface: eth1/3
response {
    path {
        origin: "openconfig"
        elem {
            name: "openconfig-interfaces:interfaces"
        }
    }
    op: UPDATE
}
timestamp: 1597283880756493972
ciscoprep@Ubuntu-host:~$

As you will see in the code, the xpath that specifies the OpenConfig model is invoked in the set_trunk_host function.

xpath = "openconfig-lldp:lldp/interfaces/interface[name='"+interface+"']/neighbors/neighbor/state/system-description"

There are a couple of important points to note when using the OpenConfig model:

First, we currently do not have a method to convert an interface from Layer 3 to Layer 2 mode using OpenConfig. The example script assumes that the interface being configured is in the default Layer 2 mode. However, if we’d like to add this configuration through gNMI Set, we can use the device YANG models.

Second, the OpenConfig model for LLDP that I used in my example did not include information about the local interface. Because of this, the example script iterates through the interfaces to identify the one on which the Linux host is detected.

Tuesday, 18 August 2020

Cisco Launches SD-WAN Cloud Interconnect Ecosystem with Megaport

Enterprises are consuming more business-critical cloud applications, and most connect to the cloud over the Internet. However, the Internet offers only best-effort connectivity with inconsistent network quality, which can impact application performance significantly.

Enterprises can also choose direct cloud interconnects for their site-to-cloud connectivity. However, these “mid-mile” interconnects require customers to plan for capacity and global reach upfront, which can lead to underutilization and spiraling costs.

Today we are announcing a collaboration with Megaport, which offers Software-Defined Cloud Interconnects (SDCI). SDCI provides programmable cloud interconnects that bridge enterprise SD-WAN sites to clouds in minutes instead of weeks, with strong performance and high reliability.

Cisco’s vManage will act as the overlay for software-defined cloud interconnects, providing ease of management and the capability to rapidly instantiate connections.

This collaboration will offer Cisco’s SD-WAN customers access to Megaport’s global reach. Megaport offers extensive connectivity choices, backed by service-level guarantees for assurance. It includes peering with colocation data centers, with a global footprint across 23 countries. Megaport connects to more than 200 cloud on-ramps, including leading SaaS services like Office 365 and Salesforce, and to the six largest public cloud providers: AWS, Azure, Google, Oracle, IBM, and Alibaba. The Megaport ecosystem also connects to 200 network service providers, more than 700 data centers, and 360 IT service and as-a-service providers.

With this new collaboration, Cisco customers can leverage Cisco’s SD-WAN management platform, vManage, to software-define their cloud interconnects to multicloud and SaaS. With this integration, the Cisco SD-WAN fabric will act as the overlay, and the Megaport Software Defined Network will act as the underlay.

This collaboration extends Cisco’s SD-WAN leadership by offering an ecosystem platform for partners, of which Megaport is the first, to bridge the Cisco SD-WAN fabric with carrier-neutral, software-defined cloud interconnect fabrics.