Showing posts with label Cisco Nexus 9000. Show all posts

Saturday, 25 December 2021

How Cybersecurity Leads to Improved Sustainability

After managing the sudden switch to remote work in 2020, organizations are making a more permanent transition to a flexible hybrid workforce. The Federal Bureau of Investigation (FBI) found that cybersecurity attacks rose three- to four-fold during the 2020 transition to remote work. In addition, experts predict that ransomware will cost the world up to $20 billion in 2021 and expect it to become an even greater concern under the hybrid work model. As a result, you’ll need to rapidly scale your security to account for the massive influx of remote and hybrid workers while simplifying and unifying your IT systems.

While implementing security controls is increasingly important, doing so traditionally means more hardware appliances and virtual instances to secure different parts of the infrastructure. All that extra equipment consumes more power and dissipates more heat, with adverse impacts on the environment. Cisco is addressing this situation in a couple of ways, one of which is building security features directly into our switches so that separate security appliances are unnecessary.

Innovative methods to detect malware within encrypted layers

As an example, let’s look at a scenario in which a traditional deployment is used to decrypt traffic and identify malware. As shown in Figure 1, you would first decrypt the traffic, then apply analysis (inspection / anti-malware), and finally re-encrypt it. The resulting power consumption is shown in Table 1.

Figure 1. Traditional deployment using Secure Sockets Layer (SSL) inspection

Table 1. Power consumption in a traditional deployment

As displayed in Table 1, the total power consumption of all the devices is close to 9,500 W. In the sustainable method we offer, the Cisco Secure Network Analytics (Cisco Stealthwatch) components, such as the Stealthwatch Management Console (SMC) and Flow Collector (FC), are virtualized and can be deployed on existing x86 servers without additional devices, as shown in Figure 2.

Figure 2. Innovative and sustainable option using Cisco Secure Network Analytics (Stealthwatch)

In this scenario, Stealthwatch’s patented technology allows analysis of encrypted traffic without decryption. The Encrypted Traffic Analytics (ETA) module in the Catalyst switch provides Stealthwatch with the extra information it needs to analyze encrypted traffic without decrypting it.

Table 2. Power consumption using Cisco Secure Network Analytics with Catalyst switches

Because the Stealthwatch components are virtual, they can be deployed on an existing x86 server, and their power consumption is minimal compared to dedicated appliances.
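As a rough illustration of the difference, the ~9,500 W figure above can be converted to annual energy use and compared against a virtualized footprint. The 1,000 W virtualized figure below is an assumption for illustration only, not a measured value:

```python
HOURS_PER_YEAR = 24 * 365

def annual_kwh(watts: float) -> float:
    """Convert a continuous power draw in watts to kWh per year."""
    return watts * HOURS_PER_YEAR / 1000

traditional_w = 9500   # dedicated SSL-inspection appliances (from Table 1)
virtualized_w = 1000   # assumed incremental draw on existing x86 servers

saved = annual_kwh(traditional_w) - annual_kwh(virtualized_w)
print(f"Estimated annual saving: {saved:,.0f} kWh")
```

Even under this conservative assumption, the virtualized approach saves tens of thousands of kilowatt-hours per year per deployment.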

Another way Cisco supports sustainable cybersecurity is by including functionalities such as load balancing, packet-broker functions, switching, and routing in a single appliance.

Tables 3 and 4 highlight the difference in total power consumed between the traditional method and the innovative new method for identifying malware in encrypted traffic:

Table 3. Traditional method power consumption

All the functionalities listed in Table 3 are now available in a single switch, such as the Cisco Nexus 9300, which has the following power consumption:

Table 4. Power consumption using Cisco Nexus

This shows that there are alternative methods to detect malware within encrypted layers that are more sustainable, more efficient, and less expensive than traditional deployments.

Source: cisco.com

Tuesday, 2 November 2021

Simplify Hybrid Cloud Networking with Cisco Nexus Dashboard

Simplicity is the ultimate sophistication.  – Leonardo da Vinci

For IT, complexity is the antithesis of agility. However, with the increased demand for remote healthcare, distance learning, hybrid work, and surging dependence on online retail, there is an urgent shift to hybrid and cloud-native applications to keep up with the necessary digital transformations—thus adding complexity.

Hybrid cloud is now the reality for nearly all enterprises. Workloads are distributed across on-premises, edge, and public clouds. However, seamless operation of hybrid cloud applications across distributed environments needs to address stringent location-dependent requirements such as low latency, regional data compliance, and resiliency. Adding to the complexity is the additional need for governance—compliance, security, and availability—to which networking teams need to adhere. The need for visibility and insights closer to where data is created and processed—on-premises, cloud, and at the edge—is also critical.

Hybrid Cloud Networking Challenges
 
How does an operations team deal with this complex new hybrid cloud networking reality? They need three operational capabilities:

◉ Obtain a unified correlated and comprehensive view of the infrastructure.
◉ Gain the ability to respond proactively across people, process, and technology silos.
◉ Deliver at the speed of business, without increasing operating costs and tool sprawl.

It is a multidimensional challenge for IT to keep applications and networks in sync. With the ever-increasing scope of the roles of NetOps and DevOps, an automation toolset is needed to accelerate hybrid cloud operations and securely manage the expansion from on-prem to cloud.

Flexible Hybrid Cloud Networking with Cisco Nexus Dashboard


Cisco Nexus Dashboard 2.1, the newest of Cisco’s cloud networking platform innovations, will help IT simplify the transition to hybrid applications using a single agile platform. Besides bridging the gap in tooling, one of the major capabilities of the Nexus Dashboard is enabling a flexible operational model for different personas—NetOps, DevOps, SecOps, and CloudOps—across a plethora of use cases.

Cisco Nexus Dashboard: One Scalable, Extensible Platform Across Global Hybrid Infrastructure

Conventionally, operators relied on disjointed tools for specific functions across connectivity, visibility, and security. By natively integrating multiple capabilities, as well as third-party services, into the Cisco Nexus Dashboard, Cisco is simplifying the overall experience for IT.

Operators can now manage their hybrid cloud network infrastructure with ease from a single automation and operations platform, Cisco Nexus Dashboard—whether they are running Cisco Application Centric Infrastructure (ACI) or Cisco Nexus Dashboard Fabric Controller (NDFC) in their hybrid cloud infrastructures.

New innovations with Nexus Dashboard 2.1 include availability on AWS and Azure marketplaces; Nexus Dashboard One View, which provides a single cohesive view of all the sites being managed and the services installed across Nexus Dashboard clusters; advanced endpoint analytics; scalable connectivity through Nexus Dashboard Orchestrator (NDO); Nexus Dashboard Insights (NDI); Nexus Dashboard Data Broker (NDDB) service; and many more capabilities. Let’s look at five capabilities of Cisco Nexus Dashboard 2.1 that are delighting customers.

1. Hybrid Cloud Connectivity at Scale with Nexus Dashboard Orchestrator

New hybrid cloud capabilities include support for Google Cloud—in addition to AWS and Azure integrations—and connectivity automation capabilities to enable new use cases, such as:

◉ External Connectivity: Cloud VPCs/VNet to external devices (branch router, SD-WAN edge, colocation routers, or on-prem routers)

◉ Hybrid Cloud Connectivity: Automate connectivity for GCP, AWS, and Azure clouds and on-premises ACI sites using BGP and IPSec

◉ Stitching connectivity: Cloud VPCs/VNets to on-prem VRFs, including route management

Connectivity is established by BGP peering and IPSec tunnels connecting the cloud site’s Cloud Services Routers (CSR) or Google Cloud’s Native Cloud Router, to the external devices. Once connectivity is established, IT can enable route leak configurations to allow subnets from the external sites to establish connectivity with the cloud site’s VPCs/VNETs.
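As a sketch of the kind of pre-check this automation implies, the snippet below verifies that external subnets and cloud VPC/VNet CIDRs do not overlap before route leaking is enabled. The prefixes are hypothetical; NDO performs the actual configuration:

```python
import ipaddress

def safe_to_leak(external_subnets, vpc_cidrs):
    """Return overlapping prefix pairs; leaking overlapping prefixes
    invites asymmetric routing and blackholes, so an empty list means 'go'."""
    overlaps = []
    for ext in map(ipaddress.ip_network, external_subnets):
        for vpc in map(ipaddress.ip_network, vpc_cidrs):
            if ext.overlaps(vpc):
                overlaps.append((str(ext), str(vpc)))
    return overlaps

# Hypothetical branch subnet vs. cloud VPC CIDRs:
print(safe_to_leak(["10.1.0.0/16"], ["10.1.4.0/24", "172.16.0.0/16"]))
```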


2. Change Management Workflow with Nexus Dashboard Orchestrator

In a modern enterprise IT team, there are typically multiple personas involved from design to deployment. The design team (Designer persona) can create and edit the Nexus Dashboard Orchestrator templates and send them to the deployment team (Approver/Deployer persona) for approval. The deployment team reviews and approves templates ahead of a change management window and queues them for deployment during the actual change window.

Starting with the Nexus Dashboard Orchestrator 3.4(1) release, a structured persona-based change management workflow provides additional operational flexibility. Three personas for template management are available: Designer, Approver, and Deployer. An admin can assume one of these roles or a combination of them.

◉ Designers: Create and edit template application policies and send them to Approvers for review and approval.
◉ Approvers: Review the templates and either approve them for deployment or reject the proposed changes and send them back to the Designer to update based on comments.
◉ Deployers: Deploy templates or initiate a rollback to a previous template version.

When Approvers review the templates, they have a GitHub-style “diff view” to clearly compare the before and after changes so they can easily review, approve, reject, and comment on the template differences.
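The diff view can be pictured with Python’s standard difflib over two versions of a (hypothetical) template:

```python
import difflib

# Two versions of an invented template, as lists of policy lines:
v1 = ["vrf: prod", "subnet: 10.0.1.0/24", "contract: web-to-db"]
v2 = ["vrf: prod", "subnet: 10.0.2.0/24", "contract: web-to-db"]

# unified_diff marks removed lines with '-' and added lines with '+',
# exactly the GitHub-style view an Approver reviews:
diff = list(difflib.unified_diff(v1, v2, fromfile="template@v1",
                                 tofile="template@v2", lineterm=""))
print("\n".join(diff))
```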


Deployers have two additional new capabilities for effective change management operations:

◉ Configuration preview: A preview of the exact configuration—XML Post and graphical views—that will be deployed to the sites, so the Deployer can decide to proceed with or abort the deployment.

◉ Template versioning / rollback: Each template is automatically versioned on save or deploy, giving the Deployer the ability to roll back to previous template versions. During rollback, the Deployer can see the GitHub-style diff between the two versions and decide whether to proceed.

Since Nexus Dashboard Orchestrator change management is fully API-based, IT can integrate the workflow with in-house tools currently in use.
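Because the workflow is API-driven, an in-house tool could drive template state transitions programmatically. The endpoint path and payload below are hypothetical placeholders, not the documented NDO API schema:

```python
import json

NDO = "https://nd.example.com/mso/api/v1"   # hypothetical base URL

def approval_request(template_id: str, action: str, comment: str = "") -> dict:
    """Build a state-transition request (approve/deny) for a template;
    a real client would POST req['body'] to req['url']."""
    assert action in {"approve", "deny"}
    return {
        "url": f"{NDO}/templates/{template_id}/{action}",
        "body": json.dumps({"comment": comment}),
    }

req = approval_request("tmpl-42", "approve", "LGTM for tonight's window")
print(req["url"])
```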

3. Unify Hybrid Cloud Operations with Nexus Dashboard One View

With Nexus Dashboard 2.1, IT can operate a distributed environment across multiple clusters from a single focal point of control, with the ability to span visibility across fabrics. The scale-out architecture adapts to growing operational needs, while the One View capability provides a single-pane-of-glass experience with support for Single Sign-On (SSO) and Role-Based Access Control (RBAC). This enables operators to consume the insights, advisory, and assurance stack as a unified offering to address prevention, diagnosis, and remediation.

Cisco Nexus Dashboard One View
 
Nexus Dashboard 2.1 takes visibility of network traffic up a notch with support for flow drops, giving IT the ability to identify packet drops in the network as well as their location and causes. Flows impacted by switch events such as buffer drops, forwarding drops, ACL drops, and policer drops are identified using Flow Table Events (FTE).
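Conceptually, consuming FTE data amounts to correlating drop events to flows and reasons. A toy sketch with an invented event structure (in practice, the real export is consumed by Nexus Dashboard Insights):

```python
from collections import Counter

# Hypothetical drop events exported for monitored flows:
events = [
    {"flow": "10.0.0.1->10.0.1.5", "reason": "buffer"},
    {"flow": "10.0.0.2->10.0.1.5", "reason": "acl"},
    {"flow": "10.0.0.1->10.0.1.5", "reason": "buffer"},
]

# Tally drops by cause to see what dominates on this interface:
by_reason = Counter(e["reason"] for e in events)
print(by_reason.most_common(1))
```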

Cisco Nexus Dashboard Data Broker

In addition, Cisco Nexus Dashboard Data Broker (NDDB) is one of the newest Nexus Dashboard services; it facilitates visibility by filtering the aggregated traffic and forwarding traffic of interest to tools for analysis. It is a multi-tenant-capable solution that can be used with both Cisco Nexus and Cisco Catalyst fabrics.

4. Predictive Change Management with Nexus Dashboard Insights

IT can now predict the impact of the intended configuration changes to reduce risk.

◉ Test and validate proposed configurations before rolling out the changes
◉ Proactive checks to prevent compliance violations, while minimizing downtime and Total Cost of Ownership
◉ Continuous assurance to address compliance and security posture

Predictive Change Management with Nexus Dashboard Insights

5. Nexus Dashboard APIs: Automation and Operational Agility for NetOps and DevOps

Cisco Nexus Dashboard now enables a rich suite of services through APIs for third-party developers to build custom apps and integrations. Nexus Dashboard APIs enable automation of intent using policy, lifecycle management, and governance with a common workflow. For example, IT can consume ITSM and SIEM solutions with ServiceNow and Splunk apps available through Nexus Dashboard.


The HashiCorp Terraform and Red Hat Ansible modules published for Nexus Dashboard enable DevOps, CloudOps, and NetOps teams to drive infrastructure automation, maintain network configuration as code, and embed infrastructure config in the CI/CD pipeline for operational agility.

Our Customers Love Nexus Dashboard, and You Will Too!


As a unified, simple to use automation and operations platform, Cisco Nexus Dashboard is the focal point that customers such as T-Systems can use to build, operate, monitor, troubleshoot, and manage their hybrid cloud networking infrastructure.


Are You Ready for Simplicity?


In IT operations, network automation is the key to simplify hybrid cloud complexity, meet KPIs, and increase ROI. Incorporating the needs of NetOps, DevOps, SecOps and CloudOps for full lifecycle operations is table stakes to make this a reality. The latest updates to Cisco Nexus Dashboard deliver the simplicity expected by IT operations teams to become a trusted partner in their digital transformation journey.

Source: cisco.com

Saturday, 23 October 2021

The Future of Broadcast: The All-IP Olympics

This summer, we witnessed the future of broadcasting, and it wasn’t the first time the Olympics were involved. When the Games were first held in Tokyo in 1964, they made history as the first live televised Olympics. Fifty-seven years later, with the help of 6,700 pieces of Cisco equipment, NBC Olympics was able to deliver more than 7,000 hours of coverage across multiple platforms. Behind the scenes, Cisco helped power the first all-IP production in the host city for NBC Olympics’ coverage of the Games.

IP networking is a proven and robust technology, as evidenced by the IP-based enterprise networks that support so many businesses and organizations. The tremendous benefit of IP is that it enables new workflows that simply aren’t possible with legacy video technology. These new workflows enable broadcasters to fundamentally transform how they create and deliver content while lowering their operating expenses. And they can do this without negatively impacting the reliability or real-time delivery of content.

Improving Capabilities & Visibility

Consider a workflow like distributed production (see Figure 1). Traditionally, all participants in a live broadcast, from those being filmed to those doing the filming, had to be in the same location. With distributed production, each group can be in its own location. A host or commentator could be on one continent while athletes are on another, and the production team is yet again somewhere else. This allows for a lighter onsite crew and for production teams to work in their home production studios with full access to all of their usual tools and equipment.

Figure 1: A distributed production workflow allows production, participants, and commentators to be located anywhere in the world.

This was truer than ever for NBC Olympics because of COVID-19. Production was split between crews in Tokyo and employees back at NBC Olympics’ studios in Stamford, New York, Englewood Cliffs, and Miami, and at Sky Sports in the UK. It was more important than ever to be able to send content back to the video team for editing and post-production before distribution. Reliability, always important, was even more vital given the scale of these Games.

Delivering Live Production


To deliver live production, the IP network at the International Broadcast Centre (IBC) had to guarantee reliable transport of uncompressed video (SMPTE 2110). Cisco Nexus 9000 switches, deployed in a hybrid spine-leaf network running Cisco’s innovative Non-Blocking Multicast (NBM) technology, made this possible. NBM provides end-to-end bandwidth guarantees for all multicast flows without relying on traditional equal-cost multipath-based flow load balancing. The flexibility of IP ensured that all flows within the IBC were reliable while meeting capacity demands. Along with NBM, the Nexus 9000 switches distributed timing at scale using Precision Time Protocol (PTP), ensuring all endpoints were always in sync with nanosecond precision.
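NBM’s end-to-end bandwidth guarantee can be pictured as per-link admission control: a new multicast flow is admitted only if every link on its path has headroom. A toy model, with assumed link capacities:

```python
def admit(flow_gbps, path, link_free):
    """Reserve bandwidth on every link of `path`, or admit nothing."""
    if any(link_free[link] < flow_gbps for link in path):
        return False                  # would oversubscribe some link
    for link in path:
        link_free[link] -= flow_gbps  # reserve on the whole path
    return True

# Assumed 100 Gbps links between a leaf, a spine, and another leaf:
free = {"leaf1-spine1": 100.0, "spine1-leaf2": 100.0}
ok = admit(1.5, ["leaf1-spine1", "spine1-leaf2"], free)  # an uncompressed 2110 flow
print(ok, free)
```

Because reservations are tracked per link, an admitted flow can never be squeezed by later arrivals, which is the property that matters for uncompressed live video.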

In addition, Cisco Nexus 9000 switches, powered by Cisco’s Cloud Scale ASICs, provided granular visibility into critical aspects of the network, including tracking the bitrate of every multicast flow, following flow paths as signals travelled through the network, and streaming all of this information in real time, using software and hardware telemetry, to the Nexus Dashboard Fabric Controller (see Figure 2).

Figure 2: Flow analytics track the bitrate of every single flow in the network.

Simplification and automation were critical given the live nature of the Olympics. There wasn’t time for a tech to log into a switch and scan a session log to diagnose an issue. Using the Nexus Dashboard and Cisco Nexus Dashboard Fabric Controller (NDFC) gave NBC Olympics a single-pane-of-glass approach to network management. Combined with the granular visibility of Cisco Nexus 9000 switches (see Figures 3 and 4), NDFC provided real-time insights into network performance, all the way to the application level. This enabled NBC Olympics to identify and resolve issues before they became problems that could impact the quality of the broadcast.

Figure 3: The Cisco Nexus Dashboard provides flow information.

Figure 4: Monitoring precision time protocol performance on Cisco Nexus 9000 switches.

In addition to increasing reliability and simplifying management, NBC Olympics also realized substantial operational savings with an all-IP distributed production approach. While COVID-19 necessitated a reduced crew on the ground in Tokyo, the technology enabled teams in different countries or regions to do their work from their home base.

The flexibility of all-IP production also enables network and production investments to be reused for different events around the world. This reduces the overall carbon footprint of the entire industry and creates long-term operational savings while optimizing workflows.

Source: cisco.com

Friday, 8 October 2021

Eliminate Network Blind Spots with Visibility from Cisco Nexus 9000 Switches and ThousandEyes

Your organization depends on your network. As networks become more and more complex, the question arises: How do you know what the network is really doing?

Today’s data centers can extend far beyond their on-premises physical location. Data and applications can be with a co-location provider or across multiple cloud providers. For many organizations, data is distributed all around the globe in a web of micro-services and containers and, consequently, outside direct view and control.

With this wide variation of locations, the deployment of Cisco Nexus 9000 switches varies as well. They might provide a Data Center Interconnect (DCI), Cloud to Cloud Connectivity, or external connectivity to sites on the Internet. However, across this vast variation in deployment use-cases, one thing is common—there can be blind spots!


Consider that, whether for the Internet, Cloud Connectivity or Data Center Interconnect, the transport infrastructure is often provided by an external entity. This external entity, either a Service Provider or your Backbone team, more than likely doesn’t give you operational access and visibility into what some might call the “sausage making” of networking. And that limits visibility and therefore control.

Gaining Deeper Visibility

Visibility into transport infrastructure is essential to optimize the efficient and reliable management of the network. Deeper visibility provides key performance indicators (KPI) such as throughput, path information, latency, jitter, and loss. This information assists in rapidly detecting and remediating transient network degradations—those that can only be detected with continuous monitoring of KPIs over time. Even more importantly, recording this data effectively provides visibility back in time to not just mitigate issues, but to identify and correlate their root causes so they can be eliminated before they reoccur.
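These KPIs can be derived from simple probe data. In the sketch below, jitter is taken as the mean absolute difference between consecutive RTT samples (one common definition) and loss as the fraction of unanswered probes:

```python
def kpis(rtts_ms):
    """rtts_ms: list of round-trip times in ms, with None for a lost probe."""
    got = [r for r in rtts_ms if r is not None]
    loss = 1 - len(got) / len(rtts_ms)
    # Mean absolute inter-sample difference as a simple jitter estimate:
    jitter = (sum(abs(a - b) for a, b in zip(got, got[1:])) / (len(got) - 1)
              if len(got) > 1 else 0.0)
    return {"latency_ms": sum(got) / len(got), "jitter_ms": jitter, "loss": loss}

# Four probes, one lost:
print(kpis([10.0, 12.0, None, 11.0]))
```

Recording these values continuously over time is what allows transient degradations to be spotted and correlated after the fact.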

In the past, IT used a variety of approaches to attempt to collect actionable KPI data. For example:

◉ Placing compute resources in a co-location for the purpose of collecting telemetry data

◉ Connecting a server outside the DMZ for the purpose of measuring performance

◉ Adding a collector to the DCI to provide visibility

Figure 1. A Server Used as a Telemetry Sensor

However, as Figure 1 shows, these devices might not sit in the exact data path through which all traffic passes. Also, passive data collection does not provide critical visibility into the network paths that data traverses.

Integration with ThousandEyes


In August 2020, Cisco completed the acquisition of ThousandEyes, an Internet and cloud intelligence platform capable of expanding visibility into, and delivering insights about, the digital delivery of applications and services over the Internet and the cloud. Combined with Cisco’s strong cloud and data center network portfolio, integrating ThousandEyes vantage points into the Nexus 9000 enables unprecedented visibility directly from Nexus 9000 switches.

Instead of placing additional compute resources in co-locations, connecting them outside your DMZ, or adding them to your DCI, you can install ThousandEyes Enterprise Agents on Cisco Nexus 9000 switches. The agents measure across the exact paths that data takes, gathering crucial KPIs wherever a Cisco Nexus 9000 is present (see Figure 2).

Figure 2. Cisco Nexus 9000 hosting ThousandEyes Enterprise Agent

ThousandEyes and Nexus 9000 Integration Details


The Cisco Nexus 9000, in ACI or NX-OS mode, provides a hosting environment embedded in the switch’s Network Operating System (NOS) itself. Within NX-OS is a dedicated and secured Linux container (sLXC) environment for the ThousandEyes Enterprise Agent, called the Guest Shell. The agent is hosted in the sLXC and can access the switch’s bridging and routing tables for all its reachability needs. Because communication to and from the agent terminates in the Nexus 9000 itself, Control Plane Policing (CoPP) can enforce the allowed data rate for additional protection. Figure 3 shows a schematic diagram of the ThousandEyes Enterprise Agent in a Cisco Nexus 9000 running NX-OS.

Figure 3. ThousandEyes Enterprise Agent hosting in Cisco Nexus 9000 (NX-OS)

Scalability, of course, is a key consideration. With tens, hundreds, or even thousands of switches in a network, simplified agent lifecycle management is crucial. While the ThousandEyes Enterprise Agent can be manually installed into the NX-OS Guest-Shell, the Cisco Nexus Dashboard Fabric Controller (NDFC) provides an integrated workflow to activate the functionality with a single click (see Figure 4).

Figure 4. Agent Install on Cisco Nexus 9000 (NX-OS)

The automated install/uninstall in NDFC provides all the necessary configuration settings so the latest version of the ThousandEyes Enterprise Agent can be downloaded directly from the Cisco repository. Furthermore, the agent is automatically onboarded to the ThousandEyes Dashboard for the test-setup phase of deployment. While Cisco NDFC provides unified configuration and installation of agents, you can still choose other tools, such as Ansible playbooks, to perform these tasks.

Better Together for Deep Visibility


Operating a data center network requires a versatile and flexible approach to management with deep visibility into the network, including transport infrastructure. The deep linking and integration of Data Center Interconnect (inter-DC) visibility (ThousandEyes) with data center infrastructure (Cisco Nexus 9000) provides access to the KPIs needed to measure performance, quickly detect and resolve network issues, and correlate root causes to eliminate issues in the future.

Cisco continues to integrate new capabilities into Cisco Nexus Dashboard to provide a granular view into the many corners of the extended enterprise network. Today Nexus Dashboard has deepening integrations with Nexus 9000 switches, Cisco Insights, AppDynamics, and of course ThousandEyes, to improve end-to-end visibility from data center, to cloud, to applications and the workforce. With Cisco Nexus Dashboard as a single point of control for visibility and insights, IT has the ability to foresee and mitigate many of the potential issues that impact the workforce and business operations before they become impediments to progress and profits. And so, the journey continues…

Source: cisco.com

Friday, 16 July 2021

Nanosecond Buffer Visibility with Hardware-Based Microburst Detection

What Are Microbursts and Why Do They Matter?

Ever wondered why a switch interface shows an average utilization of well below wire rate, and yet egress discards are incrementing? Most likely, that interface is experiencing microbursts. Often, when multiple input interfaces simultaneously receive traffic destined to a single egress interface – a so-called “incast” traffic pattern – no problem arises because the instantaneous receive rate is low enough that the output interface can handle the load.

The term “microburst” refers to the same situation, but where the receive rate of those interfaces in aggregate exceeds the wire rate of the output interface for some time. In this case, the excess traffic must be buffered. If enough such traffic arrives simultaneously, the buffer on the output interface can fill and potentially overflow, resulting in discards. Figure 1 illustrates the microburst concept.

Figure 1: Microburst Concept

In the example shown in Figure 1, three interfaces simultaneously receive a series of back-to-back packets with a minimum inter-packet gap (IPG). The destination must transmit those packets but can only transmit at the maximum rate of the output interface. In this case, all four interfaces are the same speed, so the transmit interface is forced to buffer the excess traffic. If the burst is short-lived, the transmit interface will eventually empty the buffer and only a small latency penalty is paid. But if these traffic bursts last long enough, the buffer can overflow, resulting in egress discards. While at times packet drops are benign or at least productive – for example, randomly dropping frames to prevent congestion buildup while avoiding TCP window synchronization – they can also negatively impact application performance, not to mention simply causing concern among network operations staff.
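The incast in Figure 1 can be modeled as a toy queue: up to three packets arrive per tick but only one can be transmitted, so the excess buffers and, once an (assumed) buffer limit is reached, drops:

```python
def simulate(arrivals, buffer_pkts=8):
    """arrivals: packets arriving per tick; returns total egress drops."""
    q = drops = 0
    for a in arrivals:
        q += a
        if q > buffer_pkts + 1:           # buffer full (plus one in flight)
            drops += q - (buffer_pkts + 1)
            q = buffer_pkts + 1
        q -= min(q, 1)                    # transmit at most 1 packet per tick
    return drops

# Five ticks of 3-to-1 incast, then the burst ends:
print(simulate([3, 3, 3, 3, 3, 0, 0, 0]))
```

A short burst drains with only a latency penalty; sustain the incast a little longer and the drop counter starts climbing, exactly the behavior the interface counters hint at.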

If egress interface discards are incrementing, how can it be confirmed that microbursts are indeed occurring, and if so, how often and how long-lived they are? Is congestion only occasional, or is a given interface perennially congested, which might warrant workload redistribution, configuration changes, or other action? Traditional methods such as monitoring interface counters do not offer the needed visibility – such counters are typically read by software at relatively long intervals (often 10 seconds or more) and therefore tend to “smooth out” bursty traffic patterns. That’s where the Cisco Nexus 9000 series Data Center switches come into the picture.

What Is Hardware-Based Microburst Detection and How Does It Work?


Cisco Nexus 9000 series Data Center switches, including both fixed-form-factor Nexus 9300-EX/FX/FX2/FX3/GX (as well as the 9364C and 9332C) and modular Nexus 9500-EX/FX/GX platforms, provide advanced hardware capabilities that make detecting and measuring microbursts easy. Based on custom Cisco silicon known as the Cloud Scale ASIC family, these switches provide granular per-interface per-queue monitoring for hard-to-identify traffic microbursts, for both unicast and multicast traffic.

Each queue is instrumented with trigger-based microburst measurement capabilities. When the buffer utilization for a monitored queue crosses a configurable “rising” threshold, the silicon captures the exact moment that threshold was reached using a nanosecond granularity timestamp; as the buffer continues to fill, the “peak” depth of that queue is recorded along with another timestamp; and finally, as the queue drains, a third and final timestamp is recorded as the queue drops below a “falling” threshold. The result is a series of raw records that looks like the output in Figure 2.

Figure 2: Raw microburst records (NX-OS)

Consuming Microburst Data for Analysis


Now that we can detect when microbursts occur, how often they occur, and how severe they are, what can we do with that data? Of course, you can always observe the burst data directly on the switch (running NX-OS software) using the "show queuing burst-detect" command. This option is the most basic and may suffice for certain situations, such as a quick spot check of activity on an interface or queue, but in most cases you'll want to retrieve the data from the switch for consumption and analysis by other systems.

The powerful streaming telemetry capability in NX-OS software offers an excellent option for getting microburst data off the switching infrastructure and into other systems for further analysis, trending, correlation, and visualization. NX-OS software streams telemetry data using JSON or Google Protocol Buffers (GPB) encoding over a variety of transport options, allowing platforms provided by Cisco or third parties, or developed directly by IT, to easily ingest and parse the data generated by the switching infrastructure.
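As a sketch of the consuming side, the minimal Python example below parses a JSON-encoded telemetry payload into per-queue burst summaries. The payload keys are invented for illustration; the actual NX-OS telemetry schema varies by release and sensor path.

```python
import json

# Minimal sketch of a telemetry consumer. NX-OS can stream JSON-encoded
# data over several transports; the payload keys below are invented for
# illustration and do not match the real NX-OS schema.

payload = json.dumps({
    "node": "leaf-101",
    "rows": [
        {"interface": "eth1/7", "queue": 7,
         "start_ts_ns": 171000, "peak_ts_ns": 171400,
         "peak_bytes": 524288, "end_ts_ns": 172900},
    ],
})

def summarize(raw):
    """Flatten burst records into (node, interface, queue, peak, duration_us)."""
    msg = json.loads(raw)
    return [(msg["node"], r["interface"], r["queue"], r["peak_bytes"],
             (r["end_ts_ns"] - r["start_ts_ns"]) / 1_000)
            for r in msg["rows"]]

for rec in summarize(payload):
    print(rec)
```

A real collector would subscribe over the chosen transport and feed these summaries into a time-series store for trending and alerting.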

The Cisco Nexus Dashboard Insights application easily handles configuration, consumption, and analysis of microburst data from one or more switch fabrics—both NX-OS based as well as ACI based—quickly alerting network operators of excessive microburst activity across the network. Figure 3 shows an example of a microburst-related anomaly generated by Nexus Dashboard Insights upon observing multiple microburst events occurring on a given interface over a short period of time.

Figure 3: Nexus Dashboard Insights Microburst Anomaly

As shown in Figure 3, Nexus Dashboard Insights not only identifies the device, interface, and queue experiencing microbursts, but also correlates those burst events to monitored flows traversing the interface that may have contributed to the burstiness, based on flows with the largest measured max burst values. This detailed information provides an unprecedented level of visibility into network behavior, enabling network operators to quickly identify and remediate congestion hot-spots network-wide.

Key Takeaways


Sometimes, the whole is greater than the parts – that’s certainly the case with the advanced hardware capabilities of Cisco Nexus 9000 series switches, the standards-based streaming telemetry provided by NX-OS, and the cutting-edge microservices-based Day 2 Operations functions provided by the Nexus Dashboard Insights application. Together, these technologies greatly simplify the process of identifying congestion in the network before it becomes a significant problem, making network operations teams more productive and more effective than ever before!

Source: cisco.com

Sunday, 24 January 2021

Dynamic Service Chaining in a Data Center with Nexus Infrastructure


In an application-centric data center, the network needs to have maximum agility to manage workloads and incorporate services such as firewalls, load balancers, proxies and optimizers. These network services enhance compliance, security, and optimization in virtualized data centers and cloud networks. Data center ops teams need an elegant method to insert service nodes and have the ability to automatically redirect traffic using predefined rules as operations change.

Enterprises running their data centers on the Nexus 9000 and NX-OS platform can now seamlessly integrate service nodes into their data center and edge deployments using the new Cisco Enhanced Policy Based Redirect (ePBR) to easily define and manage rules that control how traffic is redirected to individual services.

Challenges with Service Insertion and Service Chaining

The biggest challenge when it comes to introducing service nodes in a data center is onboarding them into the fabric, and subsequently creating the traffic redirection rules. Today, there are two ways of implementing traffic redirection rules – by influencing the traffic path using routing metrics, or by selective traffic redirection using policy-based routing.

The challenge with using routing to influence the forwarding path is that all traffic traverses the same path, which often makes the service node a bottleneck. The only practical way to achieve scale is to vertically scale the node, which is expensive and limited by how far the node can be expanded.

Policy Based Routing (PBR) rules are also complex to maintain since separate rules are needed for forward and reverse traffic directions in order to maintain symmetry for stateful service nodes. In addition, when there are multiple service nodes in a chain, maintaining PBR rules to redirect traffic across them increases complexity even more.

Introducing Enhanced Policy Based Redirect

NX-OS version 9.3(5) provides Enhanced Policy Based Redirect. The goal of ePBR is to solve some of the challenges with existing redirection rules. In a nutshell, ePBR:

◉ Simplifies onboarding service nodes into the network

◉ Creates selective traffic redirection rules across a single node or a chain of service nodes

◉ Auto-generates reverse redirection rules to maintain symmetry across a service node chain

◉ Provides the ability to redirect and load-balance

◉ Supports pre-defined and customizable probes to monitor the health of service nodes

◉ Supports the ability to either drop traffic, bypass a node, or fallback to routing lookup when a node in a chain fails

ePBR supports all of these capabilities across a fabric running VXLAN with BGP EVPN, as well as classic core/aggregation/access data center deployments, at line rate, with no penalty to throughput or performance. Let's look at three ePBR use cases.
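As a rough illustration of how compact the resulting configuration is, the fragment below sketches onboarding a firewall service and attaching a redirection policy with ePBR. All names, addresses, and interfaces are placeholders, and the keywords are paraphrased from memory; treat this as the shape of the configuration, not a copy-paste recipe, and consult the NX-OS ePBR configuration guide for the exact syntax on your release.

```
! Illustrative ePBR sketch -- placeholder names, paraphrased syntax
feature epbr

epbr service FIREWALL
  service-end-point ip 172.16.1.2 interface Vlan100
    reverse ip 172.16.2.2 interface Vlan101

epbr policy TENANT-A
  match ip address WEB-TRAFFIC
    10 set service FIREWALL fail-action drop

interface Vlan2000
  epbr ip policy TENANT-A
```

Because ePBR auto-generates the reverse redirection rules, there is no separate mirror policy to maintain for return traffic; probes and fail-action behavior are attached per service.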

Use Case 1: ePBR for Selective Traffic Redirection

Various applications may require redirection across different sets of service nodes. With ePBR, redirection rules can match application traffic on source/destination IP addresses and Layer-4 ports and redirect it across different service nodes or service chains. In the diagram below, client traffic for Application 1 traverses the firewall and IPS, whereas traffic for Application 2 traverses the proxy before reaching the server. This flexibility enables customers to onboard multiple applications on their network while complying with security requirements.

Use Case 1: ePBR for Selective Traffic Redirection

Use Case 2: Selective Traffic Redirection Across Active/Standby Service Node Chain


In this use case, traffic from clients is redirected to a firewall and load-balancer service chain, before being sent to the server. Using probes, ePBR intelligently tracks which node in each cluster is active and automatically redirects the traffic to a new active node if the original active node fails. In this example, the service chain is inserted in a fabric running VXLAN. As a result, traffic from clients is always redirected to the active firewall and then the active load-balancer.

Use Case 2: Selective Traffic Redirection Across Active/Standby Service Node Chain

Use Case 3: Load-Balancing Across Service Nodes


With exponential growth in traffic, ePBR can intelligently load-balance across service nodes in a cluster, providing the ability to horizontally scale the network. ePBR ensures symmetry is maintained for a given flow by making sure that traffic in both forward and reverse directions is redirected to the same service node in the cluster. The example below shows how traffic inside a mobile packet core is load-balanced across a cluster of TCP optimizers.
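The symmetry property can be sketched in a few lines: hashing a direction-independent key (the sorted endpoint pair) guarantees that both directions of a flow select the same node. This is an illustrative model of the idea, not the switch's actual hash.

```python
import hashlib

# Illustrative model (not the switch's actual hash) of symmetric
# load-balancing: hashing a direction-independent key makes forward and
# reverse traffic of the same flow pick the same service node.

def pick_node(src_ip, dst_ip, nodes):
    # Sorting the endpoints makes the key identical in both directions
    key = repr(tuple(sorted((src_ip, dst_ip)))).encode()
    digest = hashlib.sha256(key).digest()
    return nodes[int.from_bytes(digest[:4], "big") % len(nodes)]

optimizers = ["tcp-opt-1", "tcp-opt-2", "tcp-opt-3"]
fwd = pick_node("10.0.0.5", "198.51.100.9", optimizers)   # client -> server
rev = pick_node("198.51.100.9", "10.0.0.5", optimizers)   # server -> client
print(fwd == rev)   # True: both directions land on the same optimizer
```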

Use Case 3: Load-Balancing Across Service Nodes

Improving Operational Efficiency with Innovations in Cisco ASICs and NX-OS

Cisco continues to provide value to our customers by fully leveraging capabilities designed into Cisco ASICs and innovations in NX-OS software. ePBR enables the rapid onboarding of a variety of services into data center networks and simplifies how traffic chaining rules are set up, reducing time spent provisioning services and improving overall operational efficiency.

Friday, 4 December 2020

All Tunnels Lead to GENEVE


As a global citizen, I’m sure you came here to read about Genève (French) or Geneva (English), the city situated in the western part of Switzerland. It’s a city or region famous for many reasons including the presence of a Cisco R&D Center in the heart of the Swiss Federal Institute of Technology in Lausanne (EPFL). While this is an exciting success story, the GENEVE I want to tell you about is a different one.

GENEVE stands for “Generic Network Virtualization Encapsulation” and is an Internet Engineering Task Force (IETF) standards track RFC. GENEVE is a Network Virtualization technology, also known as an Overlay Tunnel protocol. Before diving into the details of GENEVE, and why you should care, let’s recap the history of Network Virtualization protocols with a short primer.

Network Virtualization Primer

Over the years, many different tunnel protocols came into existence. One of the earlier ones was Generic Routing Encapsulation (GRE), which became a handy method of abstracting routed networks from the physical topology. While GRE is still a great tool, it lacks two main capabilities, which limits its versatility:

1. The ability to expose the variation in the tunneled (original) traffic to the outside, the Overlay Entropy, so the transport network can hash flows across all available links.

2. The ability to provide a Layer-2 Gateway, since GRE was only able to encapsulate IP traffic. Options to encapsulate other protocols, like MPLS, were added later, but the ability to bridge never became an attribute of GRE itself.
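The entropy point can be illustrated with the approach later adopted by IP/UDP encapsulations such as VXLAN: derive the outer UDP source port from a hash of the inner flow, so transit routers can ECMP-hash tunneled flows without parsing the payload. The hash and port range below are illustrative, not a specific vendor implementation.

```python
import hashlib

# Sketch of "overlay entropy": compute the outer UDP source port from a
# hash of the inner flow so that transit routers, which only see the
# outer headers, can still spread tunneled flows across ECMP links.
# Hash choice and port range are illustrative.

def outer_src_port(inner_flow, lo=49152, hi=65535):
    """inner_flow: (src_ip, dst_ip, proto, sport, dport) of the payload."""
    digest = hashlib.sha256(repr(inner_flow).encode()).digest()
    return lo + int.from_bytes(digest[:2], "big") % (hi - lo + 1)

flow_a = ("10.1.1.1", "10.2.2.2", 6, 33012, 443)
flow_b = ("10.1.1.1", "10.2.2.2", 6, 33013, 443)
print(outer_src_port(flow_a), outer_src_port(flow_b))  # stable per flow
```

GRE, lacking such an outer field, presents transit routers with near-identical headers for every tunneled flow.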

With the limited extensibility of GRE, the network industry became more creative as new use-cases were developed. One approach was to use Ethernet over MPLS over GRE (EoMPLSoGRE) to achieve the Layer-2 Gateway use case. Cisco called it Overlay Transport Virtualization (OTV); other vendors pursued Network Virtualization using GRE (NVGRE). While OTV was successful, NVGRE had limited adoption, mainly because it came late to Network Virtualization, at the same time that the next-generation protocol, Virtual Extensible LAN (VXLAN), was already making inroads.

A Network Virtualization Tunnel Protocol

VXLAN is currently the de-facto standard for Network Virtualization Overlays. Based on the Internet Protocol (IP), VXLAN also has a UDP header and hence belongs to the family of IP/UDP-based encapsulations or tunnel protocols. Other members of this family are OTV, LISP, GPE, GUE, and GENEVE, among others. The importance lies in their similarities and their close relation and common origin within the Internet Engineering Task Force's (IETF) Network Virtualization Overlays (NVO3) working group.

Network Virtualization in the IETF


The NVO3 working group is chartered to develop a set of protocols that enables network virtualization for environments that assume IP-based underlays (the transport network). An NVO3 protocol will provide Layer-2 and/or Layer-3 overlay services for virtual networks. Additionally, the protocol will enable Multi-Tenancy and Workload Mobility and address related Security and Management issues.

Today, VXLAN acts as the de-facto standard NVO3 encapsulation, with RFC7348 ratified in 2014. VXLAN was submitted as an informational IETF draft and then became an informational RFC. Even with its "informational" status, its versatility and wide adoption in Merchant and Custom Silicon made it a big success. Today, we can't think of Network Virtualization without VXLAN. When VXLAN paired up with BGP EVPN, a powerhouse was created that became RFC8365, a Network Virtualization Overlay solution using Ethernet VPN (EVPN) that is a standards-track IETF RFC.

Why Do We Need GENEVE if We Already Have What We Need?


When we look at the specifics of VXLAN, it was designed as a MAC-in-IP encapsulation over IP/UDP transport, which means the tunneled packets always carry an inner MAC header. While this is desirable for bridging cases, for routing it is unnecessary and could be optimized away in favor of better payload byte usage. Also, the inclusion of an inner MAC header makes signaling of MAC-to-IP bindings necessary, which requires either information exchanged in the control-plane or, much worse, flood-based learning.
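A quick back-of-the-envelope calculation makes the byte savings concrete. The sketch below assumes IPv4 outer headers, no VLAN tags, and no GENEVE options; the header sizes are the standard fixed lengths.

```python
# Back-of-the-envelope overhead comparison (illustrative assumptions:
# IPv4 outer header, no 802.1Q tags, no GENEVE options). VXLAN always
# carries an inner Ethernet header; GENEVE can carry a routed IP payload
# directly and reclaim those bytes for payload.

OUTER_ETH, OUTER_IP4, UDP = 14, 20, 8   # standard header lengths in bytes
VXLAN_HDR = 8                           # fixed VXLAN header
GENEVE_BASE = 8                         # fixed GENEVE header; options add more
INNER_ETH = 14                          # the header VXLAN cannot omit

vxlan_overhead = OUTER_ETH + OUTER_IP4 + UDP + VXLAN_HDR + INNER_ETH
geneve_routed_overhead = OUTER_ETH + OUTER_IP4 + UDP + GENEVE_BASE

print(vxlan_overhead, geneve_routed_overhead)   # 64 vs 50 bytes per packet
```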

Compare and Contrast VXLAN to GENEVE Encapsulation Format

Fast forward to 2020: GENEVE has been selected as the upcoming "standard" tunnel protocol. While the flexibility and extensibility of GENEVE incorporate the GRE, VXLAN, and GPE use-cases, new use-cases are being created on a daily basis. This is one of the most compelling but also most complex areas for GENEVE. GENEVE has a flexible option header format, which defines the length, the fields, and the content depending on the instruction set given by the encapsulating node (Tunnel Endpoint, TEP). While some fields are simple and static, such as those used for bridging or routing, the fields and format used for telemetry or security are highly variable to allow hop-by-hop independence.
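For the curious, the 8-byte GENEVE base header defined in RFC 8926 is simple to pack and parse; the variable-length TLV options would follow it. A minimal Python sketch:

```python
import struct

# Minimal sketch of the 8-byte GENEVE base header per RFC 8926:
#   Ver(2) | Opt Len(6, in 4-byte words) | O | C | Rsvd(6)
#   Protocol Type(16) | VNI(24) | Rsvd(8)
# Variable-length TLV options would follow the base header.

def pack_geneve(vni, proto=0x6558, opt_len_words=0, oam=False, critical=False):
    b0 = (0 << 6) | (opt_len_words & 0x3F)          # version 0
    b1 = (int(oam) << 7) | (int(critical) << 6)
    return struct.pack("!BBH", b0, b1, proto) + vni.to_bytes(3, "big") + b"\x00"

def parse_geneve(hdr):
    b0, b1, proto = struct.unpack("!BBH", hdr[:4])
    return {"version": b0 >> 6, "opt_len_words": b0 & 0x3F,
            "oam": bool(b1 & 0x80), "critical": bool(b1 & 0x40),
            "protocol": proto, "vni": int.from_bytes(hdr[4:7], "big")}

hdr = pack_geneve(vni=5001)   # 0x6558 = Transparent Ethernet Bridging
print(parse_geneve(hdr))
```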

While GENEVE is now an RFC, GBP (Group Based Policy), INT (In-band Network Telemetry) and other option headers are not yet finalized. However, the use-case coverage is about equal to what VXLAN is able to do today. Use cases like bridging and routing for Unicast/Multicast traffic, either in IPv4 or IPv6 or Multi-Tenancy, have been available for VXLAN (with BGP EVPN) for almost a decade. With GENEVE, all of these use-cases are accessible with yet another encapsulation method.

GENEVE Variable Extension Header

Because only a limited number of GENEVE Option Classes have been standardized and published so far, the intended interoperability is still pending. Nevertheless, GENEVE, in its extensibility as a framework and forward-looking technology, has great potential. Parity with today's existing use cases for VXLAN EVPN will need to be accommodated. This is how the IETF prepared BGP EVPN from its inception and more recently published the EVPN draft for GENEVE.

Cisco Silicon Designed with Foresight, Ready for the Future


While Network Virtualization is already mainstream, the encapsulating node or TEP (Tunnel Endpoint) can sit at various locations. Tunnel protocols have often focused on a Software Forwarder running on a simplified x86 instruction set, but mainstream adoption is usually driven by the presence of both Software and Hardware forwarders, the latter built into the switch's ASIC (Merchant or Custom Silicon). Even though integrated hybrid overlays are still in their infancy, the parallel use of Hardware (the Network Overlay) and Software (the Host Overlay) is widespread, either in isolation or as ships in the night. It is often simpler to upgrade the Software forwarder on an x86 server and benefit from a new encapsulation format. While this is generally true, the participating TEPs require consistency for connections to the outside world, and updating the encapsulation on such gateways is not a simple matter.

In the past, rigid Router or Switch silicon prevented fast adoption and evolution of Network Overlay technology. Today, modern ASIC silicon is more versatile and can adapt to new use cases as operations constantly change to meet new business challenges. Cisco is thinking and planning ahead to provide Data Center networks with very high performance, versatility, as well as investment protection. Flexibility for network virtualization and versatility of encapsulation was one of the cornerstones for the design of the Cisco Nexus 9000 Switches and Cloud Scale ASICs.

We designed the Cisco Cloud Scale ASICs to incorporate important capabilities, such as supporting current encapsulations like GRE, MPLS/SR and VXLAN, while ensuring hardware capability for VXLAN-GPE and, last but not least, GENEVE. With this in mind, organizations that have invested in the Cisco Nexus 9000 EX/FX/FX2/FX3/GX Switching platforms are just a software upgrade away from being able to take advantage of GENEVE.

Cisco Nexus 9000 Switch Family

While GENEVE provides encapsulation, BGP EVPN is the control-plane. As use-cases are generally driven by the control-plane, they evolve as the control-plane evolves, thus driving the encapsulation. Tenant Routed Multicast, Multi-Site (DCI) or Cloud Connectivity are use cases that are driven by the control-plane and hence ready with VXLAN and closer to being ready for GENEVE.

To ensure seamless integration into Cisco ACI, a gateway capability becomes the crucial base functionality. Beyond just enabling a new encapsulation with an existing switch, the Cisco Nexus 9000 acts as a gateway to bridge and route from VXLAN to GENEVE, GENEVE to GENEVE, GENEVE to MPLS/SR, or other permutations to facilitate integration, migration, and extension use cases.

Leading the Way to GENEVE


Cisco Nexus 9000 with a Cloud Scale ASIC (EX/FX/FX2/FX3/GX and later) has extensive hardware capabilities to support legacy, current, and future Network Virtualization technologies. With this investment protection, Customers can use ACI and VXLAN EVPN today while being assured to leverage future encapsulations like GENEVE with the same Nexus 9000 hardware investment. Cisco thought leadership in Switching Silicon, Data Center networking and Network Virtualization leads the way to GENEVE (available in early 2021).

Whether you are looking to make your way to Genève or to GENEVE, Cisco has invested in both, for the past, present, and future of networking.

Tuesday, 25 August 2020

Multi-Site Data Center Networking with Secure VXLAN EVPN and CloudSec

Transcending Data Center Physical Needs


Maslow's Hierarchy of Needs illustrates that humans must fulfill base physiological needs (food, water, warmth, rest) in order to pursue higher levels of growth. When it comes to the data center and Data Center Networking (DCN), meeting the physical infrastructure needs is the condition on which the next higher-level capabilities, safety and security, are constructed.

Satisfying the physical needs of a data center can be achieved through the concepts of Disaster Avoidance (DA) and Disaster Recovery (DR).

◉ Disaster Avoidance (DA) can be built on a redundant Data Center configuration, where each data center is its own Network Fault Domain, also called an Availability Zone (AZ).

◉ Building redundancy between multiple Availability Zones creates a Region.

◉ Building redundant data centers across multiple Regions provides a foundation for Disaster Recovery (DR).


Availability Zones within a Region

Availability Zones (AZ) are made possible with a modern data center network fabric with VXLAN BGP EVPN. The interconnect technology, Multi-Site, is capable of securely extending data center operation within and between Regions. A Region can consist of connected and geographically dispersed on-premise data centers and the public cloud. If you are interested in more details about DA and DR concepts, watch the Cisco Live session recording “Multicloud Networking for ACI and NX-OS Enabled Data Center Fabrics“.

With the primary basic need for availability through the existence of DA and DR in regions achieved, we can investigate data center Safety needs as we climb the pyramid of Maslow’s hierarchy.

Safety and Security: The Second Essential Need


The data center is, of course, where your data and applications reside: email, databases, websites, and critical business processes. With connectivity between Availability Zones and Regions in place, data is at risk of exposure once it moves outside the confines of on-premise or colocation facilities. That's because data transfers between Availability Zones and Regions generally have to travel over public infrastructure. The need for such transfers is driven by the requirement for highly available applications supported by redundant data centers. As data leaves the confinement of the data center via an interconnect, safety measures must ensure the Confidentiality and Integrity of these transfers to reduce the exposure to threats. Let's examine the protocols that make secure data center interconnects possible.

DC Interconnect Evolves from IPSec to MACSec to CloudSec


About a decade ago, MACSec or 802.1AE became the preferred method of addressing Confidentiality and Integrity for high-speed Data Center Interconnects (DCI). It superseded IPSec because it was natively embedded into the data center switch silicon (Cloud Scale ASICs). This enabled encryption at line rate with minimal added latency or packet size overhead. While these advantages were an advancement over IPSec, MACSec has one main shortcoming: it can only be deployed between two adjacent devices. When Dark Fiber or xWDM is available among data centers, this is not a problem. But often such a fully transparent and secure service is too costly or simply not available. In those cases, the choice was to revert to the more resource-consuming IPSec approach.

The virtue of MACSec paired with the requirements of Confidentiality, Integrity, and Availability (CIA) results in CloudSec. In essence, CloudSec is MACSec-in-UDP using Transport Mode, similar to ESP-in-UDP in Transport Mode as described in RFC3948. In addition to the specifics of transporting MACSec encrypted data over IP networks, CloudSec also carries a UDP header for entropy as well as an encrypted payload for Network Virtualization use-cases.
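Conceptually, the layering difference looks like this. The sketch below is schematic (field names and ordering are simplified, not an exact wire format): MACsec protects a single Layer-2 hop, while CloudSec tucks the MACsec-protected payload inside IP/UDP so it can cross a routed, multi-hop transport.

```python
# Schematic layering comparison (simplified, not an exact wire format).
# MACsec frames cannot cross a routed transport; CloudSec adds IP/UDP
# outer headers, with the UDP source port providing entropy for ECMP.

macsec_frame = ["eth", "macsec-sectag", "encrypted(payload)", "icv"]
cloudsec_packet = ["eth", "ip", "udp(entropy src port)",
                   "macsec-sectag", "encrypted(vxlan+payload)", "icv"]

print("MACsec   (adjacent devices only):", macsec_frame)
print("CloudSec (routable across WAN):  ", cloudsec_packet)
```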


CloudSec carries an encrypted payload for network virtualization.

Other less efficient attempts were made to achieve similar results using, for example, MACSec over VXLAN or VXLAN over IPSec. While secure, these approaches just stack encapsulations and incur higher resource consumption. CloudSec is an efficient and secure transport encapsulation for carrying VXLAN.

Secure VXLAN EVPN Multi-Site using CloudSec


VXLAN EVPN Multi-Site provides a scalable interconnectivity solution among Data Center Networks (DCN). CloudSec provides transport and encryption. The signaling and key exchange that Secure EVPN provides is the final piece needed for a complete solution.

Secure EVPN, as documented in the IETF draft “draft-sajassi-bess-secure-evpn” describes a method of leveraging the EVPN address-family of Multi-Protocol BGP (MP-BGP). Secure EVPN provides a similar level of privacy, integrity, and authentication as Internet Key Exchange version 2 (IKEv2). BGP provides the capability of a point-to-multipoint control-plane for signaling encryption keys and policy exchange between the Multi-Site Border Gateways (BGW), creating pair-wise Security Associations for the CloudSec encryption. While there are established methods for signaling the creation of Security Associations, as with IKE in IPSec, these methods are generally based on point-to-point signaling, requiring the operator to configure pair-wise associations.

A VXLAN EVPN Multi-Site environment enables any-to-any communication between sites. This full-mesh communication pattern requires the pre-creation of the Security Associations for CloudSec encryption. Leveraging BGP and a point-to-multipoint signaling method is more efficient, even though the Security Associations themselves remain pair-wise.
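The scaling argument behind this is simple combinatorics: pair-wise Security Associations grow quadratically with the number of Border Gateways, while point-to-multipoint BGP signaling requires each BGW to advertise its key material only once. A small sketch:

```python
# Combinatorics behind the signaling choice: a full mesh of N Border
# Gateways needs N*(N-1)/2 pair-wise Security Associations, but BGP
# point-to-multipoint signaling needs only one advertisement per BGW.

def pairwise_sas(n_bgws: int) -> int:
    return n_bgws * (n_bgws - 1) // 2

for n in (4, 8, 16):
    print(f"{n} BGWs: {pairwise_sas(n)} pair-wise SAs, {n} BGP advertisements")
```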

Secure VXLAN EVPN Multi-Site using CloudSec provides state-of-the art Data Center Interconnect (DCI) with Confidentiality, Integrity, and Availability (CIA). The solution builds on VXLAN EVPN Multi-Site, which has been available on Cisco Nexus 9000 with NX-OS for many years.

Secure VXLAN EVPN Multi-Site is designed to be used in existing Multi-Site deployments. Border Gateways (BGW) using CloudSec-capable hardware can provide the encrypted service to communicate among peers while continuing to provide the Multi-Site functionality without encryption to the non-CloudSec BGWs. As part of the Secure EVPN Multi-Site solution, the configurable policy enables enforcement of encryption with a “must secure” option, while a relaxed mode is present for backwards compatibility with non-encryption capable sites.

Secure VXLAN EVPN Multi-Site using CloudSec is available in the Cisco Nexus 9300-FX2 as per NX-OS 9.3(5). All other Multi-Site BGW-capable Cisco Nexus 9000s are able to interoperate when running Cisco NX-OS 9.3(5).

Configure, Manage, and Operate Multi-Sites with Cisco DCNM


Cisco Data Center Network Manager (DCNM), starting with version 11.4(1), supports the setup of Secure EVPN Multi-Site using CloudSec. The authentication and encryption policy can be set in DCNM’s Fabric Builder workflow so that the necessary configuration settings are applied to the BGWs that are part of a respective Multi-Site Domain (MSD). Since DCNM is backward compatible with non-CloudSec capable BGWs, they can be included with one click in DCNM’s web-based management console. Enabling Secure EVPN Multi-Site with CloudSec is just a couple of clicks away.