
Tuesday, 21 November 2023

Cisco DNA Center Has a New Name and New Features

Cisco DNA Center is not only getting a name change to Cisco Catalyst Center; it also offers lots of new features and add-ons in the API documentation. Let me tell you about some of them.

Version selection menu


The first improvement I want to mention is the API documentation version selection drop-down menu. You’ll find it in the upper left-hand corner of the page. When you navigate to the API documentation website, by default you land on the latest version of the documentation.


You can easily switch between different versions of the API documentation from that drop-down menu. Older versions of the API will still be named and referenced as Cisco DNA Center, while new and upcoming versions will reflect the new name, Cisco Catalyst Center.

Event catalog


The second addition to the documentation that I want to mention is the event catalog. We’ve had several requests from our customers and partners to have the event catalog for each version of Catalyst Center published and publicly available. I am happy to report that we have done just that. The event catalog can be found under the Guides section of the documentation.


Not only is there a list of all the events generated by Catalyst Center, but for each event we have general information, tags, channels, model schema, and REST schema.
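
If you prefer to explore the catalog programmatically, the same event information is exposed through the Events API. The following is a minimal sketch, assuming a reachable Catalyst Center, a token obtained from the auth endpoint, and the GET /dna/intent/api/v1/events resource as documented for your release; the hostname and token values are placeholders.

import requests

DNAC = "https://dnac.example.com"   # placeholder hostname
TOKEN = "<X-Auth-Token>"            # obtained from /dna/system/api/v1/auth/token

resp = requests.get(
    f"{DNAC}/dna/intent/api/v1/events",
    headers={"X-Auth-Token": TOKEN},
    params={"tags": "ASSURANCE"},   # this API expects a tags filter
    verify=False,                   # lab convenience; use proper certs in production
)
for event in resp.json():
    print(event.get("eventId"), "-", event.get("name"))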


List of available reports


Another popular request was to have a list of available reports generated by Catalyst Center published and easily referenced in the documentation. Under the Guides section you can now also find the Reports link, which contains a list of all available reports including the report name, description, and supported formats. By clicking on the View Name link, you can also see samples for each of the reports.


OpenAPI specification in JSON format


These are all nice extra features and add-ons. However, my favorite one must be the fact that you can now download the Catalyst Center OpenAPI specification in JSON format! This one has been a long time coming, and I’m happy to announce that we finally have it. You can find the download link under the API Reference section.
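
Once you have the JSON file, it is easy to explore programmatically. Here is a minimal sketch that summarizes the spec; the filename is a placeholder for whatever you saved the download as.

import json
from collections import Counter

# Load the downloaded spec (placeholder filename).
with open("catalyst-center-openapi.json") as f:
    spec = json.load(f)

info = spec.get("info", {})
print(info.get("title"), info.get("version"))

# Count operations per HTTP method across all documented paths.
methods = Counter(
    method
    for path_item in spec.get("paths", {}).values()
    for method in path_item
    if method in ("get", "post", "put", "delete", "patch")
)
print(f"{len(spec.get('paths', {}))} paths:", dict(methods))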


Net Promoter Score


We have also enabled NPS (Net Promoter Score) on the Catalyst Center API documentation site. As you navigate the website, a window will pop up in the lower right-hand corner of the page asking you to rate our docs.


Your feedback is most welcome


Please do take time to give us feedback on the documentation and tell us what you liked or what we can improve on.


Source: cisco.com

Thursday, 11 May 2023

Spend Less Time Managing the Network, More Time Innovating with the Network


As networks evolve to keep up with the requirements of a distributed hybrid workforce and the need for new B2B and B2C cloud applications, an increasingly complex workload for IT is an inevitable byproduct. Remote workers, collaborative applications, and smart building IoT devices have all added management challenges to the hybrid workplace network. IT teams, already responsible for network device onboarding, availability, and resilience, are taking on AIOps responsibilities for ensuring high application experience. They’re also picking up SecOps oversight for monitoring various endpoints for spoofing threats and malware intrusions. With this growing load of responsibilities, how is IT going to scale and not break?

The answer lies in the past as well as in the future. Twenty years ago, Cisco developed one of the first machine-learning toolsets to analyze vast quantities of telemetry collected from switches, routers, and access points to assist in technical problem resolution. The system, created by the Cisco Advanced Services team, was called Network Profile (NP). Built on top of one of the first network-specific data lakes, NP helped customers understand the current state of their networks and enabled Cisco technicians to quickly troubleshoot network issues.

Since then, Cisco has worked diligently to augment the intelligence inherent in the network. Today, the continuously evolving NP is an integral part of the Cisco CX Cloud and is tightly integrated with Cisco DNA Center. Cisco DNA Center Analytics, like NP and Site Analytics, and automations like the Machine Reasoning Engine, make network pros more effective by offloading repetitive, complex, and time-sensitive tasks that do not directly add new value to the organization.

A key value of applying Machine Learning and Artificial Intelligence engines in conjunction with volumes of operational telemetry is to do simple things simply and well, and thus enable less experienced NetOps technicians to handle a broader range of maintenance tasks.

Automating Compliance Checks


A great example of this intelligent automation lies in the area of compliance. Cisco DNA Center automates configuration checks of settings—such as certificates and SNMP—across hundreds of controllers. What is usually a time-consuming and tedious task is greatly simplified. Guided automations recommend fixes that IT can quickly implement with a single click. And since this scanning is always on, in real time, technicians don’t need to remember to set aside time every week to run a network compliance scan. That’s simplification!
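
For teams that want the same compliance data outside the GUI, the Intent API exposes it as well. Below is a minimal sketch, assuming the GET /dna/intent/api/v1/compliance resource as documented for your release; the hostname and token values are placeholders.

import requests

DNAC = "https://dnac.example.com"   # placeholder hostname
TOKEN = "<X-Auth-Token>"            # obtained from /dna/system/api/v1/auth/token

resp = requests.get(
    f"{DNAC}/dna/intent/api/v1/compliance",
    headers={"X-Auth-Token": TOKEN},
    verify=False,                   # lab convenience; use proper certs in production
)
# Flag anything that is not fully compliant for follow-up.
for item in resp.json().get("response", []):
    if item.get("complianceStatus") != "COMPLIANT":
        print(item.get("deviceUuid"), item.get("complianceStatus"))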

Simplifying Device Maintenance


Similarly, when managing thousands of networking devices across campuses, branches, and remote offices, what IT doesn’t know about lingering security issues forces technicians to be reactive rather than proactive. It takes time and expertise to keep up with PSIRT vulnerabilities and patches to network software on thousands of access points and switches.

Cisco DNA Center provides preventative measures for device maintenance. By connecting Cisco DNA Center to Cisco CX Cloud, fixes for known PSIRTs and software patches, identified through existing TAC cases, are shared automatically through a Cisco DNA Center dashboard with the IT teams operating the relevant infrastructure. The granularity of these notifications extends from controller OS images down to specific device configurations, so only features in use are included in notifications. As a result, instead of discovering after the fact that a network problem was caused by an issue with a known resolution, Cisco DNA Center proactively recommends an appropriate resolution even before a problem occurs. And if a configuration is not using any of the affected features, the controllers will bypass installing unnecessary patches. The result is complexity simplified.

Moving From Reactive to Preventative


Predictive analytics with DNA Center’s Trends and Insights dashboard is an AIOps tool for monitoring the network for changes and anomalies that, while not causing an immediate issue, could become a problem in the future. For example, early warning alerts for events like a gradual increase in wireless interference, a sudden increase in the number of devices connected to the same Access Point, or an IoT device that is pulling 20% more power from a switch can help IT take preventative actions before issues impact workforce performance or network availability. By identifying the signs of looming network problems, Cisco DNA Center keeps NetOps teams ahead of issues instead of constantly chasing them—the empowerment of being proactive versus reactive.

Figure 1. Out of complexity, simplicity with Cisco DNA Center AI/ML and Cisco Knowledgebase.

Optimizing the Network Fabric for Application Performance


Reducing complexity with AI/ML processes that assist IT in optimizing the network enables the best application experience for the workforce and customers. This is increasingly critical as applications are literally everywhere, and so are the people who rely on them to keep operations rolling and interact with the business. Gaining visibility into application usage everywhere in the distributed network enables IT to prioritize network resources for business-critical applications and deprioritize applications that are irrelevant to the business.


Take, for example, the fast-growing use of collaboration applications incorporating audio and video, screen sharing, recording, and translation. Cisco DNA Center AIOps features enable IT to proactively manage Microsoft Teams and Cisco WebEx performance. The Applications Dashboard in Cisco DNA Center displays the audio, video, and application share quality of experience for individual or team sessions on both platforms, enabling IT to quickly determine if a problem is inside or outside the network. The dashboard also provides remediation suggestions, such as increasing Wi-Fi coverage in specific areas—before operations are affected. If the problem is outside the enterprise network, IT can activate Cisco ThousandEyes WAN Insights directly from the dashboard to determine the internet bottleneck or provider causing the issue, along with alternate routing suggestions to fix the performance degradation.

Simplify Networks with a Foundation of Automation and Analytics


We are weaving AI and ML capabilities throughout Cisco software, controllers, and network fabrics to simplify the management of complex networks, including innovations like AI Network Analytics, Machine Reasoning Engine Workflows, Networking Chatbots, AI Spoofing Detection, Group-Based Policy Analytics, and Trust Analytics. These solutions assist IT in directing talent to more innovative projects that add value to the organization, such as securing the remote workforce, managing multi-cloud applications, and implementing a Secure Access Service Edge (SASE) for holistic security across the enterprise.

Cisco DNA Center enables IT to hide complexity and operate massive networks at scale, securely, and with agility. The value of AI/ML in Cisco DNA Center is in the ability of the network to enable an excellent experience for IT personas, which in turn provides an optimal experience for the workforce, along with trust in knowing the network is always watching and self-adjusting.

Source: cisco.com

Thursday, 6 April 2023

Cisco Catalyst IE3100 Rugged Series switches: Big benefits, small footprint


Now making its entrance is our latest and most compact industrial managed Ethernet switch, the Catalyst IE3100 Rugged Series. First announced in February 2023, these switches are now shipping and are ready to power your industrial networks, especially in space-constrained deployments, where every inch matters.

Part of a powerhouse family


The Catalyst IE3100 is the latest addition to our comprehensive family of industrial switches—a family that includes switches in various form factors, such as rack-mount, DIN rail mount, IP67 rated, and embedded. These ruggedized switches can resist extreme temperatures, shocks, vibration, and humidity. They are specifically developed for industrial IoT networks and deliver deterministic and extremely fast resiliency for uninterrupted operations.

The Catalyst IE3100 complements the Catalyst IE3x00 family of switches, which includes the Catalyst IE3200, IE3300, and IE3400. The Catalyst IE3x00 family of switches is DIN rail-mounted and runs the same modern IOS XE operating system that powers our Catalyst 9000 Series enterprise switches. This family features Gigabit Ethernet copper and fiber interfaces, fast convergence in case of failure, and additional enhanced features such as Layer 2 NAT, which makes them a popular choice in many verticals such as manufacturing, roadways, railways, utilities, ports and terminals, mining, and oil and gas.


Stand-out features


In addition to combining the power of Cisco IOS XE with built-in security and Cisco DNA Center for simplified management, the Catalyst IE3100 allows customers to use existing IT investments and knowledge while offering targeted functionality expected by industrial IoT customers, such as:

1. Compact size. Reduce engineering effort and cost when designing cabinets and planning other aspects of a deployment.

2. Fully managed. Administer with Cisco DNA Center for streamlined network management and increased network and device visibility while reducing downtime for routine maintenance.

3. IT-grade security. Extend IT practices into your industrial network with IOS XE built-in security, and seamlessly integrate with Cisco security solutions such as Cisco Identity Services Engine (ISE), Secure Network Analytics (Stealthwatch), and SecureX. Use 802.1X-based authentication, downloadable ACLs, and dynamic VLAN assignment for network segmentation to reduce cybersecurity risk.

4. OT mindset. Integrate effortlessly into your industrial network with the features you need, such as L2 NAT for machine builders, IT and OT redundancy protocols, support for EtherNet/IP (CIP), Modbus, PROFINET, SCADA, and more.

5. Flexible deployments. Take advantage of 6, 10, or 20 Gigabit Ethernet ports with two Gigabit SFP uplink ports or two Gigabit combo uplink ports.

Use cases


Too often, unmanaged switches find their way into industrial networks, but such equipment falls short of delivering what today’s enterprises need. Unmanaged switches cannot enforce policies or prioritize or segment traffic, their open ports create security risks, and network monitoring proves difficult.

Being fully managed, the Catalyst IE3100 controls which endpoints get connected, how data is prioritized for quality of service (QoS), and how traffic is separated by VLANs, making it a strong alternative to unmanaged switches. It is especially beneficial for machine builders who make complex, custom-built turnkey solutions, such as robots and conveyor belts, which have connected devices within their assemblies. End users will appreciate that these solutions can seamlessly fit within their networks with improved control and an enhanced security posture.

The Catalyst IE3100 is an excellent choice for deployments in confined spaces. Space is a common consideration in cabinets that house several pieces of control equipment in addition to networking, such as those used at roadway intersections, at manufacturing plants, next to railroad tracks, and in solar and wind farms. The ability to use smaller enclosures helps to reduce engineering effort and cost.

Planning space-constrained deployments in industrial settings no longer requires a compromise between size, manageability, and security. With the Cisco Catalyst IE3100 Rugged Series Switches, OT teams can connect more devices, secure them with confidence, and manage them with limitless agility.

The Catalyst IE3100 is the most compact switch in our managed Industrial Ethernet portfolio for your space-constrained use cases.

Source: cisco.com

Saturday, 1 April 2023

Good Friends Say Goodbye as Prime Infrastructure Sunsets

It is with great gratitude and appreciation that we wave goodbye to Cisco Prime Infrastructure. Prime Infrastructure has been helping customers manage their enterprise networks for more than a decade. The first Prime Infrastructure release was in 2011, and the latest and last version, Prime Infrastructure 3.10, was released in September 2021. On March 31, 2023, Cisco is announcing the End of Life (EoL) for Prime Infrastructure.

Figure 1 – Prime Infrastructure EoL timeline

Cisco Prime Infrastructure provided comprehensive management of wired/wireless access, campus, and branch networks, as well as rich visibility into end-user connection and assurance of application performance. Prime Infrastructure was the first enterprise product to combine the network management of both wired and wireless under a single management application. Cisco Prime Infrastructure also set and raised an industry bar for compliance and reporting functions for network management systems (NMS).

The rise of Intent-Based Networking (IBN), Software Defined Networking (SDN), automation, AI/ML (AIOps), and the need for visibility into user experience and application experience has given rise to Cisco DNA Center.

Cisco DNA Center


Cisco DNA Center is the next-generation platform and continues to raise the bar on what network management should be. Cisco DNA Center provides the network management capabilities previously delivered by Prime Infrastructure but delivers a wide range of new and additional capabilities:

Figure 2 – Cisco DNA Center Pillars

Complete network management system: Cisco DNA Center provides a full range of network visibility and monitoring capabilities complete with discovery, hierarchy, topology, and a comprehensive reporting engine. Additionally, Cisco DNA Center provides a comprehensive collection of “360 views” offering insightful perspectives into overall network health, device health, user health, and application health.

AI/ML analytics platform: Cisco DNA Center leverages Cisco’s industry-leading AI network analytics engine, which brings together machine learning, clustering, machine reasoning, visual analytics, and decades of Cisco networking expertise. This results in the ability to deliver Dynamic Baselining, Personalized Anomaly Detection, Trends, Insights, Comparative Analytics, and Predictive Analytics. This powerful combination puts Cisco DNA Center at the forefront of AIOps with unparalleled assurance capabilities.

Automation and Orchestration engine: Cisco DNA Center offers many automation workflows from device upgrades to configuration compliance, automated device onboarding, and troubleshooting. With Cisco DNA Center automation, customers have been able to gain efficiency, consistency, and scalability.

Software Defined Network (SDN): Cisco DNA Center enables customers to deploy Software-Defined Access (SDA), a fabric-based solution enabling a complete zero-trust model with macro- or micro-segmentation and eliminating many Layer 2 limitations and dependencies often seen in legacy networks.

Endpoint identification engine: Cisco DNA Center provides advanced capabilities to identify and profile endpoints on the network, delivering next-generation endpoint visibility with AI-driven analytics and network-driven deep packet inspection.

Migration Options


Prime Infrastructure customers have two migration paths:

◉ Customer Managed Solution with Cisco DNA Center
◉ Cloud SaaS Managed solution with the Cisco Meraki Dashboard

Figure 3 – Cisco Network Management Options

For Prime Infrastructure customers who have not migrated to Cisco DNA Center, now is the time to start your migration to the new platform. Cisco provides the ability to run Cisco DNA Center in three form factors:

◉ Physical Appliance
◉ Virtual Appliance hosted on AWS public cloud
◉ Virtual Appliance hosted on a private cloud using VMware/ESXi

Migration Tools


Cisco has made available several tools to ease the migration process:

PDART (Prime to DNA Assessment Readiness Tool): run this tool on your Prime Infrastructure deployment to check your migration readiness based on your specific Prime utilization.

Figure 4 – Cisco PDART Report Example

PDMT (Prime to DNA Migration Tool): this tool automates the migration process by migrating your hierarchy, devices, maps, AP locations, and various other data elements, accelerating the move from Prime to Cisco DNA Center and enabling customers to begin leveraging its value and advanced capabilities quickly.

Migration Services


Cisco offers a range of services to assist customers with the migration from Prime Infrastructure to Cisco DNA Center; for more information about migration services, please contact your account team.

Source: cisco.com

Saturday, 25 March 2023

Designing and Deploying Cisco AI Spoofing Detection – Part 2

AI Spoofing Detection Architecture and Deployment

Our previous blog post, Designing and Deploying Cisco AI Spoofing Detection, Part 1: From Device to Behavioral Model, introduced a hybrid cloud/on-premises service that detects spoofing attacks using behavioral traffic models of endpoints. In that post, we discussed the motivation and the need for this service and the scope of its operation. We then provided an overview of our Machine Learning development and maintenance process. This post will detail the global architecture of Cisco AISD, the mode of operation, and how IT incorporates the results into its security workflow.

Since Cisco AISD is a security product, minimizing detection delay is of significant importance. With that in mind, several infrastructure choices were designed into the service. Most Cisco AI Analytics services use Spark as a processing engine. However, in Cisco AISD, we use an AWS Lambda function instead of Spark because the warmup time of a Lambda function is typically shorter, enabling quicker generation of results and, therefore, a shorter detection delay. While this design choice reduces the computational capacity of the process, that has not been a problem thanks to a custom-made caching strategy that reduces processing to only new data on each Lambda execution.

Global AI Spoofing Detection Architecture Overview

Cisco AISD is deployed on a Cisco DNA Center network controller using a hybrid architecture of an on-premises controller tethered to a cloud service. The service consists of on-premises processes as well as cloud-based components.

The on-premises components on the Cisco DNA Center controller perform several vital functions. On the outbound data path, the service continually receives and processes raw data captured from network devices, anonymizes customer PII, and exports it to cloud processes over a secure channel. On the inbound data path, it receives any new endpoint spoofing alerts generated by the Machine Learning algorithms in the cloud, deanonymizes any relevant customer PII, and triggers any Changes of Authorization (CoA) via Cisco Identity Services Engine (ISE) on affected endpoints.

The cloud components perform several key functions, focused primarily on processing the high-volume data flowing from all on-premises deployments and running Machine Learning inference. In particular, the evaluation and detection mechanism has three steps:

1. Apache Airflow is the underlying orchestrator and scheduler to initiate compute functions. An Airflow DAG frequently enqueues computation requests for each active customer to a queuing service.

2. As each computation request is dequeued, a corresponding serverless compute function is invoked. Using serverless functions enables us to control compute costs at scale. This is a highly efficient multi-step, compute-intensive, short-running function that performs an ETL step by reading raw anonymized customer data from data buckets and transforming it into a set of input feature vectors to be used for inference by our Machine Learning models for spoof detection. This compute function leverages cloud providers’ common Function-as-a-Service architecture.

3. This function then also performs the model inference step on the feature vectors produced in the previous step, ultimately leading to the detection of spoofing attempts if they are present. If a spoof attempt is detected, the details of the finding are pushed to a database that is queried by the on-premises components of Cisco DNA Center and finally presented to administrators for action.

Figure 1: Schematic view of Cisco AISD cloud and on-premises components.

Figure 1 captures a high-level view of the Cisco AISD components. Two components, in particular, are central to the cloud inferencing functionality: the Scheduler and the serverless functions.

The Scheduler is an Airflow Directed Acyclic Graph (DAG) responsible for triggering the serverless function executions on active Cisco AISD customer data. The DAG runs at high-frequency intervals, pushing events into a queue and triggering the inference function executions. The DAG executions prepare all the metadata for the compute function. This includes determining customers with active flows, grouping compute batches based on telemetry volume, optimizing the compute process, etc. The inferencing function performs ETL operations, model inference, detection, and storage of spoofing alerts, if any. This compute-intensive process implements much of the intelligence for spoof detection. As our ML models get retrained regularly, this architecture enables the quick rollout—or rollback if needed—of updated models without any change or impact on the service.
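
To make the scheduling pattern concrete, here is a minimal, hypothetical Airflow DAG in the same spirit: a frequent run that enqueues one inference request per active customer. The queue URL, customer lookup, and schedule are illustrative placeholders, not the production service’s actual code.

import json
from datetime import datetime, timedelta

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/aisd-infer"  # placeholder

def enqueue_inference_requests():
    # Stand-in for the "determine customers with active flows" step.
    active_customers = ["cust-a", "cust-b"]
    sqs = boto3.client("sqs")
    for customer_id in active_customers:
        # One message per customer; a consumer invokes the serverless
        # compute function for each dequeued request.
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"customer": customer_id}),
        )

with DAG(
    dag_id="aisd_scheduler",
    start_date=datetime(2023, 1, 1),
    schedule_interval=timedelta(minutes=5),  # "high-frequency intervals"
    catchup=False,
) as dag:
    PythonOperator(task_id="enqueue", python_callable=enqueue_inference_requests)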

The inference function executions have a stable average runtime of approximately 9 seconds, as shown in Figure 2, which, as stipulated in the design, does not introduce any significant delay in detecting spoofing attempts.

Figure 2: Average lambda execution time in milliseconds for all Cisco AISD active customers between Jan 23rd and Jan 30th

Cisco AI Spoofing Detection in Action


In this blog post series, we described the internal design principles and processes of the Cisco AI Spoofing Detection service. However, from a network operator’s point of view, all these internals are entirely transparent. To start using the hybrid on-premises/cloud-based spoofing detection system, Cisco DNA Center Admins need to enable the corresponding service and cloud data export in Cisco DNA Center System Settings for AI Analytics, as shown in Figure 3.

Figure 3: Enabling Cisco AI Spoofing Detection is very simple in Cisco DNA Center.

Once enabled, the on-prem component in the Cisco DNA Center starts to export relevant data to the cloud that hosts the spoof detection service. The cloud components automatically start the process for scheduling the model inference function runs, evaluating the ML spoofing detection models against incoming traffic, and raising alerts when spoofing attempts on a customer endpoint are detected. When the system detects spoofing, the Cisco DNA Center in the customer’s network receives an alert with information. An example of such a detection is shown in Figure 4. In the Cisco DNA Center console, the network operator can set options to execute pre-defined containment actions for the endpoints marked as spoofed: shut down the port, flap the port, or re-authenticate the port from memory.

Figure 4: Example of alert from an endpoint that was initially classified as a printer.

Protecting the Network from Spoofing Attacks with Cisco DNA Center


Cisco AI Spoofing Detection is one of the newest security benefits provided to Cisco DNA Center operators with a Cisco DNA Advantage license. To simplify managing complex networks, AI and ML capabilities are being woven throughout the Cisco network management ecosystem of controllers and network fabrics. Along with the new Cisco AISD, Cisco AI Network Analytics, Machine Reasoning Engine Workflows, Networking Chatbots, Group-Based Policy Analytics, and Trust Analytics are additional features that work together to simplify management and protect network endpoints.

Source: cisco.com

Tuesday, 21 March 2023

Designing and Deploying Cisco AI Spoofing Detection – Part 1

The network faces new security threats every day. Adversaries are constantly evolving and using increasingly novel mechanisms to breach corporate networks and hold intellectual property hostage. Breaches and security incidents that make the headlines are usually preceded by considerable reconnaissance by the perpetrators. During this phase, typically one or several compromised endpoints in the network are used to observe traffic patterns, discover services, determine connectivity, and gather information for further exploits.

Compromised endpoints are legitimately part of the network but are typically devices that do not have a healthy cycle of security patches, such as IoT controllers, printers, or custom-built hardware running custom firmware or an off-the-shelf operating system that has been stripped down to run on minimal hardware resources. From a security perspective, the challenge is to detect when a compromise of these devices has taken place, even if no malicious activity is in progress.

In the first part of this two-part blog series, we discuss some of the methods by which compromised endpoints can get access to restricted segments of the network and how Cisco AI Spoofing Detection is designed to detect such endpoints by modeling and monitoring their behavior.

Part 1: From Device to Behavioral Model

One of the ways modern network access control systems allow endpoints into the network is by analyzing identity signatures generated by the endpoints. Unfortunately, a well-crafted identity signature generated from a compromised endpoint can effectively spoof the endpoint to elevate its privileges, allowing it access to previously unauthorized segments of the network and sensitive resources. This behavior can easily slip detection as it’s within the normal operating parameters of Network Access Control (NAC) systems and endpoint behavior. Generally, these identity signatures are captured through declarative probes that contain endpoint-specific parameters (e.g., OUI, CDP, HTTP, User-Agent). A combination of these probes is then used to associate an identity with endpoints.

Any probe that can be controlled (i.e., declared) by an endpoint is subject to being spoofed. Since, in some environments, the endpoint type is used to assign access rights and privileges, this type of spoofing attempt can lead to critical security risks. For example, if a compromised endpoint can be made to look like a printer by crafting the probes it generates, then it can get access to the printer network/VLAN with access to print servers that in turn could open the network to the endpoint via lateral movements.

There are three common ways in which an endpoint on the network can get privileged access to restricted segments of the network:

1. MAC spoofing: an attacker impersonates a specific endpoint to obtain the same privileges.

2. Probe spoofing: an attacker forges specific packets to impersonate a given endpoint type.

3. Malware: a legitimate endpoint is infected with a virus, trojan, or other types of malware that allows an attacker to leverage the permissions of the endpoint to access restricted systems.

Cisco AI Spoofing Detection (AISD) focuses primarily on the detection of endpoints employing probe spoofing, most instances of MAC spoofing, and some cases of Malware infection. Contrary to the traditional rule-based systems for spoofing detection, Cisco AISD relies on behavioral models to detect endpoints that do not behave as the type of device they claim to be. These behavioral models are built and trained on anonymized data from hundreds of thousands of endpoints deployed in multiple customer networks. This Machine Learning-based, data-driven approach enables Cisco AISD to build models that capture the full gamut of behavior of many device types in various environments.

Figure 1: Types of spoofing. AISD focuses primarily on probe spoofing and some instances of MAC spoofing.

Creating Benchmark Datasets


As with any AI-based approach, Cisco AISD relies on large volumes of data for a benchmark dataset to train behavioral models. Of course, as networks add endpoints, the benchmark dataset changes over time. New models are built iteratively using the latest datasets. Cisco AISD datasets for models come from two sources.

◉ Cisco AI Endpoint Analytics (AIEA) data lake. This data is sourced from Cisco DNA Center with Cisco AI Endpoint Analytics and Cisco Identity Services Engine (ISE) and stored in a cloud database. The AIEA data lake consists of a multitude of endpoint information from each customer network. Any personally identifiable information (PII) or other identifiers, such as IP and MAC addresses, are encrypted at the source before being sent to the cloud. This is a novel mechanism used by Cisco in a hybrid cloud tethered controller architecture, where the encryption keys are stored at each customer’s controller.
◉ Cisco AISD Attack data lake contains Cisco-generated data consisting of probe and MAC spoofing attack scenarios.

To create a benchmark dataset that captures endpoint behaviors under both normal and attack scenarios, data from both data lakes are mixed, combining NetFlow records and endpoint classifications (EPCL). We use the EPCL data lake to categorize the NetFlow records into flows per logical class. A logical class encompasses device types in terms of functionality, e.g., IP phones, printers, and IP cameras. Data for each logical class are split into train, validation, and test sets. We use the train split for model training and the validation split for parameter tuning and model selection. We use the test split to evaluate the trained models and estimate their generalization capabilities to previously unseen data.
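
As an illustration of the splitting step, the sketch below partitions records by logical class and carves out 70/15/15 train/validation/test splits with scikit-learn. The records variable and the split ratios are assumptions made for the example, not the production pipeline.

from collections import defaultdict
from sklearn.model_selection import train_test_split

# Assumed input: a list of (feature_vector, logical_class) pairs
# derived from the NetFlow + EPCL join described above.
by_class = defaultdict(list)
for features, logical_class in records:
    by_class[logical_class].append(features)

splits = {}
for logical_class, rows in by_class.items():
    # 70% train, then split the remaining 30% evenly into val/test.
    train, rest = train_test_split(rows, test_size=0.3, random_state=42)
    val, test = train_test_split(rest, test_size=0.5, random_state=42)
    splits[logical_class] = {"train": train, "val": val, "test": test}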

Benchmark datasets are versioned, tagged, and logged using Comet, a Machine Learning Operations (MLOps) and experiment tracking platform that Cisco development leverages for several AI/ML solutions. Benchmark Datasets are refreshed regularly to ensure that new models are trained and evaluated on the most recent variability in customers’ networks.

Figure 2: Benchmark Dataset and Data Split Creation

Model Development and Monitoring


In the model development phase, we use the latest benchmark dataset to build behavioral models for logical classes. Customer sites use the trained models. All training and evaluation experiments are logged in Comet along with the hyper-parameters and produced models. This ensures experiment reproducibility and model traceability and enables audit and eventual governance of model creation. During the development phase, multiple Machine Learning scientists work on different model architectures, producing a set of results that are collectively compared in order to choose the best model. Then, for each logical class, the best models are versioned and added to a Model Registry. With all the experiments and models gathered in one location, we can easily compare the performance of the different models and monitor the evolution of the performance of released models per development phase.

The Model Registry is an integral part of our model deployment process. Inside the Model Registry, models are organized per logical class of devices and versioned, enabling us to keep track of the complete development cycle—from benchmark dataset used, hyper-parameters chosen, trained parameters, obtained results, and code used for training. The models are deployed in AWS (Amazon Web Services) where the inferencing takes place. We will discuss this process in our next blog post, so stay tuned.

Production models are closely monitored. If the performance of the models starts degrading—for example, they start generating too many false alerts—a new development phase is triggered. That means that we construct a new benchmark dataset with the latest customer data and re-train and test the models. In parallel, we also revisit the investigation of different model architectures.

Figure 3: Cisco AI Spoofing Detection Model Lifecycle

Next Up: Taking Behavioral Models to Production in Cisco AI Spoofing Detection


In this post, we’ve covered the initial design process for using AI to build device behavioral models using endpoint flow and classification data from customer networks. In Part 2, “Taking Behavioral Models to Production in Cisco AI Spoofing Detection,” we will describe the overall architecture and deployment of our models in the cloud for monitoring and detecting spoofing attempts.

Source: cisco.com

Saturday, 7 January 2023

We’ve Doubled the Number of Cisco DNA Center Reservable Sandboxes


The Cisco DNA Center sandboxes have always been in high demand. For a while now we have had two always-on and two reservable sandboxes for Cisco DNA Center. With each of these sandboxes requiring at least one Cisco DNA Center appliance and several Catalyst 9000 switches, it’s easy to see why they were some of the most expensive sandboxes we have. (Hence, the limited number.) Expensive not only because of the hardware appliance and physical Catalyst 9000 switches, but also from a rack footprint, power, and cooling perspective.

Fully test all the features of the Cisco DNA Center platform including building SDA fabrics

Taking advantage of some virtualization secret sauce and holiday magic, the sandbox team has done a tremendous job and launched four Cisco DNA Center reservable sandboxes. Yes, you’ve read that right! We have doubled the number of Cisco DNA Center reservable sandboxes! All four of them run the latest version of code, 2.3.3.5 as of the writing of this blog, and have a Cisco ISE server so you can fully test all the features of the Cisco DNA Center platform, including building SDA fabrics. There are two CoreOS virtual machines attached to the access switches for traffic generation and client troubleshooting. We’ve also included a CentOS DevBox that provides a developer environment with Python, virtual environments, Ansible, and other tools already preinstalled.

Topology of the new reservable sandboxes

Test and develop your applications and integrations


The two always-on sandboxes are still there, available at all times. They will also be upgraded to 2.3.3.5 in January 2023. So, you now have six Cisco DNA Center sandboxes available for you to test and develop your own applications and integrations!
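
Getting started takes only a couple of REST calls. The sketch below authenticates against a sandbox and lists its devices; the hostname and credentials are the ones historically published for the always-on sandbox, so check the current sandbox details page before running it.

import requests
from requests.auth import HTTPBasicAuth

DNAC = "https://sandboxdnac.cisco.com"  # always-on sandbox (verify before use)

# Exchange basic-auth credentials for a short-lived API token.
token = requests.post(
    f"{DNAC}/dna/system/api/v1/auth/token",
    auth=HTTPBasicAuth("devnetuser", "Cisco123!"),  # published sandbox credentials
    verify=False,  # sandbox convenience; verify certificates in production
).json()["Token"]

# List the devices that the sandbox Cisco DNA Center manages.
devices = requests.get(
    f"{DNAC}/dna/intent/api/v1/network-device",
    headers={"X-Auth-Token": token},
    verify=False,
).json()["response"]

for d in devices:
    print(d["hostname"], d["managementIpAddress"], d["softwareVersion"])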

Next year will be an even bigger year for Cisco DNA Center sandboxes, with the team looking at migrating our current environments to a fully virtual setup, taking advantage of the recently announced Cisco DNA Center virtual appliance. This should allow us to better scale our Cisco DNA Center environments and provide even more sandboxes to you, our community.

No cost to you


If you want to discover Cisco DNA Center, explore the REST API interface it provides, or develop your first application or integration using Cisco DNA Center, these sandboxes, provided at no cost to you, are an invaluable resource!

Source: cisco.com

Sunday, 11 December 2022

Simplify the Adoption of Sustainable Technologies in the Workplace with Cisco DNA Center

Supporting sustainable technologies on a campus network is great for the planet and can substantially lower the cost of workplace operations. But adding hundreds of new IoT devices to a campus network can be a heavy lift for IT teams. Let’s take a look at the many innovations that Cisco has made to address sustainable technology, so that supporting a cleaner planet does not become a burden on IT teams.

For organizations, environmental sustainability is the practice of operating without producing a negative impact on the environment. Certainly, you’ve been hearing a lot about environmental sustainability and how IT can help to reduce your organization’s carbon footprint. When it comes to reducing the environmental impact of offices, factories, and warehouses, IT has a very big role to play. Gartner estimates that “By 2025, 75% of CIOs will be responsible for sustainable technology outcomes and 25% of CIOs will have compensation linked to their sustainable technology impact.” (Gartner Top Strategic Technology Trends for 2023: Sustainable Technology, ID G00774132)

Most IT departments will begin their sustainability work by verifying that IT technologies are being sourced from companies with “Net Zero” policies and programs. Cisco has documented all the steps we’ve taken to create a more sustainable solution for your network. Your next step will be to lower your environmental footprint by deploying new sensor technologies within your campus networks for initiatives such as energy efficiency, water usage, recycling, and site optimization. These technologies will be helpful in your sustainability objectives, but they can become a major source of complexity and time drain for IT teams. So, let’s look at some of the more popular technologies and the recent innovations in Cisco networking solutions that can make deploying them much easier.

Sustainable Technology is Coming to your Campus


The reason I can guarantee that you will soon be deploying sustainable technology is that there are substantial financial rewards for lowering your usage of electricity and material goods. Investments in sustainability are good for the planet and good for your bottom line. Sustainable technology, which is a category of smart building technologies, is a framework of networking solutions that enable businesses to achieve their sustainability goals. These goals usually include a reduction in environmental impact (power, water, recycling, and waste disposal) and optimization of office space and physical assets. Typical devices are automatic window shades that close in direct sunlight, water usage sensors, and of course UPoE+ LED lighting powered by Cisco Catalyst 9000 PoE ports and monitored by Cisco DNA Center. These are popular choices because PoE LED lighting can yield large savings quickly without a complex electrical installation, and water usage sensors are an easy way to detect water leaks, which are among the most common and most expensive of office accidents. The industry for smart building technology is diverse, and you will certainly find an IoT device or sensor for just about any project.

Figure 1: Architectures for smart buildings

The diagram in Figure 1 above shows the many categories of smart building technologies, as well as the infrastructure and applications that manage and operate the solution. Cisco has a great webpage on our portfolio for smart buildings where you can read more about the solution. Many of these technologies are complements or expansions to projects that your team already supports, but the impact of sustainable technology on your network will be substantial. There will surely be hundreds of new sensors, meters, and control devices on your campus network. Most of these will require PoE, and many will require local application servers. There are three categories of Cisco DNA Center innovations that facilitate supporting these devices: (1) connecting and securing, (2) powering, and (3) software management.

Connecting and Securing New IoT Devices 


I’m sure you’ve heard about Cisco DNA Center AI Endpoint Analytics. This feature is in the Policy section of Cisco DNA Center, and it automatically identifies all new endpoints that connect to the network using a cloud-based device manufacturers database. Endpoints are then added to the inventory dashboard, and checks are made using deep packet inspection (DPI) and machine learning to authenticate that each device is what it says it is. Each device is given a “Trust Score” between 1 (suspicious) and 10 (trustworthy), and you can view a list of the verifications that each device has passed. During the lifecycle of devices, Cisco DNA Center will continue to monitor device behavior, and any anomalies (such as sudden changes in communication protocols) will be flagged for attention. Additionally, Cisco DNA Center can be configured to automatically isolate devices that demonstrate behavior anomalies.

Besides security and posture information, endpoint inventory includes the manufacturer, model, OS type, software version, and other management information. You can even register the device with the manufacturer within Cisco DNA Center, and if a software upgrade is available, you will be advised right inside the dashboard. The comprehensive dashboard gives you everything you need to connect, secure, and manage the many new IoT devices on your network.

Figure 2: AI endpoint analytics aggregates network data to identify endpoints.

Powering IoT Devices and Managing PoE Capacity


As more PoE devices connect to your network, understanding power usage and availability per branch office and per switch will become critical. The PoE Analytics dashboard in Cisco DNA Center gives you quick and easy visibility into your PoE usage everywhere. You can see the status of PoE consumption across your organization: by branch, building, individual switch, or even by type of device. You can view the total power budget available in any switch, as well as the allocated power, remaining power, and load. You can verify the actual amount of power being drawn by each device—this is critical since many IoT devices pull more power than their manuals indicate. During the lifecycle of these devices, PoE Analytics monitors spikes in power and pushes alerts for any anomaly to the main Cisco DNA Center Assurance dashboard. Any Cisco DNA Center alert can be exported to ServiceNow (ITSM) or PagerDuty, and PoE alerts are good candidates for immediate attention. The PoE Analytics dashboard in Cisco DNA Center enables you to plan and manage the power of your IoT devices anywhere in your network.

Figure 3: PoE Analytics facilitates managing power for IoT devices

Edge Compute for Device Software


Another challenge you will likely encounter is the performance of the server software that controls these IoT devices. In many cases, this software is located in the cloud, and the time spent managing it will be minimal. However, some of the more complex sensors may recommend that the server software be installed on-premises for improved performance. This requires either a server in your wiring closet or small Raspberry Pi devices distributed around the campus.

Instead of deploying additional hardware on-site, Cisco DNA Center can help you run these IoT applications on your Catalyst 9000 switches. Cisco Application Hosting on the Catalyst 9000 series of switches extends cloud applications to the edge of the network, enabling data processing closer to the source for much-improved performance of low-power IoT devices. The app hosting framework inside Catalyst 9000 switches runs off-the-shelf Docker apps as separate Linux processes, so they do not affect the switch’s IOS XE performance or security. Installing an application has been streamlined with Cisco DNA Center’s App Hosting Automation dashboard. Simply drag and drop the application into the dashboard and it loads into Cisco DNA Center’s app hosting library. Then choose the switches where you want the application installed and push it out.

Figure 4: Using Cisco DNA Center to install apps on your Catalyst 9000 switches

Deploying smart building technology to meet your company goals for sustainability and cost optimization will be a big trend in 2023. Training your staff on Cisco DNA Center will enable you to manage this new technology while maximizing your IT staff’s productivity.

Source: cisco.com

Friday, 11 November 2022

Cisco Champions the Powerful, Evolving Networking Software Stack


With the interconnection of billions of devices in public and private networks and many applications and services moving to the cloud, software is increasingly becoming independent of and abstracted from hardware. At public cloud vendors like Amazon Web Services, Google Cloud Platform, and Microsoft Azure, hardware has been commoditized and software has taken center stage.

At Cisco, resellers and enterprise customers put complex solutions together using our products. The integration of switches, routers, and other gear with software used to require up to a one-year qualification cycle. But with the cloud providers, it’s immediate. Today, more native cloud concepts have been added to Cisco IOS XE software. Quarter by quarter, our enterprise software is becoming more efficient and cost-effective, more automated, and more programmable.

From Physical to Virtual to Cloud Native 


The first incarnation of Cisco enterprise cloud-enabled products was the virtualization of physical hardware devices in the cloud as virtual machines. They had all the existing concepts and features customers were used to in existing physical Cisco platforms.

In recent years we’ve been moving from physical to virtual to cloud-native products. As customers are becoming more aware and ready to consume cloud-native features, Cisco IOS XE is being enriched to provide those features. At 190 million lines of code (more than 300 million when vendor software development kits, or SDKs, and open-source libraries are added), Cisco IOS XE runs 80+ platforms for access, distribution, core, wireless, and WAN layers. It facilitates a myriad of combinations of hardware and software, forwarding, and physical and virtual form factors.

Why Cisco? 


Prospective Cisco customers and competitors may ask, why spend $5000 for an enterprise switch when you can spend $1000? The answer is that our customers know a cheaper switch may lack the features they need. Less expensive gear will also potentially add to their maintenance costs because the components may not be as good as Cisco’s.

Another reason to buy Cisco is the breadth of our enterprise portfolio. Any one company can do one vertical market well. With IOS XE, we have integrated everything across the networking software stack and across the entire enterprise network, and we’re working to keep it simple across multiple network domains.

Efficiency and Cost-effectiveness 


With networking becoming increasingly feature-rich and complex, simpler networking software translates to greater efficiency, a smaller headcount, and fewer onsite visits to fix problems. For example, Cisco IOS XE provides simplified app hosting using a Docker image in a container and deployment using device controller tools. It supports third-party, off-the-shelf applications built using Linux toolchains that allow business apps to run at the network edge.

Other examples include the simplification of development, debugging, and device validation with Cisco Platform Abstraction (CPA) and unified software tracing, which integrates traces from software running anywhere in a network for more complete visibility into 100+ processes in real time. Another example of Cisco IOS XE simplicity is virtualization technology that runs over optical fiber, enabling switches to be physically located up to thousands of miles away from each other.

The Power of Automation 


Cisco IOS XE is becoming more and more self-driving. Cisco developers are increasingly taking away the manual tasks required to manage the network by automating them. That makes networks easier and cheaper to maintain and faster to debug.

Examples include the automation of image upgrades using Cisco DNA Center and support for programmable microservices to replace manual device upgrades, repurposing, and management. Other automated processes include streaming telemetry and analytics in all layers of software that run at the speed of events observed (e.g., faster than two million route updates per second) to handle the huge scale of networking operations.

Programmability 


Systems administrators in enterprise companies are constantly upgrading, repurposing, and managing thousands of switches. An advanced networking software stack must be able to manage multi-vendor networks using native and open-source data models. Cisco IOS XE supports a suite of Google Remote Procedure Call (gRPC)-based microservices that simplify and lighten workloads with programmability. They allow administrators to programmatically manage Cisco enterprise devices.
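
As a small illustration of that model-driven approach, here is a sketch of a gNMI Get against an IOS XE device using the open-source pygnmi client. The device address, port, credentials, and YANG path are placeholders, and gNMI must first be enabled on the device.

from pygnmi.client import gNMIclient

with gNMIclient(
    target=("switch.example.com", 57400),  # placeholder device and gNMI port
    username="admin",                      # placeholder credentials
    password="secret",
    insecure=True,                         # lab only; use TLS in production
) as gc:
    # Read interface counters from the OpenConfig model.
    reply = gc.get(path=["openconfig-interfaces:interfaces/interface/state/counters"])
    print(reply)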

The IOS XE Development Environment  


A lot of enterprise software takes years to develop. The Cisco software development environment rolls out new solutions in months.

Developers spend 60-70% of their time on supporting software rather than application logic. The IOS XE development environment is automating as many common capabilities (like show commands, tracing, telemetry, export for dashboards, hand-wiring HA code, testing base ISSU compatibility checks, and mocking for unit tests) as possible to avoid the need to hand code them. With hand coding, every one of these features would require developers to generate two to three times as much code. Hand coding is also not amenable to automated, flexible deployments and, on the current development trajectory, will not fit into the low-footprint devices we ship.

The Cisco Enterprise Networking software development team works at a solution level, conducting pre-qualification testing and providing the tools to control an entire enterprise network from a single dashboard.

Source: cisco.com

Saturday, 5 November 2022

Battle of the Fabrics – The Road to a Future Ready Simple Network

The Evolution of Enterprise Networks for Campus


Digital transformation is creating new opportunities in every industry. In healthcare, doctors can monitor patients remotely and leverage medical analytics to predict health issues. In education, technology enables a connected campus and more personalized, equal access to learning resources. In retail, shops can provide a seamless, engaging experience in-store and online using location awareness. In finance, technology enables users to securely bank anywhere, anytime, using the device of their choice. In today’s world, digital transformation is essential for businesses to stay relevant.

These digital transformations have required more from networks than ever before. Over time, campus design has been forever changed by the additional demands on the network, each requiring more capabilities and flexibility than previous network designs. Over the past ten years, the enterprise network has continued to evolve from traditional designs to enterprise Fabrics that resemble a service provider design and encompass an Underlay and Overlay.

Fundamentally, it’s essential to understand what typical IT departments, even those segmented within organizations, are attempting to achieve. Ultimately, each company has an IT department to deliver the applications that the company relies on to achieve some aim, whether for the public good or for monetary reasons, in industries ranging from manufacturing to retail to finance and beyond. If you look at the core ask, these organizations want a service delivered at some service level to ensure business continuity. For that reason, when the organization introduces new applications or devices, the network needs to adopt these new entities flexibly and securely and roll the changes out simultaneously.

Additionally, more emphasis is being placed on pushing configuration changes quickly, accurately, securely, and at scale while balancing that with accountability. Automation and orchestration are critical to the network of the future, and the ability to tie them into a platform that not only applies configuration but also measures success through both application and user experience is fundamental.

For any organization to successfully transition to a digital world, investment in its network is critical. The network connects all things and is the cornerstone where digital success is realized or lost. The network is the pathway for productivity and collaboration and an enabler of improved end-user experience. And the network is also the first line of defense in securing enterprise assets and intellectual property.

Essentially, everyone in networking is looking for the easy button. We all want to reduce the number of devices and the complexity while maintaining the flexibility to support the business’s priorities from both an application and endpoint perspective. If we can build the highly available network of the future, one that is easily extensible, flexible enough to meet our needs, fully automated, and rich in telemetry, then perhaps we would head toward that nirvana.

A Fabric can be that solution and is the road to a future-ready, simple network. We remove the reliance on 15 to 20 protocols in favor of 3 to simplify operational complexity. We fully integrate all wired and wireless access components and utilize the bandwidth available on many links to support future technologies like Wi-Fi 6E and beyond. We bind policy into the ecosystem and use the network to apply and enforce that policy. We learn intrinsically from the network through telemetry and use Artificial Intelligence and Machine Learning to solve issues in a prompt and even automated way. We will discuss all these concepts in more detail in the next couple of sections.

Fabric Overview


Figure 1. Fabric Concepts with Underlay and Overlay
A Fabric is simply an Overlay network. Overlays are created through encapsulation, which adds one or more additional headers to the original packet or frame. An Overlay network creates a logical topology that virtually connects devices, built over an arbitrary physical Underlay topology.

In an idealized, theoretical network, every device would be connected to every other. That way, any connectivity or topology imagined could be created. While this theoretical network does not exist, there is still a technical desire to connect all these devices in a full mesh. This is where the term Fabric comes from: it is a cloth where everything is connected. In networking, an Overlay (or tunnel) provides this logical full-mesh connection. We then automate the build of these networks of the future using fewer protocols, replacing or eliminating older Layer 2/Layer 3 protocols (often 15 to 20 of them) with as few as 3. This allows a simple, flexible, fully automated approach in which both wired and wireless can be incorporated into the Overlay.

Underlay

The Underlay network is defined by the physical switches and routers used to deploy the Fabric. All network elements of the Underlay must establish IP connectivity via a routing protocol. Although the Fabric Underlay supports any arbitrary network topology, the Underlay implementation for a Fabric typically uses a well-designed Layer 3 foundation that includes the campus edge switches, known as a Layer 3 Routed Access design. This ensures the network’s performance, scalability, resiliency, and deterministic convergence.

The Underlay switches provide the physical connectivity for users and endpoints. However, end-user subnets and endpoints are not part of the Underlay network; they are instead part of the automated Overlay network.

Overlay

An Overlay network is a logical topology used to virtually connect devices and is built over an arbitrary physical Underlay topology. The Fabric Overlay network is created on top of the Underlay network through virtualization, creating Virtual Networks (VN). The data, traffic, and control plane signaling are contained within each Virtual Network, maintaining isolation among the networks and independence from the Underlay network. Multiple Overlay networks can run across the same Underlay network through virtualization.

Virtual Networks

Fabrics provide Layer 3 and Layer 2 connectivity across the Overlay using Virtual Networks (VNs). Layer 3 Overlays emulate an isolated routing table and transport Layer 3 packets over the Layer 3 network. This type of Overlay is called a Layer 3 Virtual Network: a virtual routing domain analogous to a Virtual Routing and Forwarding (VRF) table in a traditional network.

Layer 2 Overlays emulate a LAN segment and transport Layer 2 frames over the Layer 3 network. This type of Overlay is called a Layer 2 Virtual Network. Layer 2 Virtual Networks are virtual switching domains analogous to a VLAN in a traditional network.

Each frame from an endpoint within a VN is forwarded in the encapsulated tunnel toward its destination, much as older designs used labels to encapsulate traffic in MPLS networks. To determine where the destination is, we need some form of tracking capability that tells us where the target is and where to forward the packet. This is the job of the Fabric’s Control Plane. In older MPLS networks, including those used by service providers, the control plane was a combination of LDP/TDP for propagating labels and BGP, augmented to separate routing into various VNs.

Control Plane

To forward traffic within each Overlay, we need a way of mapping where sources and destinations are located. Typically, the IP address and MAC address associated with an endpoint define its identity and location in the network. The IP address identifies, at Layer 3, who and where the device is on the network. At Layer 2, the MAC address can also be used within broadcast domains for host-to-host communications. This is commonly referred to as addressing following topology. While an endpoint’s location in the network will change, who this device is and what it can access should not have to change.

Additionally, the ability to reduce fault domains and remove Spanning Tree Protocol (STP) is a big differentiator driving the need for routed access, removing the reliance on technologies that often had slower convergence times. To give a Layer 3 routed network the same kind of capabilities, we first need to track those endpoints and then forward traffic between them, and off the network when internet connectivity is needed.

This is the role and function of the Control Plane, whose job is to track Endpoint Identifiers (EIDs), more commonly referred to simply as endpoints within a Fabric Overlay. This allows the Fabric to forward traffic in an encapsulated packet, separating it from the other VNs and thus automatically providing Macro Segmentation, while allowing it to traverse the Fabric to the destination. There are differing Fabrics, and each Fabric technology utilizes some form of Control Plane to centralize this mapping system, which both the border and edge nodes rely on. Each technology has pros and cons, which become caveats we must adhere to when designing and choosing between Fabric technologies.

Locator/ID Separation Protocol (LISP) 

Cisco Software-Defined Access (Cisco SD-Access) utilizes the Locator/ID Separation Protocol (LISP) as its Control Plane protocol. LISP simplifies network operations through mapping servers that resolve endpoints within a Fabric. One benefit of this approach is that resolved prefixes are not installed in the Routing Information Base, so the impact on edge switches, which have smaller memory and CPU capabilities than the larger core devices, is minimal, allowing us to extend the Fabric right down to the edge.

LISP, ratified in RFC 6830, separates identity and location through a mapping relationship between two namespaces: an EID in relationship to its Routing LOCator (RLOC). These EID-to-RLOC mappings are held in mapping servers, which are highly available throughout the Fabric and which resolve EIDs to RLOCs the same way Domain Name System (DNS) servers resolve web addresses, using a pull-type update. This allows for greater scale when deploying the protocols that make up the Fabric’s Control Plane, and lets us fully utilize the capabilities of both Virtual Networks (namespaces) and encapsulation or tunneling. Traffic is encapsulated from end to end, enabling consistent IP addressing across the network behind multiple Layer 3 anycast gateways on multiple edge switches. Thus, instead of a push from a routing protocol, conversational learning occurs: forwarding entries are populated in Cisco Express Forwarding only where they are needed.

Figure 2. LISP Control Plane Operation

Instead of a traditional routing-based decision, the Fabric devices query the control plane node to determine the routing locator associated with the destination address (the EID-to-RLOC mapping) and use that RLOC as the traffic destination. If the destination routing locator cannot be resolved, the traffic is sent to the default Fabric border node. The response received from the control plane node is stored in the LISP map cache, which drives the Cisco Express Forwarding (CEF) table and is installed in hardware. This gives us an optimized forwarding table without needing routing protocol updates, saving CPU and memory.
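Here is a toy Python sketch of that lookup path: check the local map cache first, pull from the control plane on a miss, and fall back to the default border when resolution fails. The mappings and addresses are invented for illustration; this is not a real LISP implementation.

MAP_SERVER = {                     # control plane node's EID-to-RLOC mappings
    "10.1.1.10": "192.168.0.2",    # endpoint EID -> edge-node RLOC
    "10.1.2.20": "192.168.0.3",
}
DEFAULT_BORDER_RLOC = "192.168.0.1"

map_cache = {}                     # per-switch LISP map cache driving CEF

def resolve_rloc(eid: str) -> str:
    """Return the RLOC to encapsulate toward for a destination EID."""
    if eid in map_cache:           # cache hit: no query needed
        return map_cache[eid]
    rloc = MAP_SERVER.get(eid)     # pull-based map request on a miss
    if rloc is None:               # unresolvable: send to the border
        return DEFAULT_BORDER_RLOC
    map_cache[eid] = rloc          # install in the cache (and hardware)
    return rloc

print(resolve_rloc("10.1.1.10"))   # 192.168.0.2 (cached afterward)
print(resolve_rloc("8.8.8.8"))     # 192.168.0.1 (default border)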

Border Gateway Protocol (BGP) 

Conversely, Border Gateway Protocol (BGP), which has been heavily augmented over the years, was initially designed for routing between organizations across the internet. Kirk Lougheed of Cisco and Yakov Rekhter of IBM co-authored BGP RFC 1105 in 1989, following an Internet Engineering Task Force (IETF) conference. Cisco has been heavily invested in the innovation, maintenance, and adoption of the protocol suite ever since and, over the years, has helped design and add various capabilities to its toolset. BGP forms the core routing protocol of many service provider networks, primarily because of its policy-based routing approach. BGP routes are installed in the Routing Information Base (RIB) of the Fabric’s network devices. The protocol provides updates to a full mesh of BGP nodes in a push-type fashion. While updates can be controlled via policy, by default all routes are typically shared.

Since BGP consumes space in the RIB, let’s evaluate this further, as the implications are extensive. In a dual-stack network (IPv4 and IPv6 enabled), each endpoint consumes two entries for IPv4: the MAC address and the IPv4 host address as its network prefix. That is effectively one network prefix with two EID entries per endpoint for IPv4. In IPv6, each EID has a link-local address, a host address, and a multicast-type entry alongside the network prefix; each IPv6 address consumes two entries, so we have another four entries per endpoint. All of these entries are needed in the RIB on every BGP-enabled node within the Fabric, as it is a full-mesh design. Additionally, the routing protocol must maintain those adjacencies and update each peer as endpoints traverse the Fabric. Because the BGP control plane processes every update, CPU and memory requirements rise as EID entries change or move within the Fabric.

Figure 3. BGP Protocol

In the figure above, you can see that utilizing BGP as the control plane requires the edge device to first maintain routing adjacencies, process updates using the BGP best-path algorithm, and then install the update in the Forwarding Information Base (FIB) within the CEF table.

Most access switches, called Edge Nodes within a Fabric, have smaller RIB capacity than the cores they peer with. Typically you will see 32,000 entries available on most current switching lines used as Edge Nodes. This capacity is quickly consumed by the number of addresses per endpoint, leaving room for fewer devices unless we employ policies and filtering. Thus, to accommodate scale, we need policy, which means modifying BGP for use in a Fabric. As devices roam throughout the network, it is important to understand that updates for each device will be propagated by BGP to every node in that full mesh. To use our DNS analogy, each roaming event forces the equivalent of a DNS zone transfer rather than a specific DNS query.
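A quick back-of-the-envelope check in Python, using the per-endpoint entry counts from the paragraphs above, shows how fast a 32,000-entry RIB is consumed:

ipv4_entries_per_endpoint = 2      # MAC plus IPv4 host entry
ipv6_entries_per_endpoint = 4      # link-local, host, and related entries
entries_per_dual_stack_endpoint = (
    ipv4_entries_per_endpoint + ipv6_entries_per_endpoint
)                                  # 6 entries per endpoint

edge_node_rib_capacity = 32_000    # typical Edge Node RIB scale

print(edge_node_rib_capacity // entries_per_dual_stack_endpoint)
# ~5,333 dual-stack endpoints before policy and filtering become mandatory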

Another approach is to terminate BGP routing at the larger, more powerful core and distribution switches and resort to Layer 2 trunks below. Here we would utilize STP, which has slightly slower convergence in the event of link failures. It can be tuned, but the network ends up with less reliability and availability than other solutions. As soon as we rely on those Layer 2 protocols, our Fabric has diminished benefits, and we have not achieved the goal of simplification.

Data Plane

Forwarding traffic within each Overlay once sources and destinations are located is the role of the Data Plane. Traffic in Overlays utilizes encapsulation, and many forms of it have been used across use cases from large enterprises to service provider networks the globe over. In service provider networks, a typical encapsulation is Multi-Protocol Label Switching (MPLS), which encapsulates each packet and utilizes a labeling method to segment traffic. The labeling in MPLS networks was later modified to simplify convergence issues through the use of Segment Identifiers (SIDs) for Segment Routing. These had several advantages in convergence over LDP-learned labels: SIDs were propagated within the IGP routing updates of both OSPF and IS-IS, which was far superior to the hop-by-hop convergence of LDP, which converged only after the IGP came up and was known to cause issues.

Figure 4. MPLS Header Explained

Within enterprise Fabrics, we typically utilize Virtual Extensible LAN (VXLAN). VXLAN is an encapsulation protocol for tunneling data packets, transporting the original packets unchanged across the network. This protocol-in-protocol approach has been used for decades to allow lower-layer or same-layer protocols (in OSI-model terms) to be carried through tunnels, creating Overlays like the pseudowires used in xconnect.

VXLAN is a MAC-in-IP encapsulation method. It provides a way to carry lower-layer data across the higher Layer 3 infrastructure. Unlike routing protocol tunneling methods, VXLAN preserves the original Ethernet header of the frame sent from the endpoint. This allows the creation of an Overlay at Layer 2 and at Layer 3, depending on the needs of the original communication. For example, Wireless LAN communication (IEEE 802.11) uses Layer 2 datagram information (MAC addresses) to make bridging decisions without a direct need for Layer 3 forwarding logic.

Figure 5. Fabric VXLAN (VNI) Encapsulation Overhead

Any encapsulation method creates additional MTU (maximum transmission unit) overhead on the original packet. As shown in Figure 5 above, VXLAN encapsulation uses a UDP transport. Along with the VXLAN and UDP headers used to encapsulate the original packet, an outer IP and Ethernet header are necessary to forward the packet across the wire. At a minimum, these extra headers add 50 bytes of overhead to the original packet.
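The arithmetic is easy to verify. The sketch below tallies the outer headers and packs the 8-byte RFC 7348 VXLAN header with Python's struct module; the VNI value is arbitrary.

import struct

# Outer headers that Fabric VXLAN adds around the original frame.
outer_ethernet = 14    # dst MAC + src MAC + EtherType
outer_ipv4 = 20        # standard IPv4 header, no options
outer_udp = 8          # source/destination port, length, checksum
vxlan = 8              # flags, reserved, 24-bit VNI, reserved

print(outer_ethernet + outer_ipv4 + outer_udp + vxlan)  # 50 bytes

# The 8-byte VXLAN header itself: the I flag (0x08) marks a valid VNI,
# and the VNI occupies the upper 24 bits of the second word.
def vxlan_header(vni: int) -> bytes:
    return struct.pack("!II", 0x08 << 24, vni << 8)

print(vxlan_header(5000).hex())  # 0800000000138800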

Cisco SD-Access and VXLAN

Cisco SD-Access places additional information in the Fabric VXLAN header, including alternative forwarding attributes that can be used to make policy decisions. Each Overlay network is identified by a VXLAN network identifier (VNI): Layer 2 Overlays by a VLAN-to-VNI correlation (L2 VNI), and Layer 3 Overlays by a VRF-to-VNI correlation (L3 VNI).

Figure 6. Fabric VXLAN Alternative Forwarding Attributes

As you may recall, Cisco TrustSec decoupled access control from strict reliance on IP addresses and VLANs by using logical groupings in a method known as Group-Based Access Control (GBAC). The goal of Cisco TrustSec technology is to assign an SGT value to the packet at its ingress point into the network; an access policy elsewhere in the network is then enforced based on this tag. An SGT is a form of metadata: a 16-bit value assigned by ISE in an authorization policy when a user, device, or application connects to the network. We can encode the SGT and VRF values into the header and carry them across the Overlay. Carrying the SGT within the VXLAN header allows us to use it for egress enforcement anywhere in the network and provides both Micro and Macro Segmentation capability.

Figure 7. VXLAN-GBP Header

Cisco SD-Access Fabric uses the VXLAN data plane to transport the full original Layer 2 frame and uses LISP as the control plane to resolve endpoint-to-location (EID-to-RLOC) mappings. Cisco SD-Access Fabric repurposes sixteen (16) of the reserved bits in the VXLAN header to transport up to 64,000 SGTs, using a modified format called VXLAN-GPO (sometimes VXLAN-GBP) that is backward compatible with RFC 7348.
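As an illustration of that header trick, the sketch below packs a VXLAN-GBP-style header with the G and I flags set and a 16-bit SGT in the formerly reserved bits, following the draft-smith-vxlan-group-policy layout. The SGT and VNI values are invented; real encapsulation happens in hardware.

import struct

def vxlan_gbp_header(vni: int, sgt: int) -> bytes:
    """Pack a VXLAN-GBP header carrying an SGT (illustrative only)."""
    # First 16 bits of flags: G (0x80) marks the Group Policy ID as
    # present, I (0x08) marks a valid VNI; D/A policy bits left clear.
    flags = (0x80 | 0x08) << 8
    # 16-bit Group Policy ID (the SGT), then 24-bit VNI plus a reserved byte.
    return struct.pack("!HHI", flags, sgt, vni << 8)

# SGT 17 (say, "Employees") inside Layer 3 VNI 4099:
print(vxlan_gbp_header(4099, 17).hex())  # 8800001100100300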

BGP-EVPN and VXLAN

VXLAN is defined in RFC 7348 as a way to Overlay a Layer 2 network on top of a Layer 3 network. Each Overlay network is called a VXLAN segment and is identified by a 24-bit VXLAN network identifier, which supports up to 16 million VXLAN segments. Without the Cisco modifications to VXLAN, the IETF format would not carry SGTs in the header, which would preclude egress enforcement and Micro-Segmentation without either forwarding the packet to an enforcement device such as a firewall (router-on-a-stick) or deploying downloadable ACLs, which add load to the TCAM.

Figure 8. IETF VXLAN Header

Fabric Benefits


When we review the benefits of one Fabric design over another, certain capabilities differentiate them. Each Fabric design has something to offer and plays to its strengths. It’s important to understand clearly what benefit a technology gives you and what problems it solves. In this section, we will look at what problems can be solved with each design.

Deploying a Fabric architecture provides the following advantages:

◉ Scalability — VXLAN provides Layer 2 connectivity, allowing for infrastructure that can scale to 16 million tenant networks. It overcomes the 4094-segment limitation of VLANs. This is necessary to address today’s multi-tenant cloud requirements.

◉ Flexibility — VXLAN allows workloads to be placed anywhere, along with the traffic separation required, in a multi-tenant environment. The traffic separation is done by network segmentation using VXLAN segment IDs or VXLAN network identifiers (VNIs). Workloads for a tenant can be distributed across different physical devices, but they are identified by their respective Layer 2 VNI or Layer 3 VNI.

◉ Mobility — IP Mobility within the Fabric and IP address reuse across the Fabric.

◉ Automation — Various methods may be used to automate and orchestrate the Fabric deployment from a purpose-built controller to Ansible, NSO, and Terraform, thereby alleviating some of the problems with error-prone manual configuration.

Cisco SD-Access

This Fabric technology brings many additional benefits with its deployment. Cisco SD-Access is built on an Intent-Based Networking foundation that encompasses visibility, automation, security, and simplification. Using Cisco DNA Center automation and orchestration, network administrators can implement changes across the entire enterprise environment through an intuitive, GUI-based interface. Using that same controller, they can build enterprise-wide Fabric architectures, classify endpoints for security grouping, create and distribute security policies, and monitor network performance and availability.
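Those same controller functions are exposed over REST. As a hedged sketch, the snippet below authenticates to a hypothetical controller address and pulls the device inventory using Intent API paths from the public documentation; response field names may vary by release.

import requests
from requests.auth import HTTPBasicAuth

BASE = "https://dnac.example.com"   # hypothetical controller address

token = requests.post(f"{BASE}/dna/system/api/v1/auth/token",
                      auth=HTTPBasicAuth("admin", "password"),
                      verify=False).json()["Token"]

devices = requests.get(f"{BASE}/dna/intent/api/v1/network-device",
                       headers={"X-Auth-Token": token},
                       verify=False).json()["response"]

# Quick inventory view: name, management IP, and reachability.
for d in devices:
    print(d["hostname"], d["managementIpAddress"], d["reachabilityStatus"])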

SD-Access secures the network at the macro- and micro-segmentation levels using Virtual Routing and Forwarding (VRF) tables and Security Group Tags (SGTs), respectively. This is called Multi-Tier Segmentation, something traditional networks struggle to deliver. This segmentation happens at the access port level, meaning the security boundary is pushed to the very edge of the network infrastructure for both wired and wireless clients.

With Multi-Tier Segmentation, network administrators no longer have to undertake configurations in anticipation of a user or device move, as all of the security contexts associated with a user or device are dynamically assigned when they authenticate their network connection. Cisco SD-Access provides the same security policy capabilities whether the user or device is attached via a wired or wireless medium, so secure policy consistency is maintained as the user or device changes the attachment type.

Instead of relying on IP-based security rules as in a traditional network, Cisco SD-Access relies on centralized group-based security rules utilizing SGTs, which are IP-address agnostic. As a user or device moves from location to location and changes IP address, its security policy remains the same because its group membership is unchanged regardless of where it accesses the network. This reduces pressure on network administrators, who no longer have to create as many rules or manually update them on different devices. This, in turn, leads to a more dynamic, scalable, and stable environment for network consumers without reliance on older technologies like PVLANs or the constraints of introducing an enforcement bottleneck.

How can a network be both dynamic and stable at the same time? When a rule does have to be created or changed, it can be done once for all users of a group in Cisco DNA Center. The rule is then dynamically populated to all relevant network devices, ensuring both accuracy and speed for the update. Additionally, wired and wireless network devices can be managed from one automation and orchestration manager, allowing the same rules, policies, and forwarding methods to be adopted across the entire network. With pxGrid integrations with ISE, the security policies can be adopted by almost any security-enabled platform, dramatically simplifying policy enforcement and the manageability problems of maintaining ACLs.
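Conceptually, the enforcement model reduces to a lookup keyed on groups rather than IP addresses. A minimal sketch, with invented group names and a default-deny assumption:

# Policy is keyed on (source group, destination group), never on IP.
POLICY = {
    ("Employees", "Payroll-Servers"): "permit",
    ("Contractors", "Payroll-Servers"): "deny",
}

def enforce(src_sgt: str, dst_sgt: str) -> str:
    # Default-deny between groups with no explicit entry.
    return POLICY.get((src_sgt, dst_sgt), "deny")

# The same user gets the same verdict regardless of IP address:
for ip in ("10.1.10.5", "10.9.33.7"):       # two locations, one SGT
    print(ip, enforce("Employees", "Payroll-Servers"))   # permit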

When we analyze the solution more deeply and objectively, it is important to understand how the control plane functions and what the ultimate limitations of any technology might be. Consider a MAC move, where an endpoint (or host) moves from one port to another; the new port may be on the same edge node or a different edge node, in the same VLAN. Each edge node has a LISP control-plane session with all control plane nodes. When an endpoint is detected by the edge node, it is added to a local database called the EID table. Once the host is added to this local database, the edge node also issues a LISP map-register message to inform the control plane node of the endpoint, so the central host tracking database (HTDB) is updated. A host may move several times, and each time a move occurs, the HTDB is updated.
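A toy sketch of the registration flow just described, with invented EIDs and RLOCs: each map-register simply overwrites the endpoint's previous location in the HTDB, so one EID never maps to two edge nodes.

htdb = {}   # central host tracking database: EID -> RLOC

def map_register(eid: str, rloc: str) -> None:
    """Edge node tells the control plane where an endpoint now lives."""
    htdb[eid] = rloc            # a move simply overwrites the old RLOC

map_register("10.1.1.10", "192.168.0.2")   # endpoint appears behind edge 2
map_register("10.1.1.10", "192.168.0.3")   # endpoint roams to edge 3
print(htdb["10.1.1.10"])                   # 192.168.0.3, a single entry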

Thus there is never a case where the Fabric has the same entry on two edge nodes, because the HTDB is the single reference point for endpoint tracking when packets are forwarded. Each register message from the edge node includes an EID-RLOC entry for the endpoint: a mapping of an Endpoint IDentifier (EID) to a Routing LOCator (RLOC). Within LISP, each edge node has a management IP, or RLOC, that identifies it individually. When an edge node receives a packet, it checks its local database for an EID-RLOC entry; if the entry does not exist, a query is sent to the LISP control plane so the EID can be resolved to an RLOC.

Packets and frames received from the endpoint, either directly connected to an edge node or reached through an extended node or access point, are encapsulated in Fabric VXLAN and forwarded across the Overlay. Traffic is sent to another edge node or to the border node, depending on the destination. When Fabric-encapsulated traffic is received for the endpoint, such as from a border node or another edge node, it is de-encapsulated and sent to that endpoint. This encapsulation and de-encapsulation enables the location of an endpoint to change, as the traffic can be encapsulated toward different edge nodes in the network without the endpoint having to change its address. Additionally, the local database on the receiving edge node is automatically updated during this conversation for the reverse traffic flow. As mentioned, this is precisely conversational learning: updates occur as traffic is forwarded from one switch to another, on an as-needed basis.

Lastly, most customers want to simplify the management of the network infrastructure and are looking for the “One ring to rule them all, one ring to find them, one ring to bring them all” in some sort of single pane of glass. Networking is expansive, with each vendor having its own management platform and capabilities. From a Cisco perspective, DNA Center allows the automation and orchestration of Fabrics and traditional networks from one platform, bringing that power to the entire Enterprise Networking portfolio while integrating with ISE, Viptela, and Meraki, and externally with an ecosystem of products like DNA Spaces, ServiceNow, Infoblox, Splunk, Tableau, and many more. Additionally, you can bring your own orchestrator and drive DNA Center through it, which allows organizations to adopt an Infrastructure as Code methodology.

To recap, there are four primary reasons that make it superior to traditional network deployments:

◉ Complexity reduction and operational consistency through orchestration and automation
◉ Multi-Tier Segmentation, which includes group-based policies and partitioning at Layer 2 and Layer 3
◉ Dynamic policy mobility for wired and wireless clients
◉ IP subnet pool conservation across the SD-Access Fabric

BGP-EVPN

BGP EVPN VXLAN can be used as a Fabric technology in a campus network with Cisco Catalyst 9000 Series Switches running Cisco IOS XE software. This solution is the result of IETF standards and Internet drafts from the BGP Enabled Services (bess) workgroup. It is designed to provide a unified Overlay network solution and to address the challenges and drawbacks of existing technologies by using BGP to carry Layer 2 MAC and Layer 3 IP information simultaneously. BGP incorporates Network Layer Reachability Information (NLRI) to achieve this. With MAC and IP information available together for forwarding decisions, routing and switching within the network are optimized. This also minimizes the use of the conventional “flood and learn” mechanism used by VXLAN and allows for scalability in the Fabric. EVPN is the extension that allows BGP to transport the Layer 2 MAC and Layer 3 IP information. This deployment is called a BGP EVPN VXLAN Fabric (also referred to as a VXLAN Fabric).

This solution provides a Fabric composed of industry standards-based protocols, offering a unified Fabric across campus and data center. It is interoperable with third-party devices, allowing multi-vendor support, while remaining brownfield-friendly. It also allows for rich multicast support with Tenant Routed Multicast, and both L2 and L3 support.

This solution may also be deployed and managed by various automation and orchestration methods, from Ansible and Terraform to Cisco’s NSO platform. While these platforms offer robust automation and orchestration, they do not have the monitoring capability to consume model-driven telemetry. Nor do they tie the richness of Artificial Intelligence and Machine Learning into the solution to help with Day-N operations like troubleshooting and fault finding, and visibility into both the user and application experience requires a separate platform. This often means standing up a separate platform for visibility, separate rather than combined.

When we analyze the solution more deeply and objectively, it is important to understand how the control plane functions and what the ultimate limitations of any technology might be. Consider a MAC move, where an endpoint (or host) moves from one port to another; the new port may be on the same VTEP or a different VTEP, in the same VLAN. The BGP EVPN control plane resolves such moves by advertising MAC routes (EVPN route type 2). When an endpoint’s MAC address is learned on a new port, the new VTEP advertises on the BGP EVPN control plane that it is now the local VTEP for the host, and all other VTEPs receive the new MAC route.

A host may move several times, causing the corresponding VTEPs to advertise as many MAC-based routes. There may also be a delay between the time a new MAC route is advertised and when the old route is withdrawn from the route tables of other VTEPs, resulting in two locations briefly holding the same MAC route. Here, a MAC mobility sequence number decides which of the MAC routes is most current (a minimal sketch of this logic appears below). When the host MAC address is learned for the first time, the MAC mobility sequence number is set to zero; the value zero indicates that the MAC address has not had a mobility event and the host is still at its original location. If a MAC mobility event is detected, a new route type 2 (MAC or IP advertisement) is added to the BGP EVPN control plane by the new VTEP below which the endpoint moved (its new location). Every time the host moves, the VTEP that detects its new location increments the sequence number by 1 and advertises the MAC route for that host on the BGP EVPN control plane. On receiving the MAC route, the VTEP at the old location withdraws the old route.

A case may arise in which the same MAC address is simultaneously learned on two different ports. The EVPN control plane detects this condition and alerts the user that there is a duplicate MAC. The duplicate MAC condition may be cleared either by manual intervention or automatically when the MAC address ages out on one of the ports. BGP EVPN supports IP mobility in a similar manner to MAC mobility; the principal difference is that an IP move is detected when the IP address is learned on a different MAC address, regardless of whether it was learned on the same port or a different one. A duplicate IP address is detected when the same IP address is simultaneously learned on two different MAC addresses, and the user is alerted when this occurs.

The number of entries is a real concern, primarily because as endpoints move around the network, these prefixes being learned and withdrawn strain the network from a churn perspective. As this occurs, the upper protocols must converge, and CPUs can hit their limits. It is important to understand the number of endpoints within the network and accommodate it in the design, especially in dual-stack networks running IPv4 and IPv6. Additionally, the design must consider, especially for the routed access approach, the number of entries on the access switches and the performance impact of thousands of wireless devices moving across the network. The last implication of withdrawing routes by sequence number is that convergence takes time; this should not be underestimated.
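A minimal sketch of that sequence-number logic, modeling advertisements as dictionary updates. The MAC and VTEP names are invented, and real EVPN of course involves BGP messaging and withdraw timing:

mac_table = {}   # best route per MAC: MAC -> {"vtep": ..., "seq": ...}

def advertise_mac(mac: str, new_vtep: str) -> None:
    route = mac_table.get(mac)
    if route is None:
        mac_table[mac] = {"vtep": new_vtep, "seq": 0}   # first learn: seq 0
    elif route["vtep"] != new_vtep:
        # Mobility event: the new VTEP advertises seq + 1, and the
        # higher sequence number wins; the old VTEP then withdraws.
        mac_table[mac] = {"vtep": new_vtep, "seq": route["seq"] + 1}

advertise_mac("aa:bb:cc:00:00:01", "vtep-1")
advertise_mac("aa:bb:cc:00:00:01", "vtep-2")   # host moves
advertise_mac("aa:bb:cc:00:00:01", "vtep-2")   # same location: no churn
print(mac_table["aa:bb:cc:00:00:01"])          # {'vtep': 'vtep-2', 'seq': 1}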
In this design, segmentation is provided by Private VLANs. A private VLAN (PVLAN) divides a regular VLAN into logical partitions, allowing limited broadcast boundaries among selected port groups on a single Layer 2 Ethernet switch. A single switch’s PVLAN capabilities can be extended over BGP EVPN VXLAN, enabling the network to build a partitioned bridge domain between port groups across multiple Ethernet switches in BGP EVPN VXLAN VTEP mode. The integration of PVLAN with a BGP EVPN VXLAN network enables the following benefits:

◉ Micro-segmented Layer 2 network segregation across one or more BGP EVPN VXLAN switches.

◉ Partitioned and secured user-group Layer 2 network that limits communication with dynamic or static port configuration assignments.

◉ IP subnet pool conservation across the BGP EVPN VXLAN network while extending a segregated Layer 2 network across the Fabric.

◉ Conservation of Layer 2 Overlay tunnels and peer networks with a single virtual network identifier (VNI) mapped to the Primary VLAN.

Source: cisco.com