Thursday, 27 May 2021

Application Aware Networking with Cisco SD-WAN

Part 1: Seamless Connectivity From Anywhere

Today’s applications are no longer confined behind the walls of data centers. They are migrating to the cloud and being offered as Software-as-a-Service (SaaS). These applications are built on a distributed architecture model and run across different systems. This puts new demands on the network to become more application aware, programmable, and intelligent to deliver the best application experience to any user on any device, anywhere.


In a Multicloud world, networks must become more application aware, programmable, and intelligent.
To be effective, modern networks must provide:

1. Dynamic, Seamless Connectivity from Anywhere
2. Programmable Functionality for optimal Quality of Experience
3. Adaptive Responsiveness to Application Health & Needs

Part 1 of this blog series will focus on providing seamless connectivity from anywhere. In part 2, we will explore how applications can “program” and cooperatively work with the network to provide a higher quality of experience by leveraging telemetry and other network health insights.

Cloud Networking


Enterprises are rapidly adopting a multi-cloud network architecture in which applications migrate from data centers to hybrid clouds based on specific organizational needs. To achieve this, they are seeking new solutions that facilitate the implementation and orchestration of a secure, optimized hybrid multi-cloud network that is independent of the physical topology. Ultimately, they want maximum flexibility with minimal disruption for as little effort as possible.

The variety of technologies used across clouds makes cloud interconnection quite challenging.

With public cloud adoption moving at an unprecedented pace, cloud service providers (CSP) attract customer workloads and applications onto their clouds through differentiated services and capabilities. The variety of technologies used across clouds makes cloud interconnection quite challenging as cloud networking architects must constantly learn new techniques to leverage each CSP’s specific networking features and functions. To simplify management, network professionals need products and solutions that can abstract these cloud networking differences through a single orchestration layer that bridges the different cloud systems.

SaaS


SaaS applications by nature are elastic and can serve users from anywhere. The modern network must not only provide connectivity to the sites where SaaS applications are hosted but also be capable of dynamically selecting the site that will deliver the optimal SaaS experience to the user.


Application-Driven SD-WAN for Optimized Office 365 Experience

Cisco SD-WAN: The Bridge to the Cloud


Cisco enables enterprises to take advantage of SD-WAN “overlay” networks to seamlessly connect users, machines, and applications across clouds and data centers. Moreover, our SD-WAN solution is fully aware of SaaS applications and provides an optimal path to them by programming the network with best path selection and adjusting it based on a combination of application and network telemetry.

Cisco SD-WAN with Azure Cloud

In partnership with Amazon AWS, Microsoft Azure, and Google GCP, we are enabling and delivering a superior application experience to securely connect users and devices to any cloud from anywhere. By leveraging the inherent programmability of our SD-WAN fabric and that of our cloud partners, we have created a simplified yet highly consistent workflow that connects two endpoints while abstracting the differences of each underlying cloud.

Intent-Based SD-WAN Policy with Google Service Directory

Automated site-to-AWS-cloud connectivity, including Transit Gateway orchestration, security policy, and network segmentation, using Cisco SD-WAN vManage. Full visibility into inter-regional transit traffic and telemetry with TGW Network Manager.

In partnership with Megaport and Equinix, we can establish high-bandwidth connectivity to virtual private clouds.

Establishing high-bandwidth connectivity to virtual private clouds with Megaport and Equinix.

This is achieved in a matter of minutes via AWS Direct Connect and Azure ExpressRoute, as the following steps (and the sketch after this list) show:

◉ Automatic discovery of the VPC/VNET and other cloud assets is based on the account credentials of the user.

◉ The user selects which private network and cloud network need to be connected. The automation engine takes care of all system activities, including:

     ◉ Automated deployment of the required gateways in the cloud and partner infrastructure to build the fabric.
     ◉ Provisioning of Cloud Gateways (TGW, Virtual Network Gateway) and setup of the route peering relationship between the SD-WAN fabric and the cloud network.
     ◉ Programming of the desired policies, including security across cloud and customer sites.

◉ The system automatically starts monitoring the new network setup and its components. It also extends assurance capabilities to the network admin for an optimal user experience and network connectivity.
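
As a rough illustration of what this workflow looks like when driven programmatically, here is a minimal Python sketch against the vManage REST API. The form-based login is standard vManage; the Cloud OnRamp endpoint paths and payload fields are illustrative assumptions rather than exact API names, so consult the vManage API documentation for your release.

```python
import requests

VMANAGE = "https://vmanage.example.com"   # hypothetical vManage host

session = requests.Session()
# vManage uses form-based login; the session cookie authorizes later calls.
session.post(f"{VMANAGE}/j_security_check",
             data={"j_username": "admin", "j_password": "secret"},
             verify=False)

# Discover cloud assets (VPC/VNET) visible to the stored account credentials.
# NOTE: this endpoint path is an assumption for illustration.
vpcs = session.get(f"{VMANAGE}/dataservice/multicloud/hostvpc").json()

# Ask the automation engine to connect a site list to a chosen cloud network.
# Gateway deployment, route peering, and policy push happen behind this call.
payload = {"cloudType": "AWS",
           "hostVpcId": vpcs["data"][0]["hostVpcId"],   # assumed field name
           "tag": "prod-sites"}
session.post(f"{VMANAGE}/dataservice/multicloud/map", json=payload)
```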

The figure below shows the high-level network architecture used to achieve seamless connectivity between customer sites and clouds.

The high-level network architecture used to achieve seamless connectivity between customer sites and clouds.

Cisco SD-WAN: The Bridge to SaaS


Access to applications is frequently over the public Internet. This can cause significant variation in the user experience based on their location and the specific site serving the SaaS application. Cisco SD-WAN has additional intelligence that enables it to discover all the different sites that serve a SaaS application. This enables the system to identify the optimal path to these sites based on the user location. Path selection is dynamic and takes into account the current status of each site’s connectivity. While direct internet access often provides the best experience, if the closest site is saturated or overloaded, then the system can select an alternate site to more efficiently serve the application to the user over the SD-WAN fabric.
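
To make the path-selection idea concrete, here is a simplified Python sketch of the decision logic described above: score each candidate exit (direct internet access or a path across the SD-WAN fabric) from probe telemetry and steer traffic to the best one. The scoring formula is a toy stand-in, not Cisco's actual quality-of-experience computation.

```python
from dataclasses import dataclass

@dataclass
class PathStats:
    name: str
    loss_pct: float     # packet loss toward the SaaS front door
    latency_ms: float   # round-trip latency from periodic probes

def score(p: PathStats) -> float:
    # Lower loss and latency yield a higher score; weights are arbitrary.
    return 100.0 - (p.loss_pct * 5.0) - (p.latency_ms / 10.0)

paths = [
    PathStats("DIA-local-breakout", loss_pct=0.5, latency_ms=40),
    PathStats("fabric-to-hub-site", loss_pct=0.1, latency_ms=90),
]

best = max(paths, key=score)
print(f"Steering SaaS traffic via {best.name} (score={score(best):.1f})")
```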

To provide optimal connectivity and the best application experience possible in our cloud and SaaS era, a flexible, intelligent, and programmable network is required. The network plays a crucial role in providing the bridge to the delightful application experience that business professionals and consumers have become accustomed to and now demand 100% of the time.

In this blog we have discussed how a flexible and programmable network provides seamless connectivity to the hybrid cloud and to SaaS. In addition, we have shared how we have solved the challenges of network disparity across clouds by providing key integration, abstraction, and automation of cloud and networks via Cisco SD-WAN. In the next blog in this series, we will explore Quality of Experience and how it can be best served by an intelligent SD-WAN network.

Source: cisco.com

Tuesday, 25 May 2021

The Best 3 New Tools We’ve Added to the DevNet Tech Stack


Cisco DevNet is the platform on which we host Cisco’s constantly-updated API documentation, 1,100+ Code Exchange entries, 54 DevNet Sandboxes, and 740 Learning Labs. Here at DevNet, we believe in practicing what we teach. So our platform is built on an evolving, modern hybrid infrastructure. We’ve learned a lot in the building, and refining, of this ambitious collection of services.

DevNet is built on over 100 on-prem and cloud-based microservices, updated in real time by a team of engineers, designers, and content creators. Of the tools we use to run this site, a few new ones are proving extremely useful. Here are the top 3 new tools we use, and some interesting things we’ve learned about them.

It was not always so spiffy

As I said, DevNet is based on an evolving architecture. But when we launched the resource in 2014, it was hosted entirely in virtual machines on a server in a data center. That system ran out of headroom quickly during our growth years. It wasn’t scaling to keep up with user growth, and it was increasingly hard for us to add features to the service.

In 2017, we moved DevNet over to a cloud-hosted microservice architecture using Kubernetes. We now use Prometheus, Elasticsearch, Kibana, Grafana, and other cloud-native platforms. With the flexibility inherent in this setup, and using a DevOps development model, we have found it much easier to add new services and applications to DevNet, as well as to add capacity to keep up with increasing user demand.

Recently, we’ve been driving to enhance application performance and user experience on DevNet. That means working with performance monitoring, full-stack observability, and other solutions across the stack. So I want to focus on tools that make this easier, and what we’ve learned from them.

ThousandEyes

Developer.cisco.com serves a global community, and the performance and experience can vary a lot based on where the users are. Anyone who is responsible for a large-scale global deployment knows that localized performance problems can occur due to service outages, connectivity failures, or countless other causes. We use ThousandEyes Cloud Agents, which give us visibility into the underlying network paths our applications use, from our users’ endpoints to our hosted applications in the cloud. Cloud Agents allow us to understand the user experience in specific global regions.

It’s important to us that ThousandEyes’ “synthetic” testing is not at all theoretical. It’s based on real-time internet monitoring and is extremely valuable to us.

ThousandEyes Path Visualization View for DevNet. This shows the network paths discovered when ThousandEyes agents visit the DevNet application page from different locations around the world.

We also use ThousandEyes synthetic transaction tests to automate UI testing, a process that would otherwise be repetitive and manual. ThousandEyes has a Recorder and IDE we can use to record users’ browser interactions, which then creates Transaction tests that integrate into a CI/CD pipeline.  As we push changes into our repo, ThousandEyes can execute tests continuously. And this continuous UI testing happens in the global regions users connect from.
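
To show how this fits a pipeline, here is a rough Python sketch of gating a CI/CD stage on transaction-test results. Basic auth with an account email and API token is how the ThousandEyes v6 API authenticates; the exact result endpoint and response fields shown are assumptions, so check the API reference for the version you run.

```python
import sys
import requests

API = "https://api.thousandeyes.com/v6"
AUTH = ("ci-bot@example.com", "API_TOKEN")   # hypothetical credentials
TEST_ID = 12345                              # transaction test for the new UI flow

resp = requests.get(f"{API}/web/transactions/{TEST_ID}.json", auth=AUTH)
resp.raise_for_status()

# Fail the pipeline if any agent reports a transaction error.
for result in resp.json().get("web", {}).get("transaction", []):
    if result.get("errorType", "None") != "None":
        print(f"Transaction failed from agent {result.get('agentName')}")
        sys.exit(1)
print("All regional transaction tests passed")
```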

ThousandEyes solves very practical problems for us and we continue to see new ways to apply it. For example, ThousandEyes alerts can be pushed into the AppDynamics interface. This consolidates network and application performance issues into one pane of glass, allowing NetOps and AppOps to break down silos and quickly identify the root cause of issues.

AppDynamics (“AppD”)


To optimize user experience, we need to know where our applications can be performance-optimized. Previously, we were using Prometheus with a custom Grafana dashboard to monitor our Kubernetes cluster – including each microservice’s usage of CPU, Memory, and other resources. But this combination did not provide the performance data that we really needed to get insight into our applications to improve them.
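
For context, here is a minimal sketch (using the prometheus_client library) of the style of instrumentation we relied on in that setup: counters and histograms exposed on a /metrics endpoint for Prometheus to scrape and Grafana to graph. The metric names and the demo handler are hypothetical.

```python
import random
import time
from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("devnet_requests_total", "Requests served", ["endpoint"])
LATENCY = Histogram("devnet_request_seconds", "Request latency", ["endpoint"])

def handle_learning_lab_request():
    REQUESTS.labels(endpoint="/learning-labs").inc()
    with LATENCY.labels(endpoint="/learning-labs").time():
        time.sleep(random.uniform(0.01, 0.1))   # stand-in for real work

if __name__ == "__main__":
    start_http_server(9100)   # Prometheus scrapes /metrics on port 9100
    while True:
        handle_learning_lab_request()
```

Aggregates like these tell you a service is slow, but they don't trace a single business transaction across many microservices, which is the gap we needed to close.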

Today, with Cisco AppDynamics, we get extremely granular and valuable data, like the response time from individual API calls, or the time for full business transactions that employ microservices and databases – for example, delivering educational content that calls our Learning Lab services, our video system, our quiz engine, our progress tracking microservice and our MySQL server. AppD gives us monitoring from the end user experience of our applications down into the infrastructure in the data center.

This level of intelligence is incredibly valuable. It gives us the opportunity to optimize the responsiveness of user experience by focusing our efforts in precisely the right places.

AppD offers different ways to meter and monitor various parts of the application stack. For example, we use its Javascript agent to monitor the browser application UI. We also use AppD’s SDK for Go for most of our microservices, which are written in Golang. We use a few different database services, like Cassandra and MongoDB; AppD has agents for those processes.

With this suite of monitoring functions, we can monitor end-to-end application performance, which is incredibly valuable to us. It helps us pinpoint performance issues quickly, and makes it easier to create new high-performing services.

DevNet Application Performance Monitoring Using AppD. This diagram shows how DevNet leverages AppD for full performance monitoring from the frontend, to the backend and database.

Istio Service Mesh


Last year, we started testing Istio in our non-production environment. We have been using it to manage the microservices our platform is built on. We found its benefits compelling: it helps us manage, balance, and troubleshoot our increasingly complex web of software.

However, it also takes time for people to learn to use this tool. And it’s a high-stakes game, as misconfigurations in a service mesh can have profound impacts on networks and applications.

But we have found that it is worth the time investment in learning to use the tool. Istio has become one of our key diagnostic and management tools. We use it to analyze the impact on inter-service traffic from what can be subtle changes in microservice configurations. It allows us to apply automation to shift workloads when needed – securely and without having to rebuild the connections between services.

Istio also allows us to configure policy-based access control across microservices, again under programmatic control.

There is one pilot in particular that has a lot of promise: We are using Istio in A/B testing, to dynamically divide traffic in various proportions between stable and new apps and configurations. It makes configuring and running tests much easier and more efficient, and thus is helping us improve the pace of our application development. It frees up engineers so they can get to work on other projects more quickly.
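
As an illustration of the mechanism, here is a hedged Python sketch that shifts such a split from 90/10 to 50/50 by patching an Istio VirtualService through the Kubernetes CustomObjects API. The namespace, VirtualService name, and subset names are hypothetical, and a DestinationRule defining the stable and candidate subsets is assumed.

```python
from kubernetes import client, config

config.load_kube_config()          # or load_incluster_config() inside the cluster
api = client.CustomObjectsApi()

HOST = "learning-labs.devnet.svc.cluster.local"   # hypothetical service host

def set_ab_split(stable_weight: int) -> None:
    """Patch the VirtualService so stable_weight% of traffic goes to the
    stable subset and the remainder to the candidate version."""
    patch = {"spec": {"http": [{"route": [
        {"destination": {"host": HOST, "subset": "stable"},
         "weight": stable_weight},
        {"destination": {"host": HOST, "subset": "candidate"},
         "weight": 100 - stable_weight},
    ]}]}}
    api.patch_namespaced_custom_object(
        group="networking.istio.io", version="v1beta1",
        namespace="devnet", plural="virtualservices",
        name="learning-labs-ab", body=patch)

set_ab_split(50)   # move from a 90/10 split to 50/50 for the A/B test
```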

Source: cisco.com

Saturday, 22 May 2021

Choosing the Best Overlay Routing Architecture for EVPN

A Comparative Analysis of Architectures and Their Trade-offs

Organizations have many options when implementing an Ethernet Virtual Private Network (EVPN). Choice of network design with respect to overlay routing may be influenced by a combination of factors, including scalability, multi-tenancy, segmentation, and operational simplicity. Understanding the key differences among various overlay routing architectures makes it possible to evaluate and choose an implementation that offers a best fit for an organization.


This blog post compares the trade-offs across different overlay Anycast routing architectures in the context of overlay networks deployed using EVPN Integrated Routing and Bridging. It covers Centralized Routing, Distributed Asymmetric Routing, and Distributed Symmetric Routing. Note that this discussion is independent of data plane encapsulation and applies equally to IP and MPLS tunnel data paths.

Overlay Networks

Overlay networks have become ubiquitous across enterprise, data center, and service provider network architectures. They enable deployment of a simple non-blocking IP routed infrastructure with the flexibility to deploy multi-tenant unicast and multicast services on top. Overlay endpoints or workloads may be placed or moved anywhere across a non-blocking fabric, independent of overlay addressing and subnet assignments. A flexible and scalable IP Clos fabric provides reachability across edge and border devices. A VPN tunnel mesh across edge and border devices provides overlay connectivity between connected endpoints (see Figure 1).


Figure 1: VPN overlay with simple non-blocking routing infrastructure.

There may be additional factors, including security and traffic engineering policies, to consider when deploying an overlay across different use cases. Reachability, however, is the least common denominator across all overlay use cases. For flexible workload placement and mobility that is independent of addressing and subnetting constraints, a multi-tenant overlay network must provide reachability across:

◉ Tenant endpoints within an IP subnet,
◉ Tenant endpoints in different IP subnets.

As intra-subnet overlay connectivity is enabled via layer 2 VPN bridging services deployed across fabric edge and optionally border devices, multiple options exist for overlay routed connectivity between endpoints in different subnets. The following will detail and compare trade-offs across three overlay Anycast routing architectures:

1. Centralized Routing
2. Distributed Asymmetric Routing
3. Distributed Symmetric Routing

1 – Centralized Anycast Routing Architecture


A centralized routing model connects endpoints to layer-2 EVPN gateways (GW) that provide VPN bridging. This enables intra-subnet flows across the overlay while all routing to endpoints in different subnets, within and outside the fabric, is centralized via designated Integrated Routing and Bridging (IRB) L2+L3 GWs.

First-hop routing for each overlay subnet is deployed using a subnet Anycast GW that is hosted on one or more designated IRB GW nodes. A key attribute defining this overlay routing architecture is that the first-hop routing function for an overlay subnet is decoupled from the EVPN L2-GW edge that provides intra-subnet bridging service for that subnet. This decoupling results in first-hop routing for overlay endpoints across the fabric being “centralized” on designated IRB nodes. Note that the Anycast GW for each subnet is still distributed across these “centralized” IRB GW nodes.

It is common to deploy first-hop Anycast routing for all overlay subnets in a fabric on the same set of IRB nodes. While not necessarily required, this is often done for operational simplicity and optimal routing. It is also common for this first-hop routing function to be hosted on border nodes that also act as interconnect GWs to external L2 or L2/L3 domains. Optionally, these IRB nodes may also function as edge nodes and connect to local overlay endpoints, resulting in the model shown in Figure 2.


Figure 2: EVPN Centralized Routing Deployment Model

Control Plane Operation

A centralized approach essentially uses an EVPN overlay as a layer-2 VPN overlay, with the inclusion of the host IP along with the host MAC being optional in EVPN host route advertisements (see Figure 3). The host route is advertised by the egress L2 GW with layer 2 attributes that, on the ingress L2 GW and on the centralized IRB GW, result in:

◉ Import of the host MAC to the MAC VRF in the control plane.
◉ Host MAC reachability via layer-2 VPN encapsulation and tunnel to the egress GW.


Figure 3: Control plane operation with centralized routing.

In addition, IRB GW nodes install layer-3 adjacencies to the remote host IP. Host IP-to-MAC bindings for this purpose may be learnt on the IRB GW via:

◉ The advertising L2 GW learning the host IP via snooping and including it in the EVPN host route advertisement, or
◉ The data plane, via ARP and ND packets received from the host.

Note that reachability to a remote layer-3 host adjacency is still resolved by host MAC reachability via a layer-2 VPN tunnel to the egress GW. In addition, IRB gateways may also proactively advertise the Anycast GW MAC/IP in the EVPN control plane for the purpose of avoiding duplicate ARP responses from redundant Anycast GWs. On the L2 GW, this results in L2 reachability to Anycast GW MACs in the MAC VRF, and local ARP suppression for Anycast GW IP ARP requests from hosts.

Data Plane Operation

For data plane operation (see Figure 4), intra-subnet flow destined to a remote host is bridged on the ingress L2 GW via a tunnel to the egress L2 GW, with the layer 2 VPN encapsulation advertised by the egress L2 GW. On the egress L2 GW, this layer 2 VPN encapsulation maps to a MAC VRF, where the packet is again bridged to the local host.

Inter-subnet flow destined to Anycast GW MAC is bridged on the ingress L2 GW to one of the centralized IRB GW via tunnel to the IRB GW with layer 2 VPN encapsulation advertised by the IRB GW. Packets are then routed on the IRB GW via layer-3 adjacency to the destination host IP. This results in the packet being encapsulated with the host MAC rewrite that resolves via tunnel to the egress L2 GW and layer 2 VPN encapsulation advertised by the egress L2 GW. On the egress GW, this layer 2 VPN encapsulation maps to the MAC VRF, where the packet is again bridged to the local host.


Figure 4: Data plane operation, showing intra-subnet flow and inter-subnet flow with centralized routing

Control Plane Scalability – Limited by “all subnets on centralized GWs”

Control plane scalability is limited by the fact that each IRB node that is part of the centralized Anycast GW cluster is required to program:

◉ Layer-3 (SVI) interfaces for ALL overlay subnets for which it is a first-hop GW.

◉ Layer-3 adjacencies to ALL overlay endpoints in these subnets.

◉ MAC VRFs for ALL overlay subnets for which it is a first-hop GW.

◉ MAC routes for ALL overlay endpoints in these subnets.

◉ IP host routes for ALL overlay endpoints across the fabric.

◉ Overlay tunnels to ALL edge nodes.

A simple deployment centralizes all overlay subnets on the same set of IRB nodes. In this case, the fabric-wide scale of overlay subnets and endpoints is limited by the individual IRB device’s layer-3 interface, layer-3 adjacency, and MAC route scale. Note that in this model, redundant nodes that are part of the same Anycast GW cluster do not contribute to overall fabric scale, since the same forwarding state needs to be replicated across all Anycast GW nodes.

Control Plane Scalability – At the cost of optimal routing

Alternatively, first-hop routing service for different subnets may be load-shared across multiple centralized Anycast GW clusters to reduce the scale on each IRB node.


Figure 5: First-hop routing for subnets hosted on different IRB nodes.

Figure 5 shows first-hop routing for two subnets hosted on the first two IRB nodes with routing for two other subnets hosted on the other two IRB nodes. However, this may result in a sub-optimal data path with an extra routing hop as shown in Figure 6. It also compromises the operational simplicity of being able to manage routing for all overlay subnets on the same IRB nodes.


Figure 6: This approach may introduce an extra routing hop, resulting in a sub-optimal data path that also compromises the operational simplicity of being able to manage routing for all overlay subnets on the same IRB nodes.

Sub-optimal Data Path – Local inter-subnet flows

The sub-optimal nature of inter-subnet routing in this approach applies to local inter-subnet flows, which must always be bridged from the ingress L2 GW to the centralized IRB GW, only to be routed back to the ingress L2 GW. This results in a ‘traffic trombone’ effect (see Figure 7).


Figure 7: The ‘traffic trombone’ effect occurs when local inter-subnet flows are bridged from the ingress L2 GW to the centralized IRB GW, only to be routed back to the ingress L2 GW.

Operational Simplicity

Despite these sub-optimal scaling and data path properties, this approach is still a good trade-off in certain use cases for operational reasons:

◉ This approach provides operational simplicity of provisioning and managing first-hop routing and associated routing policies for all overlay subnets on designated nodes. As an example, for use cases where an overlay subnet is stretched across campus and DC domains, this approach allows you to manage inter-subnet and external routing policies for the subnet at a central point.

◉ Forwarding semantics, being similar to traditional IRB, are simple to understand, deploy, and operate.

◉ EVPN centralized routing design, in principle, aligns with legacy access/distribution layer-2 network design, where routing functionality is centralized and decoupled from layer-2 only access devices. An EVPN layer 2 overlay can be thought of as replacing a traditional layer-2 access network, with EVPN-IRB functionality on centralized distribution nodes being the traditional L2/L3 boundary. It is hence a conceptually easier transition from such legacy architectures.

Centralized Anycast GW Redundancy – just FYI

The Centralized Anycast GW approach across redundant IRB GWs introduces additional complexity that an operator should be aware of:

◉ If L2 GWs only advertise host MAC routes in the EVPN control plane, host layer-3 adjacencies are learnt on the Anycast GW via ARP and ND. Since adjacencies could be learnt on any of the redundant GWs, Anycast GWs must implement additional mechanisms to sync layer-3 host adjacencies across them. Alternatively, L2 GWs must implement MAC-IP learning via snooping and advertise the host MAC and IP via the EVPN control plane for Anycast GW nodes to learn host layer-3 adjacencies via EVPN.

◉ ARP requests for an Anycast GW IP from a host are flooded across the overlay and hence result in multiple ARP responses from redundant GWs. To avoid this, Anycast GWs must advertise the GW MAC-IP bindings upfront via EVPN, and L2 GWs must implement local ARP suppression. In the case of a VXLAN fabric, an Anycast VTEP may also be used across redundant GWs to avoid multiple ARP responses.

2 – Distributed Asymmetric Routing Architecture


The distributed asymmetric approach is a variation of the centralized Anycast routing approach, with the layer 2/3 routing boundary pushed to fabric leaf nodes (see Figure 8). In this approach, first-hop Anycast GW functionality for an overlay subnet is deployed across ALL leaf nodes that now operate as IRB GWs (as opposed to being L2 GWs).


Figure 8: A Distributed Asymmetric Routing Architecture pushes the layer 2/3 routing boundary to fabric leaf nodes.

Control Plane Operation

Much like the centralized IRB approach, this approach also uses the EVPN overlay as a layer-2 VPN overlay. A slight difference is that the host IP is now required in the EVPN host route advertisement, along with the host MAC. Similar to centralized IRB operation, the host route is advertised by the egress GW with layer 2 attributes that, on the ingress GW, result in:

◉ Import of the host MAC to the MAC VRF in control plane.
◉ Host MAC reachability via layer-2 VPN encapsulation and tunnel to the egress GW.

IRB-capable nodes also install layer-3 adjacencies to the remote host IP with IP to MAC binding learnt via host routes. Reachability for remote layer-3 host adjacency is still resolved by host MAC reachability via a layer-2 VPN tunnel to the egress GW.

Data Plane Operation

While this approach enables EVPN routing and bridging functions to be co-located on EVPN leaf nodes, it has the same forwarding semantics as a centralized Anycast GW. The overlay routing function on the leaf IRB GW routes packets directly to the host’s layer-3 adjacency. “Asymmetric” in this context refers to the fact that this results in inter-subnet flows being “routed and bridged” on the ingress IRB GW and “bridged” on the egress IRB GW (Figure 9).


Figure 9: This approach is asymmetric in that inter-subnet flows are “routed and bridged” on the ingress IRB GW, and “bridged” on the egress IRB GW.

Control Plane Scalability – Limited by “all subnets everywhere”

Control plane scalability is even more severely limited by the fact that each IRB leaf node is now required to program:

◉ Layer-3 (SVI) interfaces for ALL overlay subnets in the IP VRF, even if it does not have locally attached hosts in that subnet.

◉ Layer-3 adjacencies for ALL overlay endpoints in these subnets, even if it does not have locally attached hosts in that subnet.

◉ MAC VRFs for ALL overlay subnets in the IP VRF, even if it does not have locally attached hosts in that subnet.

◉ MAC routes for ALL overlay endpoints in these subnets, even if it does not have locally attached hosts in that subnet.

◉ IP host routes for ALL overlay endpoints across the fabric in an IP VRF.

As a result, fabric-wide scale of overlay subnets and endpoints is limited by each leaf device’s layer-3 interface, layer-3 adjacency, and MAC route scale. Adding more GW devices to the Anycast GW cluster does not mitigate this limitation, as ALL leaf nodes host routing interfaces, layer-3 adjacencies, and MAC routes for ALL subnets and endpoints across the IP VRF.

Optimal Data Path – Local routing

In contrast to centralized IRB, local inter-subnet flows are always routed locally on the ingress GW, while inter-subnet flows across the fabric are always routed directly to the remote host (see Figure 10).


Figure 10: Local inter-subnet flows are always routed locally on the ingress GW. Inter-subnet flows across the fabric are always routed directly to the remote host.

Operational Simplicity – Traditional IRB forwarding

◉ Much like the centralized IRB approach, this approach also uses the EVPN overlay as a layer-2 overlay (akin to a traditional switching fabric). It treats remote IP endpoints as directly connected layer-3 adjacencies. Forwarding semantics, being similar to traditional IRB, are still simple to understand, deploy, and operate.

◉ Pushing the first-hop routing function to EVPN leaf GWs is a shift from traditional centralized routing designs. When migrating a legacy switching design, network designers must view EVPN fabric roles for network devices, independent from traditional access / distribution switching roles.

3 – Distributed Symmetric Routing Architecture


Much like the distributed asymmetric routing architecture, the distributed symmetric approach deploys the first-hop Anycast GW function for an overlay subnet across ALL leaf nodes that operate as IRB GWs. However, for better scalability, symmetric IRB forwarding semantics and control plane operation differ considerably from those of asymmetric and centralized IRB, which use EVPN to build a layer-2 VPN overlay. Instead of achieving routing via traditional IRB over the layer-2 overlay, the symmetric IRB approach uses EVPN as a single control plane to build:

◉ A layer-2 VPN overlay to enable intra-subnet bridging.
◉ A layer-3 VPN overlay to enable inter-subnet routing.

This additional layer-3 VPN overlay is the key differentiating attribute of a symmetric IRB architecture. It allows restriction of subnet provisioning on edge devices to locally attached subnets. This results in better scaling properties.


Figure 11: The additional layer-3 VPN overlay in a symmetric IRB architecture allows restriction of subnet provisioning on edge devices to locally attached subnets for better scaling properties.

Control Plane Operation

To build an additional layer-3 VPN overlay for inter-subnet routing, EVPN MAC+IP host routes are advertised with additional layer-3 VPN attributes to enable:

◉ Layer-3 VPN import to IP VRF in the control plane.
◉ Layer-3 VPN encapsulation in the data plane.

In summary, a single host route in the control plane is used to signal a layer-3 VPN host route to be installed in the IP VRF and a layer-2 VPN MAC route to be installed in MAC VRF, with the corresponding L3VPN and L2VPN encapsulations.
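
As a concrete (and deliberately hedged) example, the NX-OS-style snippet below sketches how a leaf might be provisioned for symmetric IRB with a VXLAN data plane: a per-tenant L3VNI for routed inter-subnet traffic alongside a per-subnet L2VNI and Anycast GW SVI. VLAN numbers, VNIs, and names are hypothetical, and other platforms express the same constructs differently.

```
vrf context TENANT-A
  vni 50000                        ! L3VNI shared by all of the tenant's subnets
  rd auto
  address-family ipv4 unicast
    route-target both auto evpn

interface Vlan100                  ! Anycast GW SVI for a locally attached subnet
  vrf member TENANT-A
  ip address 10.1.100.1/24
  fabric forwarding mode anycast-gateway

interface Vlan999                  ! SVI backing the L3VNI (inter-leaf routing)
  vrf member TENANT-A
  ip forward

interface nve1
  member vni 10100                 ! L2VNI for subnet 10.1.100.0/24
    ingress-replication protocol bgp
  member vni 50000 associate-vrf   ! L3VNI carries routed inter-subnet flows
```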

Data Plane Operation

◉ Intra-subnet bridging – As with the asymmetric and centralized approaches, bridging across the layer-2 VPN overlay is accomplished via a layer-2 VPN encapsulation (L2 MPLS label or L2 VNI) that maps to the local MAC VRF. The bridged forwarding plane is identical across all three routing architectures.

◉ Inter-subnet routing – Inter-subnet flows are routed on the source (ingress) GW to the destination (egress) GW next-hop via a tunnel to the egress GW with L3VPN encapsulation. This L3VPN encapsulation is terminated and identifies the IP VRF at the egress GW, where the packet is again routed in the IP VRF to a locally connected endpoint. This routing data path is similar to traditional L3VPN, with the EVPN GWs acting as L3VPN PE devices.


Figure 12: Inter-subnet flows are routed on the source (ingress) GW to the destination (egress) GW next-hop via a tunnel to the egress GW with L3VPN encapsulation.

Control Plane Scalability – No more “all subnets everywhere”

A separate layer-3 VPN overlay allows inter-subnet host reachability on the source GW to be recursively resolved via a L3VPN tunnel to a destination GW next-hop. This differs from the asymmetric and centralized approaches where the source GW relies on layer-3 adjacencies to all remote hosts and their host MAC reachability via layer-2 VPN tunnels to achieve inter-subnet routing. As a result:

◉ The ingress GW no longer needs to be provisioned with routing interface (SVI) for ALL overlay subnets in an IP VRF. It only needs to be provisioned with the SVI interface for locally attached subnets.

◉ The ingress GW no longer has layer-3 adjacencies to ALL overlay endpoints in an IP VRF. It only has host routes to all endpoints via a tunnel to the destination GW next hop.

◉ The ingress GW no longer has MAC-VRFs for all overlay subnets in an IP VRF. It only has MAC-VRFs for locally attached subnets.

◉ The ingress GW no longer has MAC routes to ALL overlay endpoints in an IP VRF. It only has MAC routes for locally attached subnets.

◉ Ingress GW still has host routes to all endpoints in an IP VRF, unless a subnet is restricted to strictly one GW (or a multi-homing GW complex). In this case, it is possible for routing to be based on the subnet route alone.

Optimal Data Path

As in asymmetric IRB, local inter-subnet flows are always routed locally on the ingress GW, while inter-subnet flows across the fabric are always routed directly to the egress GW.

Extra TTL Decrement

Note that with this approach, an inter-subnet flow across two endpoints attached to the fabric goes via two routing hops instead of the usual single routing hop, as in traditional LANs connected via a router, or in the case of centralized and asymmetric IRB. This is not to say that the routing data path is sub-optimal. Rather, it is just an operational side effect of the packet being routed (instead of bridged) at the destination GW.

Operational Overhead – Separate L2VPN and L3VPN overlays

As opposed to centralized and asymmetric IRB architectures, the symmetric approach does result in separate layer-2 VPN and layer-3 VPN overlays to operate and manage. Together with the shift from traditional centralized routing to distributed routing across the fabric edge, this may result in a higher learning curve.

Source: cisco.com

Thursday, 20 May 2021

Monitoring your indoor IoT environment – Cisco DNA Spaces IoT Services

IoT Services Overview

Cisco DNA Spaces is the world’s most powerful location platform that uses existing Wi-Fi infrastructure to give actionable insights and drive business outcomes. Cisco DNA Spaces IoT Services has been transforming how businesses measure and interact with their environment at scale. Cisco IoT Services brings hardware, software, and services together to digitize spatial data into actionable intelligence. Businesses are planning occupancy monitoring, air quality testing, contact tracing, and in-room presence use cases with Cisco DNA Spaces to prepare workspaces for a post-pandemic reopening. Enabling all these use cases requires seamlessly consuming a ton of data and working with a plethora of devices. So how does an enterprise monitor the health of its IoT environment in an ocean of devices broadcasting data continuously? Enter Cisco IoT Services Device Monitoring.

IoT Services Components

The key components of the solution are Cisco DNA Spaces IoT Services, Cisco Catalyst 9800 Series Wireless Controllers, Cisco Access Points, and our IoT Partner Ecosystem.


All components in the IoT Services architecture communicate with their peers over a data channel to forward measurements and a control channel to pass actionable information. For example, Access Points communicate with the Connector over a gRPC data plane while communicating with the Controller over a CAPWAP control plane.


Data Volume


The vastly scalable data plane enables DNA Spaces IoT Services to ingest and process humongous volumes of data. Partner device broadcasts are either time-driven or event-driven. Beacons, for example, can broadcast advertisements at an astonishingly high frequency, while some telemetry devices transmit only when certain conditions are fulfilled. As a result, the per-device transmission rate varies widely, from every 100 ms to once in several days. On average, IoT Services processes more than 300 million messages per day, and data volume is increasing every day as more and more devices are scanned.


Needle in a haystack

By analyzing the millions of packets consumed by IoT gateways, DNA Spaces IoT Services device monitoring identifies and predicts possible issues in the network’s IoT infrastructure.

Device Monitoring


Network snapshot


IoT Services provides a snapshot to quickly identify if any Gateway is down in the environment. It also identifies the total number of transmitting IoT devices and how many devices are active currently. This quickly provides customers with an idea of how cluttered the BLE environment in the enterprise may be.

Battery monitoring


Unmonitored devices running out of battery are one of the primary causes of IoT network failures, adversely affecting almost all IoT use cases, such as wayfinding and sensor telemetry. Devices advertising with high frequency or transmission power are particularly susceptible to battery drainage. Device monitoring provides a concise view for identifying devices that have critical or low battery life. It also provides information to locate the devices on a map so that network administrators can easily find a device and change its battery.

Active devices


The active devices count shows the number of devices the gateways have scanned in the last five minutes. Too many active devices may indicate unmitigated rogue transmissions on the network; too few may indicate malfunctioning devices or data channel setup issues.
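
As a toy illustration (not Cisco's implementation), an "active devices" counter can be derived from per-device scan timestamps with a sliding five-minute window:

```python
import time

SCAN_WINDOW_SECS = 5 * 60   # "active" means scanned in the last five minutes

# Hypothetical map of device MAC -> timestamp of its most recent scan.
last_seen = {
    "aa:bb:cc:dd:ee:01": time.time() - 30,    # scanned 30 s ago -> active
    "aa:bb:cc:dd:ee:02": time.time() - 900,   # scanned 15 min ago -> inactive
}

def active_devices(last_seen: dict) -> int:
    now = time.time()
    return sum(1 for ts in last_seen.values() if now - ts <= SCAN_WINDOW_SECS)

print(active_devices(last_seen))   # -> 1
```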

We are integrating more and more metrics to provide powerful insights into your IoT Network through device monitoring. In combination with network policies, device monitoring can truly transform IoT network management.

Source: cisco.com

Saturday, 15 May 2021

Wireless Security Solutions Overview

The enterprise network is undergoing digitization through multiple technological transitions, including the explosion of connected and IoT devices and the movement of applications and services to the cloud. Of the 29.3 billion networked devices forecast to be on the network by 2023, 50% are expected to be IoT devices. Cloud-based applications and services are expected to be accessed from multiple sites and locations, both on and off the enterprise network. These new trends and network transitions have not only increased the threat surface but also advanced the sophistication of the attacks. Securing and protecting the enterprise infrastructure has become top of mind for network administrators and customers. With the advances and ratifications in the Wi-Fi standard, wireless has become the de facto access technology in the enterprise network. However, due to the inherent nature of wireless networks, it becomes even more important not only to detect threats and protect the network infrastructure and users, but also to secure the air.

Wireless Security Solutions

At Cisco, all solutions are built with end-to-end security in mind. In this blog, we discuss some of the security initiatives and solutions that play a role in securing enterprise wireless networks. These components form various pieces of the security puzzle, addressing over-the-air security and infrastructure security while providing end-to-end visibility.

Securing the Air using Rogue Management and aWIPS Solution


Rogue management and the Advanced Wireless Intrusion Prevention System (aWIPS) form a wireless intrusion, threat detection, and mitigation mechanism that secures the air. The rogue management solution protects against AP impersonation, AP MAC spoofing, honeypots, and rogue-on-wire. Automatic or manual containment enables attacks to be contained before actual damage happens. Together, rogue management and aWIPS provide security against denial-of-service attacks, management frame attacks, and tools-based attacks, to name a few.

Rogue Management and aWIPS Solution

The Rogue Management and aWIPS solution architecture comprises:

◉ Access points: Monitor the air and detect threats using advanced, configurable signature-based techniques

◉ Wireless controller: Configures the access points; receives aWIPS alarms and rogue information from the APs; performs rogue classification and sends a consolidated report to Cisco DNA Center

◉ Cisco DNA Center: Provides a simple and intuitive dashboard for configuration and customization of aWIPS and rogue rules. Cisco DNA Center also monitors, aggregates, and consolidates events and alarms in a single pane of glass

In conjunction with Cisco DNA Center, users can customize rogue detection and management and fine-tune aWIPS signatures and thresholds based on individual network needs. To meet compliance requirements, and for post-analysis of attacks happening in the network, a per-signature forensic packet capture knob is provided through Cisco DNA Center. DNA Center aggregates, correlates, and summarizes attacks across the managed network on the unified rogue management and aWIPS dashboard.

Device profiling with Endpoint-Analytics


As we embark on the new wave of digital transformation, more and more devices (including IoT devices) are being added to the network. A malicious user needs to find only a single vulnerable entry point to breach and exploit the entire network. Once that is done, threats can spread throughout the network from device to device in a matter of seconds. Such vulnerabilities mandate granular network segmentation to prevent the lateral spread of threats. The first step toward achieving granular network segmentation is to identify and profile the devices and endpoints present in the network so that segmentation and other policies can then be enforced on these groups of devices.

Cisco AI Endpoint Analytics is the next-generation endpoint visibility solution. It gathers context from the network and gives administrators visibility into endpoints through “behavioral-based device classification”. The following techniques are used to achieve this:

◉ Deep packet inspection (DPI): uses enhanced NBAR technology to inspect and understand the applications and the protocols the endpoints are sending over the network to gather deeper context on the endpoint

◉ Machine Learning (ML): provides intuitive grouping of endpoints that share common attributes

◉ Integration with Cisco and third-party products for additional context

Cisco AI Endpoint Analytics

Cisco’s DPI-based profiling capability is supported on Catalyst 9800 Series Wireless Controller platforms. The controller performs deep packet inspection and sends telemetry data to DNA Center for analytics, enabling users to group endpoints for segmentation and policy enforcement.

Insight into Encrypted Traffic using Encrypted Traffic Analytics 


The rapid rise in encrypted traffic is changing the security landscape. A significant number of services and applications use encryption as the primary method for securing information. As a result, malware detection using traditional flow monitoring techniques is no longer feasible. The traditional approach of decrypting, analyzing, and re-encrypting flows is not always feasible either, and it compromises data integrity and privacy. Encrypted Traffic Analytics (ETA) provides insight into encrypted traffic through passive monitoring, without decryption. This insight makes it possible to identify and detect malware present in encrypted network connections.

Cisco Secure Network Analytics

Encrypted Traffic Analytics (ETA) extracts four data elements: the initial data packet (IDP), the sequence of packet lengths and times, the byte distribution, and TLS-specific features. These four data elements are embedded as vendor-specific data elements in enhanced NetFlow records and sent to the Cisco Secure Network Analytics engine for further analysis. Secure Network Analytics analyzes these new data elements by applying machine learning algorithms and statistical modeling techniques to pinpoint malicious patterns in the encrypted traffic and help identify threats.
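
To make two of these data elements concrete, the hedged Python sketch below computes a sequence of packet lengths and times (SPLT) and a payload byte distribution offline from a packet capture using scapy. It is a teaching aid for the feature definitions, not Cisco's on-switch ETA implementation.

```python
from collections import Counter
from scapy.all import rdpcap, TCP, Raw

packets = rdpcap("tls_flow.pcap")   # hypothetical capture of a single TLS flow

splt = []              # (payload length, inter-arrival time) pairs
byte_dist = Counter()  # histogram over byte values 0..255
prev_ts = None

for pkt in packets:
    if TCP not in pkt or Raw not in pkt:
        continue                      # skip packets with no TCP payload
    payload = bytes(pkt[Raw].load)
    ts = float(pkt.time)
    splt.append((len(payload), 0.0 if prev_ts is None else ts - prev_ts))
    byte_dist.update(payload)
    prev_ts = ts

print("SPLT (first 10 packets):", splt[:10])
print("Most common byte values:", byte_dist.most_common(5))
```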

Cisco Catalyst 9800 Wireless Controllers support exporting these enhanced NetFlow records to the Secure Network Analytics engine for further analytics and threat detection.

Securing the Network using Umbrella 


Cisco Umbrella is a cloud-delivered network security service that uses DNS queries to determine whether traffic to a particular domain should be allowed, dropped, or inspected further. Because it works at the DNS layer, Umbrella stops threats across all protocols and ports, and it stops malware and attackers in real time when infected machines connect to the network. Cisco Umbrella uses evolving big data and data mining methods to proactively predict attacks and performs category-based filtering.

Cisco Umbrella Solution with Catalyst 9800

Cisco Catalyst 9800 Wireless Controllers and Cisco Embedded Wireless Controllers (EWC) support the Umbrella network security service. The WLC intercepts DNS requests from clients and redirects the queries, along with the identity, to the Cisco Umbrella server in the cloud. The Umbrella service resolves the query and enforces category-based filtering rules on a per-identity basis. Policies can be enabled on individual SSIDs, with a separate policy for each SSID.
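
As a small illustrative check of DNS-layer enforcement, the Python sketch below resolves a domain directly against Umbrella's public resolvers (using the dnspython library) and reports whether the answer looks like a block-page address. The block-page address range varies by deployment, so the check shown is an assumption to adapt, not a documented contract.

```python
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["208.67.222.222", "208.67.220.220"]  # Umbrella resolvers

def looks_blocked(domain: str) -> bool:
    try:
        answers = resolver.resolve(domain, "A")
    except dns.resolver.NXDOMAIN:
        return False
    # Blocked domains typically resolve to an Umbrella block-page address;
    # 146.112.0.0/16 is an assumption -- verify the range for your deployment.
    return any(a.to_text().startswith("146.112.") for a in answers)

print(looks_blocked("internetbadguys.com"))   # well-known Umbrella test domain
```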

Umbrella can be enabled from the DNA Center dashboard and actively monitored using a simple statistics dashboard. Umbrella enables customers to block threats at the DNS layer and secures endpoints and users.

Nano Segmentation with User-Defined Network


Because wireless is a shared network, there is no easy way for end-users to communicate only with their own devices, or to deterministically discover and limit access to only the devices that belong to them. This raises security concerns, wherein a user may knowingly or unknowingly take control of devices that belong to other users.

Cisco User Defined Network

Cisco’s User Defined Network (UDN) enables simple, secure, and remote onboarding of wireless clients onto a shared network environment to give a personal network-like experience. UDN allows end-users to create their own personal network, consisting of only their devices, thereby enabling end-user-driven network segmentation. This helps limit service discovery scope, realize traffic segmentation, and enforce access control policies at a group level. This creation of nano-segments in a shared medium provides a closed group. In addition, UDN’s ability to invite other trusted users into a personal network lets users collaborate and share their devices with a select set of friends.

Cisco’s UDN solution is supported on Catalyst 9800 controllers in centrally switched mode. The wireless controller, along with the APs, enables containment of link-local multicast and broadcast traffic, and optionally unicast traffic. The solution works with both Cisco and third-party switching infrastructure, and with both IPv4 and IPv6 traffic. The SSID on which a UDN profile is enabled must either have MAC filtering enabled or be an 802.1X SSID.

Security is the first line of defense for protecting users, devices, and data from malicious actors. The wireless security solutions discussed above detect threats and protect users, devices, applications, and data. In addition, they enable traffic segmentation along with end-to-end user and application visibility and network analytics.

Source: cisco.com