Monday 31 May 2021

Service Provider Digital Initiatives Drive Sustainable Business Value


Service Provider Digital Maturity Index

In 2018, IDC developed the Service Provider Digital Maturity Index to define five levels of SP digital maturity. This index provides a roadmap to help SPs assess the progress of their digital journey versus their desired end state. The development of the Service Provider Digital Maturity Index was driven by IDC’s Service Provider Digital Readiness Survey, which analyzed the digital initiatives of 400 SPs worldwide and the business value derived from these efforts. The index measures digital maturity across seven SP domains (See Figure 1).


Figure 1. SP Seven Domain Model

In 2021, IDC conducted an updated study that produced a solid basis of comparison with 2018 results and provided the ability to identify where SPs have made progress and where challenges still exist, both at an overall level and within specific domains.

As SPs embarked on their digital journey, there were three key common business outcomes that all SPs were trying to achieve: improved customer experience, revenue growth/profitability, and development of new sources of revenue. The surveys conducted in 2018 and 2021 consistently show that Pioneers, which correspond to the highest level of digital maturity, enjoyed significant improvements in areas considered most strategic for SPs.

The 2021 survey results revealed that Pioneer SPs experienced the most significant business performance gains. They not only reported improved operational metrics such as reduced costs and process cycle times but, importantly, also reported improvements in key business outcomes such as revenue, profitability, customer satisfaction, and customer retention. Figure 2 depicts the most notable business improvements for Pioneer SPs compared to Ad-Hoc SPs, which correspond to the lowest level of digital maturity.

Figure 2. Pioneer SP Business Outcome Improvement


2021: The Evolution of SP Digital Maturity


In the three years since IDC developed the 2018 Service Provider Digital Maturity Index, several market dynamics have impacted SP strategies. These include an increased focus on customer experience, the SP drive to reduce costs, and increased competition from traditional and non-traditional players. These factors helped shape SPs’ digital strategies and initiatives. For the 2021 survey, we observed the following three key changes from 2018 related to SP digital transformation readiness.

1. The Role and Influence of IT

In 2018, most SPs had only a limited number of digital initiatives and had no real digital strategy.  According to the 2018 survey, 62% of organizations had developed DX (digital transformation) task teams responsible for driving individual DX projects (as there were no DX programs back then). Yet, most initiatives (76%) were driven by senior business leadership. IT primarily had a supporting role with responsibility only for implementing technologies related to DX projects. When it came to driving DX projects, IT ranked third behind business leadership and special DX organizations. In 2021, the roles for driving DX initiatives have shifted; IT has become the primary enabler (for 66% of DX initiatives), followed by specialized groups (30%) and senior business leaders (25%).

2. Shifting Business Priorities

In 2018, SPs were trying to recover from a couple of lean revenue years as demand for services shifted. In the 2018 survey, IDC asked SPs to rank the reasons they undertook DX initiatives. Improving customer experience (#1) and driving revenue growth (#2) topped the list. Then COVID-19 happened, and SP businesses shifted their priorities. In 2021, revenue growth has dropped to #4, giving way to a focus on organizational efficiency (#1) and operational efficiency (#2). Customer experience is #3.

3. Challenges Are Less Daunting

In 2018, IDC asked respondents, “what are your top three challenges in meeting your Digital Transformation (DX) priorities?” A slight majority of SPs – 55% – replied, “our culture is too risk-averse.” SPs appear to be less risk-averse now and are committed to achieving business goals through their DX initiatives. Today’s top challenges are more structural: #1: their organizations are siloed, and #2: they do not yet have the right people/skills in-house. In 2021, SPs realize that organizational and cultural changes are needed to successfully execute their digital initiatives.

COVID-19 Impact


The COVID-19 pandemic has by far had the most significant impact on SPs’ digital strategies since 2018. The pandemic created a shift in business and consumer behavior for SPs that led to a greater dependence on secure network connectivity. With countries on lockdown and organizations worldwide shifting to a work-from-home model, SPs experienced a significant increase in demand for bandwidth for connectivity services.

IDC’s Service Provider Digital Readiness research tightly correlates digital maturity with improved business outcomes. The results of this year’s study revealed that Pioneer SPs had implemented digital technologies and created a level of business resiliency that enabled them to respond more quickly to the effects of the pandemic. According to IDC research, 73% of Pioneers were exceptionally prepared for COVID-19 compared to only 15% of all other SPs.

Source: cisco.com

Saturday 29 May 2021

Cisco Secure Firewall insertion using Cisco cAPIC in Azure

In today’s world, enterprises are undergoing a transition to innovate rapidly, keep up with the competition, and increase application agility to meet ever-changing customer demands. To achieve these goals, they often adopt a hybrid cloud infrastructure approach, choosing different infrastructure environments to deploy different types of applications. Some applications are best suited for hosting on-premises, whereas others are better suited for hosting in the public cloud. Thus, hybrid cloud is the new normal for many organizations. However, in a hybrid cloud environment, the challenge is to maintain a uniform enterprise operational model, comply with corporate security policies, and gain visibility across the hybrid environments.

Read More: 300-710: Securing Networks with Cisco Firepower (SNCF)

Cisco Cloud Application Centric Infrastructure (Cisco Cloud ACI) is a comprehensive solution that provides:

◉ simplified operations

◉ consistent security policy management

◉ visibility across multiple on-premises data centers and public clouds or hybrid cloud environments

◉ unified security policy for the hybrid cloud

◉ extension of on-premises layer-7 security to the public cloud

In an on-premises Cisco ACI data center, Cisco Application Policy Infrastructure Controller (APIC) is the single point of policy configuration and management for all the Cisco ACI switches deployed in the data center. Cisco ACI Multi-Site Orchestrator (MSO) provides a seamless way to interconnect multiple Cisco ACI data centers. MSO is a software solution representing a single point of policy orchestration and visibility across multiple geographically dispersed ACI sites.

Cisco Cloud APIC runs natively on supported public clouds to provide automated connectivity, policy translation, and enhanced visibility of workloads in the public cloud. Cisco Cloud APIC translates all the policies received from MSO and programs them into cloud-native constructs such as VNets (Virtual Network), application security groups, network security groups, outbound rules, inbound rules, etc. This new solution brings a suite of capabilities to extend on-premises data centers into true hybrid cloud architectures, helping drive policy and operational consistency regardless of where your applications reside. Also, it provides a single point of policy orchestration across hybrid environments, operational consistency, and visibility across clouds.

Figure 1: Cisco ACI architecture for hybrid cloud

Figure 1 above shows the overall high-level architecture of Cisco Cloud ACI, with Cisco ACI Multi-Site Orchestrator acting as a central policy controller managing policies across on-premises Cisco ACI data centers as well as the Azure environment, with each cloud site abstracted by its own Cloud APIC.

Traditional firewall integration in on-prem Data Centers


To enable scalable and manageable network security in larger data center networks, on-prem Cisco Secure Firewalls (ASA and FTD) are integrated as “unmanaged” firewall (Cisco ASAv and FTDv/NGFWv) devices into existing ACI deployments. While existing ACI contracts can be easily leveraged for enforcing security policies within a single network security zone, insertion of ASA/FTD firewalls allows for segmented workload security for inter-zone traffic, thus reducing the load on leaf ACI switches.

Hybrid Cloud


The modern data center is a hybrid ecosystem, where some applications reside in classic on-prem environments, others are hosted in public cloud environments, and some are co-located in both. Cisco Cloud ACI provides a uniform mechanism for data center operations, policy management, and visibility across a data center environment spanning multiple on-prem, cloud, and hybrid infrastructure components. To seamlessly navigate between ACI-aware data centers and cloud-native environments like AWS or Azure, the Cisco Cloud Application Policy Infrastructure Controller (cAPIC) functions as a universal translator that maps ACI-specific constructs (like EPGs, contracts, and service graphs) into CSP-specific constructs (like VNets, network security groups, and application security groups).

End-point groups (EPGs) represent applications running in the cloud, on-prem, or hybrid environments. Service graphs represent L4-L7 devices inserted between EPGs, with ACI contracts and filtering rules defining inter-EPG communication scope and boundaries. cAPIC uses user-defined routing (UDR) and automatically derives network- or application-centric security rules based on the specific policy configuration and contracts that apply to different EPGs. While cAPIC automatically configures the network needs of most elements in a service graph, cloud-native firewalls (like on-prem firewalls in a traditional ACI-aware data center) are treated as unmanaged entities, with the firewall configuration managed outside of cAPIC.

NOTE: Granular and accurate mapping between these two network policy models is crucial to ensure the correct deployment of network policies across Cisco ACI and Microsoft Azure. Figure 2 below shows how Cloud APIC handles this policy mapping.

Figure 2: Cisco ACI Policy Model to Microsoft Azure Policy Model mapping
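
To make the mapping in Figure 2 more concrete, the sketch below shows, in plain Python, the kind of translation Cloud APIC performs from ACI constructs to Azure constructs. The contract structure and field names are invented for illustration; the real translation logic is internal to cAPIC.

# Illustrative only: a schematic of the translation Cloud APIC performs.
# The contract structure and field names here are hypothetical.

ACI_TO_AZURE = {
    "tenant/VRF": "virtual network (VNet)",
    "EPG":        "application security group (ASG)",
    "contract":   "network security group (NSG) rules",
    "filter":     "NSG rule protocol/port fields",
}

def contract_to_nsg_rules(contract):
    """Expand a simplified ACI-style contract into NSG-style rule dictionaries."""
    rules = []
    for i, subject in enumerate(contract["subjects"]):
        rules.append({
            "name": f"{contract['name']}-{i}",
            "direction": "Inbound",
            "access": "Allow",
            "protocol": subject["protocol"],            # e.g. "Tcp"
            "destination_port_range": subject["port"],  # e.g. "443"
            "source_asg": contract["consumer_epg"],     # consumer EPG maps to source ASG
            "destination_asg": contract["provider_epg"],
        })
    return rules

web_contract = {
    "name": "web-to-app",
    "consumer_epg": "web-epg",
    "provider_epg": "app-epg",
    "subjects": [{"protocol": "Tcp", "port": "443"}],
}
print(contract_to_nsg_rules(web_contract))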

Securing Azure with virtual ASA and FTD solutions


Cisco has validated an architecture for ASAv and NGFWv insertion in Azure using Cisco cAPIC L7 service insertion. The following deployment scenarios were validated as part of this effort.

◉ Multi-node (NGFWv LB sandwich)
◉ North/South and East/West traffic flow
     ◉ Spoke to Internet (N/S)
     ◉ Spoke to Spoke (E/W)
     ◉ Inter-region Spoke to Spoke (E/W)
     ◉ Internet to Spoke (N/S)
◉ Multi-AZ and Multi-Hub Architecture

Use case 1: Spoke to Internet (N/S traffic flows)


Test Scenario: Traffic from the workload destined to the internet is forwarded to Azure internal load balancer (ILB). ILB load balances traffic from the consumer EPGs to the internet through multiple Cisco Secure Firewalls (NGFWv).

Figure 3: Spoke to Internet (N/S traffic flows)

The above network topology depicts Cisco Secure Firewall in the hub VNET (overlay 2) in Azure. We have a service graph with ILB redirect to Cisco Secure Firewalls.

Traffic Flow

◉ The consumer sends traffic to the ILB.
◉ The ILB receives the traffic and forwards it to the firewall.
◉ The firewall receives the traffic, applies its security policy, and sends it out via the outside interface. Outbound traffic is SNATed on the firewall.

Consumer ——> NLB [redirect] + FTD [SNAT] ——> Internet
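
In a cAPIC-managed deployment this redirect is programmed automatically from the service graph, but the sketch below shows roughly what the resulting user-defined route would look like if programmed directly with the Azure SDK for Python. The subscription ID, resource group, region, and ILB front-end IP are placeholders.

# Minimal sketch (not the cAPIC workflow itself): programming the kind of user-defined
# route that steers spoke egress traffic to the firewall ILB front end.
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "spoke-rg"
ILB_FRONTEND_IP = "10.1.2.4"   # hypothetical internal load balancer VIP in the hub VNet

client = NetworkManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

route_table = client.route_tables.begin_create_or_update(
    RESOURCE_GROUP,
    "spoke-egress-rt",
    {
        "location": "eastus",
        "routes": [
            {
                # Send all Internet-bound traffic to the ILB in front of the NGFWv pair
                "name": "default-to-firewall",
                "address_prefix": "0.0.0.0/0",
                "next_hop_type": "VirtualAppliance",
                "next_hop_ip_address": ILB_FRONTEND_IP,
            }
        ],
    },
).result()

print(f"Route table {route_table.name} programmed with {len(route_table.routes)} route(s)")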

Use case 2: Spoke to spoke multi-node inter-VPC, intra-region traffic flow enablement


Test scenario: Traffic from the consumer EPG to the provider EPG is load-balanced through multiple Cisco Secure Firewalls.

Figure 4: Spoke to spoke multi-node inter-VPC, intra-region traffic flow enablement

The above network topology depicts Cisco Secure Firewall in the hub VNET (overlay 2) in Azure. We use a service graph with a network load balancer redirect, Cisco Secure Firewall, and an application load balancer.

Traffic flow

◉ The consumer sends traffic to the ILB.
◉ The ILB receives the traffic and forwards it to the firewall.
◉ The firewall receives the traffic, applies its security policy, and sends it to the ALB.
◉ The ALB then sends it to the provider (workloads).

Consumer ——> NLB [redirect] + FTD [SNAT] ——> [ ALB ——> Multiple Providers ]

Source: cisco.com

Thursday 27 May 2021

Application Aware Networking with Cisco SD-WAN

Part 1: Seamless Connectivity From Anywhere

Today’s applications are no longer confined behind the walls of data centers. They are migrating to the cloud and being offered as Software-as-a-Service (SaaS). These applications are built on a distributed architecture model and run across different systems. This puts new demands on the network to become more application aware, programmable, and intelligent to deliver the best application experience to any user on any device, anywhere.


In a Multicloud world, networks must become more application aware, programmable, and intelligent.
To be effective, modern networks must provide:

1. Dynamic, Seamless Connectivity from Anywhere
2. Programmable Functionality for optimal Quality of Experience
3. Adaptive Responsiveness to Application Health & Needs

Part 1 of this blog series will focus on providing seamless connectivity from anywhere. In part 2, we will explore how applications can “program” and cooperatively work with the network to provide a higher quality of experience by leveraging telemetry and other network health insights.

Cloud Networking


Enterprises are rapidly adopting a multi-cloud network architecture where applications migrate from datacenters to hybrid clouds based on specific organizational needs. To achieve this, they are seeking new solutions that facilitate the implementation and orchestration of a secured and optimized hybrid multi-cloud network that is independent of the physical topology. Ultimately, they want maximum flexibility with minimal disruption for as little effort as possible.

The variety of technologies used across clouds makes cloud interconnection quite challenging.

With public cloud adoption moving at an unprecedented pace, cloud service providers (CSP) attract customer workloads and applications onto their clouds through differentiated services and capabilities. The variety of technologies used across clouds makes cloud interconnection quite challenging as cloud networking architects must constantly learn new techniques to leverage each CSP’s specific networking features and functions. To simplify management, network professionals need products and solutions that can abstract these cloud networking differences through a single orchestration layer that bridges the different cloud systems.

SaaS


SaaS applications by nature are elastic and can serve users from anywhere. The modern network must not only provide connectivity to the sites where SaaS applications are hosted but be capable of dynamically selecting the site that will deliver the optimal SaaS experience to the user as well.


Application-Driven SD-WAN for Optimized Office 365 Experience

Cisco SD-WAN: The Bridge to the Cloud


Cisco enables enterprises to take advantage of SD-WAN “overlay” networks to seamlessly connect users, machines, and applications across clouds and data centers. Moreover, our SD-WAN solution is fully aware of SaaS applications and provides an optimal path to them by programming the network with best path selection and adjusting it based on a combination of application and network telemetry.

Cisco SD-WAN with Azure Cloud

In partnership with Amazon AWS, Microsoft Azure, and Google GCP, we are enabling and delivering a superior application experience to securely connect users and devices to any cloud from anywhere.  By leveraging the inherent programmability of our SD-WAN fabric and that of our cloud partners, we have created a simplified yet highly consistent workflow that connects two endpoints while abstracting the differences of each underlying cloud.

Intent-Based SD-WAN Policy with Google Service Directory

Automated site-to-AWS-cloud connectivity, including Transit Gateway orchestration, security policy, and network segmentation, using Cisco SD-WAN vManage. Full visibility into inter-regional transit traffic and telemetry with TGW Network Manager.

In partnership with Megaport and Equinix, we can establish high-bandwidth connectivity to virtual private clouds.

Establishing high-bandwidth connectivity to virtual private clouds with Megaport and Equinix.

This is achieved in a matter of minutes via AWS Direct Connect and Azure ExpressRoute (a minimal automation sketch follows the list):

◉ Automatic discovery of the VPC/VNET and other cloud assets is based on the account credentials of the user.

◉ The user selects which private network and cloud network need to be connected. The automation engine takes care of all system activities, including:

     ◉ Automated deployment of the required gateways in the cloud and partner infrastructure to build the fabric.
     ◉ Provisioning of Cloud Gateways (TGW, Virtual Network Gateway) and setup of the route peering relationship between the SD-WAN fabric and the cloud network.
     ◉ Programming of the desired policies, including security across cloud and customer sites.

◉ The system automatically starts monitoring the new network setup and its components. It also extends assurance capabilities to the network admin for an optimal user experience and network connectivity.
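
As a rough illustration of the API-driven automation behind these steps, the sketch below authenticates to a vManage instance and pulls the device inventory with Python. The host and credentials are placeholders, and a real Cloud OnRamp workflow involves many more API calls than this; the endpoints shown follow the publicly documented vManage REST API.

# Minimal sketch of vManage API access, assuming a reachable lab instance.
import requests

VMANAGE = "https://vmanage.example.com"   # hypothetical host
session = requests.Session()
session.verify = False                    # lab only; use proper certificates in production

# Form-based login used by vManage
session.post(
    f"{VMANAGE}/j_security_check",
    data={"j_username": "admin", "j_password": "password"},
)

# Pull the device inventory to confirm fabric health before/after onramp changes
devices = session.get(f"{VMANAGE}/dataservice/device").json().get("data", [])
for d in devices:
    print(d.get("host-name"), d.get("device-type"), d.get("reachability"))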

The figure below shows the high-level network architecture used to achieve seamless connectivity between customer sites and clouds.

The high-level network architecture used to achieve seamless connectivity between customer sites and clouds.

Cisco SD-WAN: The Bridge to SaaS


Access to applications is frequently over the public Internet. This can cause significant variation in the user experience based on their location and the specific site serving the SaaS application. Cisco SD-WAN has additional intelligence that enables it to discover all the different sites that serve a SaaS application. This enables the system to identify the optimal path to these sites based on the user location. Path selection is dynamic and takes into account the current status of each site’s connectivity. While direct internet access often provides the best experience, if the closest site is saturated or overloaded, then the system can select an alternate site to more efficiently serve the application to the user over the SD-WAN fabric.
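
As a toy illustration of this idea, the sketch below scores candidate paths from loss and latency telemetry and picks the best one. The scoring formula and the numbers are invented for the example and are not the actual Cisco SD-WAN quality-of-experience computation.

# Toy illustration of telemetry-driven best-path selection; formula and values are invented.
def path_score(loss_pct, latency_ms):
    """Higher is better: penalize loss heavily, latency moderately."""
    return max(0.0, 10.0 - (loss_pct * 2.0) - (latency_ms / 50.0))

candidate_paths = {
    "direct-internet-access": {"loss_pct": 0.5, "latency_ms": 40},
    "sdwan-to-regional-hub":  {"loss_pct": 0.1, "latency_ms": 70},
    "sdwan-to-backup-hub":    {"loss_pct": 0.2, "latency_ms": 120},
}

best = max(candidate_paths, key=lambda p: path_score(**candidate_paths[p]))
for name, metrics in candidate_paths.items():
    print(f"{name:25s} score={path_score(**metrics):.2f}")
print("selected:", best)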

To provide optimal connectivity and the best application experience possible in our cloud and SaaS era, a flexible, intelligent, and programmable network is required. The network plays a crucial role in providing the bridge to the delightful application experience that business professionals and consumers have become accustomed to and now demand 100% of the time.

In this blog we have discussed how a flexible and programmable network provides seamless connectivity to the hybrid cloud and to SaaS. In addition, we have shared how we have solved the challenges of network disparity across clouds by providing key integration, abstraction, and automation of cloud and networks via Cisco SD-WAN. In the next blog in this series, we will explore Quality of Experience and how it can be best served by an intelligent SD-WAN network.

Source: cisco.com

Tuesday 25 May 2021

The Best 3 New Tools We’ve Added to the DevNet Tech Stack


Cisco DevNet is the platform on which we host Cisco’s constantly updated API documentation, 1,100+ Code Exchange entries, 54 DevNet Sandboxes, and 740 Learning Labs. Here at DevNet, we believe in practicing what we teach. So our platform is built on an evolving, modern hybrid infrastructure. We’ve learned a lot in building and refining this ambitious collection of services.

DevNet is built on over 100 on-prem and cloud-based microservices, updated in real-time by a team of engineers, designers, and content creators.  Of the tools we use to run this site, there are a few new ones that are proving extremely useful. Here are the top 3 new tools we use, and some interesting things we’ve learned about them.

It was not always so spiffy

As I said, DevNet is based on an evolving architecture. But when we launched the resource, in 2014, it was all hosted in virtual machines on a server in a data center. That system ran out of headroom quickly during our growth years. It wasn’t scaling to keep up with user growth, and it was increasingly hard for us to add features to the service.

In 2017, we moved DevNet over to a cloud-hosted microservice architecture using Kubernetes. We now use Prometheus, Elasticsearch, Kibana, Grafana, and other cloud-native platforms. With the flexibility inherent in this setup, and using a DevOps development model, we have found it much easier to add new services and applications to DevNet, as well as to add capacity to keep up with increasing user demand.

Recently, we’ve been driving to enhance application performance and user experience on DevNet. That means working with performance monitoring, full-stack observability, and other solutions across the stack. So I want to focus on tools that make this easier, and what we’ve learned from them.

ThousandEyes

Developer.cisco.com serves a global community, and the performance and experience can vary a lot based on where the users are. Anyone who is responsible for a large-scale global deployment knows that localized performance issues can occur due to service outages, localized connectivity issues, or countless other issues. We use ThousandEyes Cloud Agents, which give us visibility into the underlying network paths our applications use, from our users’ endpoints to our hosted applications in the cloud. Cloud Agents allow us to understand the user experience in specific global regions.

It’s important to us that ThousandEyes’ “synthetic” testing is not at all theoretical. It’s based on real-time internet monitoring and is extremely valuable to us.

ThousandEyes Path Visualization View for DevNet. This shows the network paths discovered when ThousandEyes agents visit the DevNet application page from different locations around the world.

We also use ThousandEyes synthetic transaction tests to automate UI testing, a process that would otherwise be repetitive and manual. ThousandEyes has a Recorder and IDE we can use to record users’ browser interactions, which then creates Transaction tests that integrate into a CI/CD pipeline.  As we push changes into our repo, ThousandEyes can execute tests continuously. And this continuous UI testing happens in the global regions users connect from.

ThousandEyes solves very practical problems for us and we continue to see new ways to apply it. For example, ThousandEyes alerts can be pushed into the AppDynamics interface. This consolidates network and application performance issues into one pane of glass, allowing NetOps and AppOps to break silos in quickly identifying the root cause of issues.
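
For teams that want to correlate this data with their own dashboards, ThousandEyes also exposes a REST API. The sketch below pulls the list of configured tests with Python; the credentials are placeholders, and the endpoint and response shape shown follow the public v6 API, which may differ from your account’s setup.

# Sketch only: list configured ThousandEyes tests so results can be correlated elsewhere.
import requests

TE_EMAIL = "user@example.com"      # placeholder credentials
TE_API_TOKEN = "<api-token>"

resp = requests.get(
    "https://api.thousandeyes.com/v6/tests.json",
    auth=(TE_EMAIL, TE_API_TOKEN),
    timeout=30,
)
resp.raise_for_status()

for test in resp.json().get("test", []):
    print(test.get("testId"), test.get("type"), test.get("testName"))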

AppDynamics (“AppD”)


To optimize user experience, we need to know where our applications can be performance-optimized. Previously, we were using Prometheus with a custom Grafana dashboard to monitor our Kubernetes cluster – including each microservice’s usage of CPU, Memory, and other resources. But this combination did not provide the performance data that we really needed to get insight into our applications to improve them.

Today, with Cisco AppDynamics, we get extremely granular and valuable data, like the response time from individual API calls, or the time for full business transactions that employ microservices  and databases – for example, delivering educational content that calls our Learning Lab services, our video system, our quiz engine, our progress tracking microservice and our MySQL server. AppD gives us monitoring from the end user experience of our applications down into the infrastructure in the data center.

This level of intelligence is incredibly valuable. It gives us the opportunity to optimize the responsiveness of user experience by focusing our efforts in precisely the right places.

AppD offers different ways to meter and monitor various parts of the application stack. For example, we use its Javascript agent to monitor the browser application UI. We also use AppD’s SDK for Go for most of our microservices, which are written in Golang. We use a few different database services, like Cassandra and MongoDB; AppD has agents for those processes.

With this suite of monitoring functions, we can monitor end-to-end application performance, which is incredibly valuable to us. It helps us pinpoint performance issues quickly, and makes it easier to create new high-performing services.

DevNet Application Performance Monitoring Using AppD. This diagram shows how DevNet leverages AppD for full performance monitoring from the frontend, to the backend and database.
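
The snippet below is not the AppDynamics SDK; it is a plain-Python timing decorator that illustrates the kind of per-call, per-business-transaction response-time data an APM agent gathers automatically. The transaction and function names are made up.

# Not the AppDynamics SDK: a generic timing decorator illustrating per-call measurement.
import functools
import time

def timed(business_transaction):
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                print(f"[{business_transaction}] {fn.__name__} took {elapsed_ms:.1f} ms")
        return inner
    return wrap

@timed("deliver-learning-lab")
def fetch_lab_content(lab_id):
    time.sleep(0.05)   # stand-in for calls to the quiz engine, video system, MySQL, etc.
    return {"lab": lab_id}

fetch_lab_content("intro-to-rest-apis")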

Istio Service Mesh


Last year, we started testing Istio in our non-production environment. We have been using it to manage the microservices our platform is built on. We found its benefits compelling: it helps us manage, balance, and troubleshoot our increasingly complex web of software.

However, it also takes time for people to learn to use this tool. And it’s a high-stakes game, as misconfigurations in a service mesh can have profound impacts on networks and applications.

But we have found that it is worth the time investment in learning to use the tool. Istio has become one of our key diagnostic and management tools. We use it to analyze the impact on inter-service traffic from what can be subtle changes in microservice configurations. It allows us to apply automation to shift workloads when needed – securely and without having to rebuild the connections between services.

Istio also allows us to configure policy-based access control across microservices, again under programmatic control.

There is one pilot in particular that has a lot of promise: we are using Istio in A/B testing, to dynamically divide traffic in various proportions between stable and new apps and configurations. It makes configuring and running tests much easier and more efficient, and thus is helping us improve the pace of our application development. It frees up engineers so they can get to work on other projects more quickly.
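
For a sense of what that traffic splitting looks like, here is a minimal sketch that applies a weighted Istio VirtualService through the Kubernetes Python client, sending 90% of traffic to a "stable" subset and 10% to a "canary" subset. The namespace, host, and subset names are hypothetical, and matching DestinationRule subsets are assumed to already exist.

# Sketch of weighted A/B routing with Istio, applied via the Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

virtual_service = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "VirtualService",
    "metadata": {"name": "learning-labs-ab", "namespace": "devnet"},
    "spec": {
        "hosts": ["learning-labs"],
        "http": [{
            "route": [
                {"destination": {"host": "learning-labs", "subset": "stable"}, "weight": 90},
                {"destination": {"host": "learning-labs", "subset": "canary"}, "weight": 10},
            ]
        }],
    },
}

api.create_namespaced_custom_object(
    group="networking.istio.io",
    version="v1beta1",
    namespace="devnet",
    plural="virtualservices",
    body=virtual_service,
)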

Source: cisco.com

Saturday 22 May 2021

Choosing the Best Overlay Routing Architecture for EVPN

A Comparative Analysis of Architectures and Their Trade-offs

Organizations have many options when implementing an Ethernet Virtual Private Network (EVPN). Choice of network design with respect to overlay routing may be influenced by a combination of factors, including scalability, multi-tenancy, segmentation, and operational simplicity. Understanding the key differences among various overlay routing architectures makes it possible to evaluate and choose an implementation that offers a best fit for an organization.

Read More: 200-901: Developing Applications and Automating Workflows using Cisco Core Platforms (DEVASC)

This blog post compares the trade-offs across different overlay Anycast routing architectures in the context of overlay networks deployed using EVPN Integrated Routing and Bridging. It covers Centralized Routing, Distributed Asymmetric Routing, and Distributed Symmetric Routing. Note that this discussion is independent of data plane encapsulation and applies equally to IP and MPLS tunnel data paths.

Overlay Networks

Overlay networks have become ubiquitous across enterprise, data center, and service provider network architectures. They enable deployment of a simple non-blocking IP routed infrastructure with the flexibility to deploy multi-tenant unicast and multicast services on top. Overlay endpoints or workloads may be placed or moved anywhere across a non-blocking fabric, independent of overlay addressing and subnet assignments. A flexible and scalable IP Clos fabric provides reachability across edge and border devices. A VPN tunnel mesh across edge and border devices provides overlay connectivity between  connected endpoints (see Figure 1).


Figure 1: VPN overlay with simple non-blocking routing infrastructure.

There may be additional factors, including security and traffic engineering policies, to consider when deploying an overlay across different use cases. Reachability, however, is the least common denominator across all overlay use cases. For flexible workload placement and mobility that is independent of addressing and subnetting constraints, a multi-tenant overlay network must provide reachability across:

◉ Tenant endpoints within an IP subnet,
◉ Tenant endpoints in different IP subnets.

As intra-subnet overlay connectivity is enabled via layer 2 VPN bridging services deployed across fabric edge and optionally border devices, multiple options exist for overlay routed connectivity between endpoints in different subnets. The following will detail and compare trade-offs across three overlay Anycast routing architectures:

1. Centralized Routing
2. Distributed Asymmetric Routing
3. Distributed Symmetric Routing

1 – Centralized Anycast Routing Architecture


A centralized routing model connects endpoints to layer-2 EVPN gateways (GW) that provide VPN bridging. This enables intra-subnet flows across the overlay while all routing to endpoints in different subnets, within and outside the fabric, is centralized via designated Integrated Routing and Bridging (IRB) L2+L3 GWs.

First-hop routing for each overlay subnet is deployed using a subnet Anycast GW that is hosted on one or more designated IRB GW nodes. A key attribute defining this overlay routing architecture is that first-hop routing function for an overlay subnet is decoupled from the EVPN L2-GW edge that provides intra-subnet bridging service for that subnet. This decoupling results in first-hop routing for overlay endpoints across the fabric being “centralized” on designated IRB nodes. Note that the Anycast GW for each subnet is still distributed across these “centralized” IRB GW nodes.

It is common to deploy first-hop Anycast routing for all overlay subnets in a fabric on the same set of IRB nodes. While not necessarily required, this is often done for operational simplicity and optimal routing. It is also common for this first-hop routing function to be hosted on border nodes that also act as interconnect GWs to external L2 or L2/L3 domains. Optionally, these IRB nodes may also function as edge nodes and connect to local overlay endpoints, resulting in the model shown in Figure 2.


Figure 2: EVPN Centralized Routing Deployment Model

Control Plane Operation

A centralized approach essentially uses an EVPN overlay as a layer-2 VPN overlay, with the inclusion of the host IP along with the host MAC being optional in EVPN host route advertisements (see Figure 3). The host route is advertised by the egress L2 GW with layer 2 attributes that, on the ingress L2 GW and on the centralized IRB GW, result in:

◉ Import of the host MAC to the MAC VRF in the control plane.
◉ Host MAC reachability via layer-2 VPN encapsulation and tunnel to the egress GW.


Figure 3: Control plane operation with centralized routing.

In addition, IRB GW nodes also install layer-3 adjacencies to the remote host IP.  Host IP to MAC bindings for this purpose may be learnt on the IRB GW via:

◉ The advertising L2 GW learning the host IP via snooping and including it in the EVPN host route advertisement,
◉ OR in the data plane via ARP and ND packets received from the host.

Note that reachability to a remote layer-3 host adjacency is still resolved by host MAC reachability via a layer-2 VPN tunnel to the egress GW. In addition, IRB gateways may also proactively advertise the Anycast GW MAC/IP in the EVPN control plane for the purpose of avoiding duplicate ARP responses from redundant Anycast GWs. On the L2 GW, this results in L2 reachability to Anycast GW MACs in the MAC VRF, and local ARP suppression for Anycast GW IP ARP requests from hosts.

Data Plane Operation

For data plane operation (see Figure 4), intra-subnet flow destined to a remote host is bridged on the ingress L2 GW via a tunnel to the egress L2 GW, with the layer 2 VPN encapsulation advertised by the egress L2 GW. On the egress L2 GW, this layer 2 VPN encapsulation maps to a MAC VRF, where the packet is again bridged to the local host.

Inter-subnet flow destined to Anycast GW MAC is bridged on the ingress L2 GW to one of the centralized IRB GW via tunnel to the IRB GW with layer 2 VPN encapsulation advertised by the IRB GW. Packets are then routed on the IRB GW via layer-3 adjacency to the destination host IP. This results in the packet being encapsulated with the host MAC rewrite that resolves via tunnel to the egress L2 GW and layer 2 VPN encapsulation advertised by the egress L2 GW. On the egress GW, this layer 2 VPN encapsulation maps to the MAC VRF, where the packet is again bridged to the local host.


Figure 4: Data plane operation, showing intra-subnet flow and inter-subnet flow with centralized routing

Control Plane Scalability – Limited by “all subnets on centralized GWs”

Control plane scalability is limited by the fact that each IRB node that is part of the centralized Anycast GW cluster is required to program:

◉ Layer-3 (SVI) interfaces for ALL overlay subnets for which it is a first-hop GW.

◉ Layer-3 adjacencies to ALL overlay endpoints in these subnets.

◉ MAC VRFs for ALL overlay subnets for which it is a first-hop GW.

◉ MAC routes for ALL overlay endpoints in these subnets.

◉ IP host routes for ALL overlay endpoints across the fabric.

◉ Overlay tunnels to ALL edge nodes.

A simple deployment centralizes all overlay subnets on the same set of IRB nodes. In this case, the fabric-wide scale of overlay subnets and endpoints is limited by the IRB device’s individual layer-3 interface, layer-3 adjacency, and MAC route scale. Note that in this model, redundant nodes that are part of the same Anycast GW cluster do not contribute to overall fabric scale, since the same forwarding state needs to be replicated across all Anycast GW nodes.

Control Plane Scalability – At the cost of optimal routing

Alternatively, first-hop routing service for different subnets may be load-shared across multiple centralized Anycast GW clusters to reduce the scale on each IRB node.


Figure 5: First-hop routing for subnets hosted on different IRB nodes.

Figure 5 shows first-hop routing for two subnets hosted on the first two IRB nodes with routing for two other subnets hosted on the other two IRB nodes. However, this may result in a sub-optimal data path with an extra routing hop as shown in Figure 6. It also compromises the operational simplicity of being able to manage routing for all overlay subnets on the same IRB nodes.


Figure 6: This approach may introduce an extra routing hop, resulting in a sub-optimal data path that also compromises the operational simplicity of being able to manage routing for all overlay subnets on the same IRB nodes.

Sub-optimal Data Path – Local inter-subnet flows

The sub-optimal nature of inter-subnet routing in this approach applies to local inter-subnet flows that must always be bridged on the ingress L2 GW to the centralized IRB GW, only to be routed back to the ingress L2 GW. This results in a ‘traffic trombone effect’ (see Figure 7).


Figure 7: The ‘traffic trombone’ effect occurs when local inter-subnet flows that must always be bridged on the ingress L2 GW to the centralized IRB GW are routed back to the ingress L2 GW.

Operational Simplicity

Despite these sub-optimal scaling and data path properties, this approach is still a good trade-off in certain use cases for operational reasons:

◉ This approach provides operational simplicity of provisioning and managing first-hop routing and associated routing policies for all overlay subnets on designated nodes. As an example, for use cases where an overlay subnet is stretched across campus and DC domains, this approach allows you to manage inter-subnet and external routing policies for the subnet at a central point.

◉ Forwarding semantics, being similar to traditional IRB, are simple to understand, deploy, and operate.

◉ EVPN centralized routing design, in principle, aligns with legacy access/distribution layer-2 network design, where routing functionality is centralized and decoupled from layer-2 only access devices. An EVPN layer 2 overlay can be thought of as replacing a traditional layer-2 access network, with EVPN-IRB functionality on centralized distribution nodes being the traditional L2/L3 boundary. It is hence a conceptually easier transition from such legacy architectures.

Centralized Anycast GW Redundancy – just FYI

The Centralized Anycast GW approach across redundant IRB GWs introduces additional complexity that an operator should be aware of:

◉ If L2 GWs only advertise host MAC routes in the EVPN control plane, host layer-3 adjacencies are learnt on the Anycast GW via ARP and ND. Since adjacencies could be learnt on any of the redundant GWs, Anycast GWs must implement additional mechanisms to sync layer-3 host adjacencies across them. Alternatively, L2 GWs must implement MAC-IP learning via snooping and advertise the host MAC and IP via the EVPN control plane for Anycast GW nodes to learn host layer-3 adjacencies via EVPN.

◉ ARP requests for an Anycast GW IP from a host are flooded across the overlay and hence result in multiple ARP responses from redundant GWs. To avoid this, Anycast GWs must advertise the GW MAC-IP bindings upfront via EVPN, and L2 GWs must implement local ARP suppression. In the case of a VXLAN fabric, an Anycast VTEP may also be used across redundant GWs to avoid multiple ARP responses.

2 – Distributed Asymmetric Routing Architecture


The distributed asymmetric approach is a variation of the centralized Anycast routing approach, with the layer 2/3 routing boundary pushed to fabric leaf nodes (see Figure 8). In this approach, first-hop Anycast GW functionality for an overlay subnet is deployed across ALL leaf nodes that now operate as IRB GWs (as opposed to being L2 GWs).


Figure 8: A Distributed Asymmetric Routing Architecture pushes the layer 2/3 routing boundary to fabric leaf nodes.

Control Plane Operation

Much like the centralized IRB approach, this approach also uses the EVPN overlay as a layer-2 VPN overlay. A slight difference is that the host IP is now required in the EVPN host route advertisement, along with the host MAC. Similar to centralized IRB operation, the host route is advertised by the egress GW with layer 2 attributes that, on the ingress GW, result in:

◉ Import of the host MAC to the MAC VRF in control plane.
◉ Host MAC reachability via layer-2 VPN encapsulation and tunnel to the egress GW.

IRB-capable nodes also install layer-3 adjacencies to the remote host IP with IP to MAC binding learnt via host routes. Reachability for remote layer-3 host adjacency is still resolved by host MAC reachability via a layer-2 VPN tunnel to the egress GW.

Data Plane Operation

While this approach enables EVPN routing and bridging functions to be co-located on EVPN leaf nodes, it has the same forwarding semantics as a centralized Anycast GW. The overlay routing function on the leaf IRB GW routes packets directly to the host’s layer-3 adjacency. “Asymmetric” in this context refers to the fact that this results in inter-subnet flows being “routed and bridged” on the ingress IRB GW and “bridged” on the egress IRB GW (Figure 9).


Figure 9: This approach is asymmetric in that inter-subnet flows are “routed and bridged” on the ingress IRB GW, and “bridged” on the egress IRB GW.

Control Plane Scalability – Limited by “all subnets everywhere”

Control plane scalability is even more severely limited by the fact that each IRB leaf node is now required to program:

◉ Layer-3 (SVI) interfaces for ALL overlay subnets in the IP VRF, even if it does not have locally attached hosts in that subnet.

◉ Layer-3 adjacencies for ALL overlay endpoints in these subnets, even if it does not have locally attached hosts in that subnet.

◉ MAC VRFs for ALL overlay subnets in the IP VRF, even if it does not have locally attached hosts in that subnet.

◉ MAC routes for ALL overlay endpoints in these subnets, even if it does not have locally attached hosts in that subnet.

◉ IP host routes for ALL overlay endpoints across the fabric in an IP VRF.

As a result, the fabric-wide scale of overlay subnets and endpoints is limited by each leaf device’s layer-3 interface, layer-3 adjacency, and MAC route scale. Adding more GW devices to the Anycast GW cluster does not mitigate this limitation, as ALL leaf nodes host routing interfaces, layer-3 adjacencies, and MAC routes for ALL subnets and endpoints across the IP VRF.

Optimal Data Path – Local routing

In contrast to centralized IRB, local inter-subnet flows are always routed locally on the ingress GW, while inter-subnet flows across the fabric are always routed directly to the remote host (see Figure 10).


Figure 10: Local inter-subnet flows are always routed locally on the ingress GW. Inter-subnet flows across the fabric are always routed directly to the remote host.

Operational Simplicity – Traditional IRB forwarding

◉ Much like the centralized IRB approach, this approach also uses the EVPN overlay as a layer-2 overlay (akin to a traditional switching fabric). It treats remote IP endpoints as directly connected layer-3 adjacencies. Forwarding semantics, being similar to traditional IRB, are still simple to understand, deploy, and operate.

◉ Pushing the first-hop routing function to EVPN leaf GWs is a shift from traditional centralized routing designs. When migrating a legacy switching design, network designers must view EVPN fabric roles for network devices, independent from traditional access / distribution switching roles.

3 – Distributed Symmetric Routing Architecture


Much like the distributed asymmetric routing architecture, the distributed symmetric approach deploys the first-hop Anycast GW function for an overlay subnet across ALL leaf nodes that operate as IRB GWs. However, for better scalability, symmetric IRB forwarding semantics and control plane operation differ significantly from those of asymmetric or centralized IRB, which use EVPN to build a layer-2 VPN overlay. Instead of routing functionality being achieved via traditional IRB over the layer-2 overlay, the symmetric IRB approach uses EVPN as a single control plane to build:

◉ A layer-2 VPN overlay to enable intra-subnet bridging.
◉ A layer-3 VPN overlay to enable inter-subnet routing.

This additional layer-3 VPN overlay is the key differentiating attribute of a symmetric IRB architecture. It allows restriction of subnet provisioning on edge devices to locally attached subnets. This results in better scaling properties.


Figure 11: The additional layer-3 VPN overlay in a symmetric IRB architecture allows restriction of subnet provisioning on edge devices to locally attached subnets for better scaling properties.

Control Plane Operation

To build an additional layer-3 VPN overlay for inter-subnet routing, EVPN MAC+IP host routes are advertised with additional layer-3 VPN attributes to enable:

◉ Layer-3 VPN import to IP VRF in the control plane.
◉ Layer-3 VPN encapsulation in the data plane.

In summary, a single host route in the control plane is used to signal a layer-3 VPN host route to be installed in the IP VRF and a layer-2 VPN MAC route to be installed in MAC VRF, with the corresponding L3VPN and L2VPN encapsulations.
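
The schematic below captures this "one route, two overlays" idea as a plain Python data structure. The field names and values are illustrative only and do not reflect the actual BGP EVPN wire format.

# Schematic only: a single EVPN MAC+IP (Type-2) advertisement carrying both the
# L2VPN and L3VPN information described above. Field names are illustrative.
from dataclasses import dataclass

@dataclass
class MacIpRoute:
    mac: str
    ip: str
    next_hop: str            # egress GW / VTEP
    l2_label: int            # L2 VNI or MPLS label -> imported into the MAC VRF
    l3_label: int            # L3 VNI or MPLS label -> imported into the IP VRF
    l2_route_target: str
    l3_route_target: str

route = MacIpRoute(
    mac="00:11:22:33:44:55",
    ip="10.1.1.10/32",
    next_hop="leaf-2",
    l2_label=10100,
    l3_label=50001,
    l2_route_target="65000:10100",
    l3_route_target="65000:50001",
)

# One advertisement, two overlays: the receiving GW installs the MAC in the MAC VRF
# using the L2 label/RT and the host route in the IP VRF using the L3 label/RT.
print(route)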

Data Plane Operation

◉ Intra-subnet bridging – As with the asymmetric and centralized approaches, bridging across the layer-2 VPN overlay is accomplished via a layer-2 VPN encapsulation (L2 MPLS label or L2 VNI) that maps to the local MAC VRF. The bridged forwarding plane is identical across all three routing architectures.

◉ Inter-subnet routing – Inter-subnet flows are routed on the source (ingress) GW to the destination (egress) GW next-hop via a tunnel to the egress GW with L3VPN encapsulation. This L3VPN encapsulation is terminated and identifies the IP VRF at the egress GW, where the packet is again routed in the IP VRF to a locally connected endpoint. This routing data path is similar to traditional L3VPN, with the EVPN GWs acting as L3VPN PE devices.


Figure 12: Inter-subnet flows are routed on the source (ingress) GW to the destination (egress) GW next-hop via a tunnel to the egress GW with L3VPN encapsulation.

Control Plane Scalability – No more “all subnets everywhere”

A separate layer-3 VPN overlay allows inter-subnet host reachability on the source GW to be recursively resolved via a L3VPN tunnel to a destination GW next-hop. This differs from the asymmetric and centralized approaches where the source GW relies on layer-3 adjacencies to all remote hosts and their host MAC reachability via layer-2 VPN tunnels to achieve inter-subnet routing. As a result:

◉ The ingress GW no longer needs to be provisioned with routing interface (SVI) for ALL overlay subnets in an IP VRF. It only needs to be provisioned with the SVI interface for locally attached subnets.

◉ The ingress GW no longer has layer-3 adjacencies to ALL overlay endpoints in an IP VRF. It only has host routes for all end points via a tunnel to the destination GW next hop.

◉ The ingress GW no longer has MAC-VRFs for all overlay subnets in an IP VRF. It only has MAC-VRFs for locally attached subnets.

◉ The ingress GW no longer has MAC routes to ALL overlay endpoints in an IP VRF. It only has MAC routes for locally attached subnets.

◉ Ingress GW still has host routes to all endpoints in an IP VRF, unless a subnet is restricted to strictly one GW (or a multi-homing GW complex). In this case, it is possible for routing to be based on the subnet route alone.
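
A back-of-envelope comparison illustrates how much state this saves per leaf relative to the asymmetric model. The numbers below are invented for the example; only the provisioning rules described above are taken from the architectures themselves.

# Back-of-envelope comparison (invented numbers) of per-leaf state for distributed
# asymmetric vs. symmetric IRB, following the provisioning rules described above.
SUBNETS_IN_VRF = 200          # overlay subnets in the IP VRF (fabric-wide)
HOSTS_PER_SUBNET = 100
LOCAL_SUBNETS_PER_LEAF = 10   # subnets with endpoints actually attached to a given leaf

total_hosts = SUBNETS_IN_VRF * HOSTS_PER_SUBNET

asymmetric = {
    "SVI interfaces": SUBNETS_IN_VRF,                     # all subnets everywhere
    "L3 adjacencies": total_hosts,                        # adjacency to every endpoint
    "MAC routes":     total_hosts,
    "IP host routes": total_hosts,
}

symmetric = {
    "SVI interfaces": LOCAL_SUBNETS_PER_LEAF,             # locally attached subnets only
    "L3 adjacencies": LOCAL_SUBNETS_PER_LEAF * HOSTS_PER_SUBNET,
    "MAC routes":     LOCAL_SUBNETS_PER_LEAF * HOSTS_PER_SUBNET,
    "IP host routes": total_hosts,                        # still host routes via the L3VPN overlay
}

for name, state in (("asymmetric", asymmetric), ("symmetric", symmetric)):
    print(name, {k: f"{v:,}" for k, v in state.items()})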

Optimal Data Path

As in asymmetric IRB, local inter-subnet flows are always routed locally on the ingress GW, while inter-subnet flows across the fabric are always routed directly to the egress GW.

Extra TTL Decrement

Note that with this approach, an inter-subnet flow across two endpoints attached to the fabric goes via two routing hops instead of the usual single routing hop, as in traditional LANs connected via a router, or in the case of centralized and asymmetric IRB. This is not to say that the routing data path is sub-optimal. Rather, it is just an operational side effect of the packet being routed (instead of bridged) at the destination GW.

Operational Overhead – Separate L2VPN and L3VPN overlays

As opposed to centralized and asymmetric IRB architectures, the symmetric approach does result in separate layer-2 VPN and layer-3 VPN overlays to operate and manage. Together with the shift from traditional centralized routing to distributed routing across the fabric edge, this may result in a higher learning curve.

Source: cisco.com

Thursday 20 May 2021

Monitoring your indoor IoT environment – Cisco DNA Spaces IoT Services

IoT Services Overview

Cisco DNA Spaces is the world’s most powerful location platform that uses existing Wi-Fi infrastructure to give actionable insights and drive business outcomes. Cisco DNA Spaces IoT Services has been transforming how businesses measure and interact with their environment at scale. Cisco IoT Services brings hardware, software, and services together to digitize spatial data into actionable intelligence. Businesses are planning occupancy monitoring, air quality testing, contact tracing, and in-room presence use cases with Cisco DNA Spaces to prepare workspaces for a post-pandemic reopening. Enabling all these use cases requires seamlessly consuming a ton of data and working with a plethora of devices. So how does an enterprise monitor the health of its IoT environment in an ocean of devices broadcasting data continuously? Enter Cisco IoT Services Device Monitoring.

IoT Services Components

The key components of the solution are Cisco DNA Spaces IoT Services, Cisco Catalyst 9800 Series Wireless Controllers, Cisco Access Points, and our IoT Partner Ecosystem. The specific roles of each piece of the solution are described below:


All components in the IoT Services architecture communicate with their peers over a data channel to forward measurements and a control channel to pass actionable information. For example, in the network architecture below, Access Points communicate with the Connector over a gRPC data plane while communicating with the Controller over a CAPWAP control plane.


Data Volume


The vastly scalable data plane enables DNA Spaces IoT Services to ingest and process humongous volumes of data. Partner device broadcasts are either time-driven or event-driven. Beacons, for example, can broadcast advertisements at an astonishingly high frequency, while some telemetry devices are triggered only when certain conditions are fulfilled. As a result, the per-device transmission rate varies widely, from every 100 ms to once in several days. On average, IoT Services process more than 300 million messages per day, and data volume is increasing every day as more and more devices are scanned.


Needle in a haystack

By analyzing the millions of packets consumed by IoT gateways, DNA Spaces IoT Services device monitoring identifies and predicts possible issues in the network’s IoT infrastructure.

Device Monitoring


Network snapshot


IoT Services provides a snapshot to quickly identify if any Gateway is down in the environment. It also identifies the total number of transmitting IoT devices and how many devices are active currently. This quickly provides customers with an idea of how cluttered the BLE environment in the enterprise may be.

Battery monitoring


Unmonitored devices running out of battery is one of the primary causes of IoT network failures. It adversely affects almost all IoT use cases, such as wayfinding and sensor telemetry. Devices advertising with high frequency or transmission power are particularly susceptible to battery drainage. Device monitoring provides a concise view for identifying devices that have critical or low battery life. It also provides information to locate the devices on a map so that network administrators can easily find a device and change its battery.
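
A minimal sketch of this kind of battery triage is shown below; the 10% and 30% thresholds and the device records are invented for the example.

# Illustrative battery triage: flag devices below invented thresholds.
devices = [
    {"name": "ble-beacon-01", "battery_pct": 8,  "floor": "Bldg A / L2"},
    {"name": "ble-beacon-02", "battery_pct": 27, "floor": "Bldg A / L3"},
    {"name": "telemetry-07",  "battery_pct": 81, "floor": "Bldg B / L1"},
]

def battery_state(pct):
    if pct < 10:
        return "critical"
    if pct < 30:
        return "low"
    return "ok"

for d in sorted(devices, key=lambda d: d["battery_pct"]):
    print(f"{d['name']:15s} {d['battery_pct']:3d}%  {battery_state(d['battery_pct']):8s} {d['floor']}")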

Active devices


The active devices count provides the number of devices the gateways have scanned in the last 5 minutes. If there are too many active devices, it may indicate unmitigated rogue transmissions on the network. On the other hand, if there are too few active devices, it may indicate malfunctioning devices or data channel setup issues.
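
Conceptually, the active-device count is a deduplicated sliding window over the gateway scan stream, as in the small sketch below (device IDs and observation ages are invented).

# Sketch of the "active in the last 5 minutes" count: deduplicate device IDs seen
# within a sliding window of advertisements.
WINDOW_SECONDS = 5 * 60

# (device_id, seconds_since_last_heard) -- stand-in for the gateway scan stream
observations = [("beacon-01", 30), ("beacon-02", 290), ("sensor-11", 400), ("beacon-01", 10)]

active = {dev for dev, age in observations if age <= WINDOW_SECONDS}
known = {dev for dev, _ in observations}
print(f"active devices in last 5 minutes: {len(active)} of {len(known)} known")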

We are integrating more and more metrics to provide powerful insights into your IoT Network through device monitoring. In combination with network policies, device monitoring can truly transform IoT network management.

Source: cisco.com