
Saturday 26 March 2022

Why Transition to BGP EVPN VXLAN in Enterprise Campus

Network Virtualization Convergence in Enterprise Campus

Campus networks are the backbone of the enterprise, providing connectivity to critical services and applications. Over time, many of these networks were deployed with a variety of overlay technologies to accomplish the desired outcome. While these traditional overlays met the technical and business requirements, many of them lacked manageability and scalability, introducing complexity into the network. The industry-standard BGP EVPN VXLAN is a converged overlay solution that provides unified, control-plane-based Layer 2 extension and Layer 3 segmentation over an IP underlay. This purpose-built technology for the enterprise campus and data center addresses the well-known challenges of classic networking protocols while providing L2/L3 network services with greater flexibility, mobility, and scalability.

Fig #1: BGP EVPN VXLAN converges Layer 2 and Layer 3
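
To ground the idea in configuration terms, here is a rough, hypothetical IOS XE-style sketch of a single Catalyst 9000 leaf extending VLAN 100 as EVPN VNI 10100 over the IP underlay. All names, VNIs, and addresses are placeholders, and the exact syntax supported depends on platform and release; treat this as an illustration of the converged L2/L3 model rather than a deployment template.

! Map VLAN 100 to an EVPN instance and a VXLAN VNI
l2vpn evpn instance 100 vlan-based
 encapsulation vxlan
vlan configuration 100
 member evpn-instance 100 vni 10100
!
! VXLAN tunnel endpoint (VTEP) sourced from a loopback in the IP underlay
interface nve1
 no ip address
 source-interface Loopback1
 host-reachability protocol bgp
 member vni 10100 ingress-replication
!
! BGP EVPN control plane toward a spine acting as route reflector
router bgp 65001
 neighbor 10.0.0.1 remote-as 65001
 address-family l2vpn evpn
  neighbor 10.0.0.1 activate
  neighbor 10.0.0.1 send-community both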

Legacy Layer 2 Overlay Networks Departure


Enterprise campus networks have historically been deployed with several types of Layer 2 overlay extensions as products and technologies evolved. Classic data-plane-based Layer 2 extensions built on flood-and-learn behavior can be significantly simplified, scaled, and optimized when migrating to the next-generation BGP EVPN VXLAN solution:

◉ STP – Enterprise campus networks have run Spanning Tree Protocol (STP) since its inception. Several enhancements and alternatives have been developed to simplify and optimize STP, yet it remains challenging to operate. BGP EVPN VXLAN replaces STP with a Layer 2 overlay, enabling new possibilities for IT, including control over flood-domain size, suppression of redundant ARP/ND traffic, and seamless mobility while retaining the original IPv4/IPv6 address plan when transitioning from a distribution-switch or centralized firewall gateway running over an STP network.

◉ 802.1ad – IEEE 802.1ad (QinQ) is a common multi-tenant Layer 2 network solution. The stacked IEEE 802.1Q headers tunnel individual tenant VLANs over a limited, managed set of core VLANs, reducing the bridging domain and allowing tenant VLAN IDs to overlap across the core network. BGP EVPN VXLAN provides the opportunity to transform the Layer 2 backbone into a simplified IP transport using VXLAN while continuing to bridge single- or double-tagged IEEE 802.1Q VLANs across the fabric.

◉ L2TPv3 – Layer 2 Tunneling Protocol version 3 (L2TPv3) provides a simple point-to-point L2 overlay extension over an IP core between statically paired remote network devices. Such flood-and-learn Layer 2 overlays can be migrated to BGP EVPN VXLAN, which provides a far more advanced and flexible Layer 2 extension across an IP core network.

◉ VPWS/VPLS – The standards bodies ratified several Layer 2 network extensions as the industry evolved toward high-speed Metro Ethernet networking across the MAN/WAN. Enterprise networks quickly adopted Ethernet over MPLS (EoMPLS) and Virtual Private LAN Service (VPLS) operating over an IP/MPLS backbone. These networks can be simplified, optimized, and made more resilient with BGP EVPN VXLAN, which supports flexible Layer 2 overlay topologies with control-plane-based Layer 2 extension, improving end-to-end network performance and user experience.

Traditional Layer 3 Overlays Convergence


Like Layer 2 extended networks, segmented Layer 3 networks can be deployed with various overlay technologies. Running a parallel set of protocols, each supporting either routing or bridging, adds complexity as the network grows and demands expand. Because BGP EVPN VXLAN converges routing and bridging capabilities, it reduces control-plane and operational overhead, resulting in simplicity, scale, and resiliency.

◉ Multi-VRF – A simple hop-by-hop Layer 3 virtual network that segments a Layer 3 physical interface into logical IEEE 802.1Q VLANs for each virtual network in small to mid-size environments. As segmentation requirements increase, the operational challenges and control-plane overhead of managing Multi-VRF also increase. BGP EVPN leverages IP VRFs to dynamically build a segmented routed environment, and with VXLAN the data-plane segmentation is handled at the network edge, enabling a simplified IP underlay and a scalable Layer 3 overlay routed network.

◉ GRE – An ideal solution for building overlay networks across IP networks without hop-by-hop configuration in the underlay. GRE-based overlays support limited point-to-point or point-to-multipoint topologies. Following similar principles, BGP EVPN VXLAN simplifies the network with a single control plane, dynamically builds VXLAN tunnels, and supports flexible overlay routing topologies. ECMP-based underlay and overlay networks deliver best-in-class resiliency for mission-critical networks.

◉ MPLS VPN – MP-BGP capabilities have been widely adopted in large enterprises to address network segmentation across self-managed IP/MPLS networks. The well-proven and scalable MPLS VPN overcomes the challenges of several alternative technologies using a shim-layer label-switching solution. MPLS VPN-enabled enterprise networks can extend their existing MP-BGP designs and transition from the VPNv4/VPNv6 address families to the new L2VPN EVPN address family, supporting seamless migration (as sketched below). The edge-to-edge VXLAN data plane can converge MPLS VPN, mVPN, and VPLS overlays into a single unified control plane and enable enhanced integrated routing and bridging. It further simplifies the IP core by removing MPLS LDP dependencies across the paths.
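
To make that migration path concrete, the fragment below shows, purely as an illustration with placeholder ASN and neighbor addresses, how an existing MP-BGP session that carries VPNv4 routes can additionally be activated for the L2VPN EVPN address family so that both overlays run side by side during a transition:

router bgp 65001
 neighbor 10.0.0.1 remote-as 65001
 neighbor 10.0.0.1 update-source Loopback0
 !
 ! Existing MPLS VPN overlay
 address-family vpnv4
  neighbor 10.0.0.1 activate
  neighbor 10.0.0.1 send-community extended
 !
 ! New BGP EVPN overlay carried over the same session
 address-family l2vpn evpn
  neighbor 10.0.0.1 activate
  neighbor 10.0.0.1 send-community both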

Cisco Catalyst 9000 – Seamless and Flexible BGP EVPN VXLAN Transition


Transitioning away from classic products and technologies has never been an easy task, especially when downtime for mission-critical services is practically impossible. The Cisco Catalyst 9000, combined with 30+ years of software innovation in the industry’s most sophisticated network operating system, Cisco IOS-XE®, provides great flexibility to adopt BGP EVPN VXLAN seamlessly, whether as part of an existing operation or at the start of a new networking journey, while maintaining full backward compatibility with classic products and overlay networks to support non-stop business communications.

Fig #2: BGP EVPN VXLAN design alternatives

The end-to-end network and rich feature integration can be enabled independently of how the underlying network infrastructure is built, as illustrated above and summarized below.

The design alternatives compare three fabric access models: Layer 3 Access, Cisco StackWise Virtual, and ESI Layer 2 Multihome.

◉ Leaf Layer – Access (Layer 3 Access); Distribution (Cisco StackWise Virtual); Distribution (ESI Layer 2 Multihome)

◉ Spine Layer – Core or other

◉ Border Layer – Data Center ACI, WAN, DMZ, or more

◉ Overlay Network Type Support – Layer 3 Routed, Distributed Anycast Gateway (Symmetric IRB), Centralized Gateway (Asymmetric IRB), and Layer 2 Cross-Connect

◉ Overlay Unicast Support – IPv4 and IPv6 unicast

◉ Overlay Multicast Support – IPv4 and IPv6 Tenant Routed Multicast

◉ Wireless Network Integration – Local Mode (central switching); FlexConnect Mode (central and distributed local switching)

◉ Data Center Integration – BGP EVPN VXLAN (common EN/DC fabric); Cisco ACI (Nexus 9000 border Layer 3 handoff)

◉ Multi-site EVPN Domain – Campus Catalyst 9000 switches extend the fabric with Nexus 9000 Multi-Site Border Gateway integration

◉ External Domain Handoff – L2: untagged, 802.1Q, 802.1ad, EoMPLS, VPLS; L3: Multi-VRF, MPLS VPN, SD-WAN, GRE

◉ Data-Plane Load Sharing – Layer 3 Access: L3 ECMP. Cisco StackWise Virtual: L2 per-flow port-channel hash, L3 ECMP, multicast (S,G) + next hop. ESI Layer 2 Multihome: L2 per port-VLAN load balancing, L3 ECMP, multicast (S,G) + next hop.

◉ System Resiliency – Layer 3 Access: Cisco StackWise-1T, Cisco StackWise-480, Cisco StackPower, Fast Reload, Stateful Switchover (SSO), Ext. Fast Software Upgrade, In-Service Software Upgrade (ISSU). Cisco StackWise Virtual: Cisco StackWise Virtual, SSO, ISSU. ESI Layer 2 Multihome: SSO, ISSU.

◉ Network Resiliency – Layer 3 Access: BFD (single/multi-hop), Graceful Restart, Graceful Insertion, L2 (EtherChannel, UDLD, etc.). Cisco StackWise Virtual: BFD (single/multi-hop), Graceful Restart, Graceful Insertion, L2 (UDLD, etc.). ESI Layer 2 Multihome: BFD (single/multi-hop), Graceful Restart, Graceful Insertion.

Scalable Architecture Matters


IT organizations adopting the BGP EVPN VXLAN solution must consider how to scale multi-dimensionally when building large fabrics. This calls for designing the right architecture based on proven networking principles. Whether the network is physical or virtual, it should be designed with an appropriate level of hierarchy to deliver a best-in-class, scalable solution for a large enterprise. Smaller fault domains and condensed core-layer topologies that enable resilient networks are well-known benefits of hierarchical networking.

As the number of EVPN leaf nodes increases, the number of overlay prefixes and the blast radius in the network grow. Network architects should consider building a structured Multi-Site overlay solution that allows the enterprise campus to grow by dividing the fabric into separate domains and using fabric border gateways to interconnect them.

Stay tuned; we’ll share more thoughts on how Cisco Catalyst 9000 and Nexus 9000 can bring next-generation BGP EVPN VXLAN with Multi-Site solutions. And as always, if you are already on the journey to design and build a scalable end-to-end BGP EVPN VXLAN campus network, simply reach out to your Cisco sales team to partner with you and enable the vision.

Source: cisco.com

Friday 4 December 2020

All Tunnels Lead to GENEVE


As a global citizen, I’m sure you came here to read about Genève (French) or Geneva (English), the city situated in the western part of Switzerland. It’s a city or region famous for many reasons including the presence of a Cisco R&D Center in the heart of the Swiss Federal Institute of Technology in Lausanne (EPFL). While this is an exciting success story, the GENEVE I want to tell you about is a different one.

GENEVE stands for “Generic Network Virtualization Encapsulation” and is an Internet Engineering Task Force (IETF) standards track RFC. GENEVE is a Network Virtualization technology, also known as an Overlay Tunnel protocol. Before diving into the details of GENEVE, and why you should care, let’s recap the history of Network Virtualization protocols with a short primer.

Network Virtualization Primer

Over the years, many different tunnel protocols came into existence. One of the earlier ones was Generic Routing Encapsulation (GRE), which became a handy method of abstracting routed networks from the physical topology. While GRE is still a great tool, it lacks two main capabilities, which hinders its versatility:

1. The ability to signal the difference of the tunneled traffic, or original traffic, to the outside—the Overlay Entropy—and allow the transport network to hash it across all available links.

2. The ability to provide a Layer-2 Gateway, since GRE was only able to encapsulate IP traffic. Options to encapsulate other protocols, like MPLS, were added later, but the ability to bridge never became an attribute of GRE itself.

With the limited extensibility of GRE, the network industry became more creative as new use cases were developed. One approach was to use Ethernet over MPLS over GRE (EoMPLSoGRE) to achieve the Layer 2 gateway use case; Cisco called its approach Overlay Transport Virtualization (OTV). Other vendors pursued a GRE-based alternative, Network Virtualization using GRE (NVGRE). While OTV was successful, NVGRE had limited adoption, mainly because it came late to network virtualization, at a time when the next-generation protocol, Virtual Extensible LAN (VXLAN), was already making inroads.

A Network Virtualization Tunnel Protocol

VXLAN is currently the de facto standard for network virtualization overlays. Based on the Internet Protocol (IP), VXLAN also has a UDP header and hence belongs to the family of IP/UDP-based encapsulations or tunnel protocols. Other members of this family are OTV, LISP, GPE, GUE, and GENEVE, among others. The importance lies in their similarities and their close relation and origin within the Internet Engineering Task Force’s (IETF) Network Virtualization Overlays (NVO3) working group.

Network Virtualization in the IETF


The NVO3 working group is chartered to develop a set of protocols that enable network virtualization for environments that assume IP-based underlays, the transport network. An NVO3 protocol will provide Layer 2 and/or Layer 3 overlay services for virtual networks. Additionally, the protocol will enable multi-tenancy and workload mobility and address related security and management issues.

Today, VXLAN acts as the de facto standard NVO3 encapsulation, with RFC 7348 ratified in 2014. VXLAN was submitted as an informational IETF draft and then became an informational RFC. Even with its informational status, its versatility and wide adoption in merchant and custom silicon made it a big success. Today, we can’t think of network virtualization without VXLAN. When VXLAN paired up with BGP EVPN, a powerhouse was created that became RFC 8365, a Network Virtualization Overlay solution using Ethernet VPN (EVPN) that is an IETF standards-track RFC.

Why Do We Need GENEVE if We Already Have What We Need?


When we look at the specifics of VXLAN, it was invented as a MAC-in-IP encapsulation over IP/UDP transport, which means we always have a MAC header within the tunneled or encapsulated packets. While this is desirable for bridging cases, for routing it becomes unnecessary and could be optimized in favor of better payload byte usage. Also, with the inclusion of an inner MAC header, signaling of MAC-to-IP bindings becomes necessary, which requires either information exchange in the control plane or, much worse, flood-based learning.

Compare and Contrast VXLAN to GENEVE Encapsulation Format

Fast forward to 2020: GENEVE has been selected as the upcoming “standard” tunnel protocol. While GENEVE’s flexibility and extensibility incorporate the GRE, VXLAN, and GPE use cases, new use cases are being created on a daily basis. This is one of the most compelling but also most complex areas for GENEVE. GENEVE has a flexible option header format, which defines the length, fields, and content depending on the instruction set given by the encapsulating node (Tunnel Endpoint, TEP). While some of the fields are simple and static, as for bridging or routing, the fields and formats used for telemetry or security are highly variable to allow hop-by-hop independence.

While GENEVE is now an RFC, GBP (Group Based Policy), INT (In-band Network Telemetry) and other option headers are not yet finalized. However, the use-case coverage is about equal to what VXLAN is able to do today. Use cases like bridging and routing for Unicast/Multicast traffic, either in IPv4 or IPv6 or Multi-Tenancy, have been available for VXLAN (with BGP EVPN) for almost a decade. With GENEVE, all of these use-cases are accessible with yet another encapsulation method.

GENEVE Variable Extension Header

With the highly variable but presently limited number of standardized and published Option Classes in GENEVE, the intended interoperability is still pending. Nevertheless, GENEVE in its extensibility as a framework and forward-looking technology has great potential. The parity of today’s existing use cases for VXLAN EVPN will need to be accommodated. This is how the IETF prepared BGP EVPN from its inception and more recently published the EVPN draft for GENEVE.

Cisco Silicon Designed with Foresight, Ready for the Future


While network virtualization is already mainstream, the encapsulating node, or TEP (Tunnel Endpoint), can be at various locations. Tunnel protocols were often focused on a software forwarder running on x86, but mainstream adoption is usually driven by the presence of both software and hardware forwarders, the latter built into the switch’s ASIC (merchant or custom silicon). Even though integrated hybrid overlays are still in their infancy, the parallel use of hardware (the network overlay) and software (the host overlay) is widespread, either in isolation or as ships in the night. It is often simpler to upgrade the software forwarder on an x86 server and benefit from a new encapsulation format. While this is generally true, the participating TEPs require consistency for connections to the outside world, and updating the encapsulation on such gateways is not a simple matter.

In the past, rigid Router or Switch silicon prevented fast adoption and evolution of Network Overlay technology. Today, modern ASIC silicon is more versatile and can adapt to new use cases as operations constantly change to meet new business challenges. Cisco is thinking and planning ahead to provide Data Center networks with very high performance, versatility, as well as investment protection. Flexibility for network virtualization and versatility of encapsulation was one of the cornerstones for the design of the Cisco Nexus 9000 Switches and Cloud Scale ASICs.

We designed the Cisco Cloud Scale ASICs to incorporate important capabilities, such as supporting current encapsulations like GRE, MPLS/SR and VXLAN, while ensuring hardware capability for VXLAN-GPE and, last but not least, GENEVE. With this in mind, organizations that have invested in the Cisco Nexus 9000 EX/FX/FX2/FX3/GX Switching platforms are just a software upgrade away from being able to take advantage of GENEVE.

Cisco Nexus 9000 Switch Family

While GENEVE provides encapsulation, BGP EVPN is the control-plane. As use-cases are generally driven by the control-plane, they evolve as the control-plane evolves, thus driving the encapsulation. Tenant Routed Multicast, Multi-Site (DCI) or Cloud Connectivity are use cases that are driven by the control-plane and hence ready with VXLAN and closer to being ready for GENEVE.

To ensure seamless integration into Cisco ACI, a gateway capability becomes the crucial base functionality. Beyond just enabling a new encapsulation with an existing switch, the Cisco Nexus 9000 acts as a gateway to bridge and route from VXLAN to GENEVE, GENEVE to GENEVE, GENEVE to MPLS/SR, or other permutations to facilitate integration, migration, and extension use cases.

Leading the Way to GENEVE


Cisco Nexus 9000 with Cloud Scale ASICs (EX/FX/FX2/FX3/GX and later) has extensive hardware capabilities to support legacy, current, and future network virtualization technologies. With this investment protection, customers can use ACI and VXLAN EVPN today while being assured they can leverage future encapsulations like GENEVE on the same Nexus 9000 hardware investment. Cisco thought leadership in switching silicon, data center networking, and network virtualization leads the way to GENEVE (available in early 2021).

Whether you are making your way to Genève or to GENEVE, Cisco has invested in both for the past, present, and future of networking.

Tuesday 25 August 2020

Multi-Site Data Center Networking with Secure VXLAN EVPN and CloudSec

Transcending Data Center Physical Needs


Maslow’s Hierarchy of Needs illustrates that humans must fulfill basic physiological needs (food, water, warmth, rest) before pursuing higher levels of growth. When it comes to the data center and Data Center Networking (DCN), meeting the physical infrastructure needs is the foundation on which the next higher-level capabilities, safety and security, are built.

Satisfying the physical needs of a data center can be achieved through the concepts of Disaster Avoidance (DA) and Disaster Recovery (DR).

◉ Disaster Avoidance (DA) can be built on a redundant Data Center configuration, where each data center is its own Network Fault Domain, also called an Availability Zone (AZ).

◉ Building redundancy between multiple Availability Zones creates a Region.

◉ Building redundant data centers across multiple Regions provides a foundation for Disaster Recovery (DR).


Availability Zones within a Region

Availability Zones (AZs) are made possible with a modern data center network fabric built on VXLAN BGP EVPN. The interconnect technology, Multi-Site, is capable of securely extending data center operation within and between Regions. A Region can consist of connected, geographically dispersed on-premises data centers and the public cloud. If you are interested in more details about DA and DR concepts, watch the Cisco Live session recording “Multicloud Networking for ACI and NX-OS Enabled Data Center Fabrics“.

With the primary basic need for availability through the existence of DA and DR in regions achieved, we can investigate data center Safety needs as we climb the pyramid of Maslow’s hierarchy.

Safety and Security: The Second Essential Need


The data center is, of course, where your data and applications reside: email, databases, websites, and critical business processes. With connectivity between Availability Zones and Regions in place, data is exposed to threats once it moves outside the confines of on-premises or colocation facilities, because transfers between Availability Zones and Regions generally have to travel over public infrastructure. The need for such transfers is driven by the requirement for highly available applications supported by redundant data centers. As data leaves the confines of the data center via an interconnect, safety measures must ensure the Confidentiality and Integrity of these transfers to reduce the exposure to threats. Let’s examine the protocols that make secure data center interconnects possible.

DC Interconnect Evolves from IPSec to MACSec to CloudSec


About a decade ago, MACSec (IEEE 802.1AE) became the preferred method of addressing Confidentiality and Integrity for high-speed Data Center Interconnects (DCI). It superseded IPSec because it was natively embedded into the data center switch silicon (Cloud Scale ASICs). This enabled encryption at line rate with minimal added latency and packet size overhead. While these advantages were an advancement over IPSec, MACSec’s shortcoming is that it can only be deployed between two adjacent devices. When dark fiber or xWDM is available between data centers, this is not a problem, but such a fully transparent and secure service is often too costly or simply not available. In those cases, the choice was to revert to the more resource-consuming IPSec approach.

The virtue of MACSec paired with the requirements of Confidentiality, Integrity, and Availability (CIA) results in CloudSec. In essence, CloudSec is MACSec-in-UDP using Transport Mode, similar to ESP-in-UDP in Transport Mode as described in RFC3948. In addition to the specifics of transporting MACSec encrypted data over IP networks, CloudSec also carries a UDP header for entropy as well as an encrypted payload for Network Virtualization use-cases.


CloudSec carries an encrypted payload for network virtualization.

Other less efficient attempts were made to achieve similar results using, for example, MACSec over VXLAN or VXLAN over IPSec. While secure, these approaches just stack encapsulations and incur higher resource consumption. CloudSec is an efficient and secure transport encapsulation for carrying VXLAN.

Secure VXLAN EVPN Multi-Site using CloudSec


VXLAN EVPN Multi-Site provides a scalable interconnectivity solution among Data Center Networks (DCN). CloudSec provides transport and encryption. The signaling and key exchange that Secure EVPN provides is the final piece needed for a complete solution.

Secure EVPN, as documented in the IETF draft “draft-sajassi-bess-secure-evpn,” describes a method of leveraging the EVPN address family of Multi-Protocol BGP (MP-BGP). Secure EVPN provides a level of privacy, integrity, and authentication similar to Internet Key Exchange version 2 (IKEv2). BGP provides a point-to-multipoint control plane for signaling encryption keys and exchanging policy between the Multi-Site Border Gateways (BGWs), creating pair-wise Security Associations for CloudSec encryption. While there are established methods for signaling the creation of Security Associations, such as IKE in IPSec, these methods are generally based on point-to-point signaling, requiring the operator to configure pair-wise associations.

A VXLAN EVPN Multi-Site environment enables any-to-any communication between sites. This full-mesh communication pattern requires the pre-creation of the Security Associations for CloudSec encryption. Leveraging BGP and a point-to-multipoint signaling method becomes more efficient, given that the Security Associations themselves remain pair-wise.

Secure VXLAN EVPN Multi-Site using CloudSec provides state-of-the art Data Center Interconnect (DCI) with Confidentiality, Integrity, and Availability (CIA). The solution builds on VXLAN EVPN Multi-Site, which has been available on Cisco Nexus 9000 with NX-OS for many years.

Secure VXLAN EVPN Multi-Site is designed to be used in existing Multi-Site deployments. Border Gateways (BGW) using CloudSec-capable hardware can provide the encrypted service to communicate among peers while continuing to provide the Multi-Site functionality without encryption to the non-CloudSec BGWs. As part of the Secure EVPN Multi-Site solution, the configurable policy enables enforcement of encryption with a “must secure” option, while a relaxed mode is present for backwards compatibility with non-encryption capable sites.

Secure VXLAN EVPN Multi-Site using CloudSec is available on the Cisco Nexus 9300-FX2 as of NX-OS 9.3(5). All other Multi-Site BGW-capable Cisco Nexus 9000s are able to interoperate when running Cisco NX-OS 9.3(5).
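
For orientation only, the fragment below sketches roughly what enabling CloudSec between two Border Gateways involves on NX-OS: a tunnel-encryption keychain, a source interface, and the peer Border Gateway’s primary IP. The keychain name, key string, peer address, and policy reference are placeholders, and the exact command set and defaults should be verified against the NX-OS security configuration guide for your release.

feature tunnel-encryption
!
key chain CLOUDSEC-KC tunnel-encryption
  key 1000
    key-octet-string <hex-key-string> cryptographic-algorithm AES_128_CMAC
!
tunnel-encryption source-interface loopback0
!
! Remote BGW primary IP (PIP) to be secured with CloudSec
tunnel-encryption peer-ip 10.52.52.52
  keychain CLOUDSEC-KC policy system-default-tunenc-policy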

Configure, Manage, and Operate Multi-Sites with Cisco DCNM


Cisco Data Center Network Manager (DCNM), starting with version 11.4(1), supports the setup of Secure EVPN Multi-Site using CloudSec. The authentication and encryption policy can be set in DCNM’s Fabric Builder workflow so that the necessary configuration settings are applied to the BGWs that are part of a respective Multi-Site Domain (MSD). Since DCNM is backward compatible with non-CloudSec capable BGWs, they can be included with one click in DCNM’s web-based management console. Enabling Secure EVPN Multi-Site with CloudSec is just a couple of clicks away.

Sunday 13 October 2019

Continuing innovations on Nexus9K ITD – Additional server load-balancing use cases

A couple of months ago we released new Cisco Intelligent Traffic Distribution (ITD) innovations in NX-OS 9.3.1. In this latest addition to the Nexus 9000, we introduced ITD over VXLAN and ITD with destination NAT. The Cisco ITD feature in NX-OS was developed to address capacity limitations of network service appliances in a multi-terabit environment, while providing a hardware-based, scalable solution for Layer 3 and Layer 4 traffic distribution and redirection. The primary use cases for ITD are L3/L4 load balancing across network service nodes or web servers, and traffic redirection and distribution to WAN optimizers or web proxies.

Benefits of ITD include:


◈ Simplified provisioning when scaling service nodes (scale-up);

◈ Line-rate traffic load balancing;

◈ Health monitoring, failure detection, and recovery; and

◈ Unlike ECMP, even traffic distribution and more granular control over traffic distribution

ITD over VXLAN


In a VXLAN fabric architecture, endpoints such as clients, physical servers, and virtual servers are distributed across the fabric. Traffic flowing to and from these clients and servers needs to be load-balanced in this fabric environment. With this ITD release, the single-switch ITD solution has been expanded to the VXLAN fabric, so the fabric now acts as a massive load balancer. The NX-OS 9.3.1 release covers only the VIP-based load-balancing mechanism in a VXLAN scenario, which means servers and clients can be connected anywhere in the fabric and benefit from this fabric-based load-balancing function.


Traffic flow from and to clients and servers in a fabric environment using ITD

ITD with NAT


For security reasons and to conserve IP space, customers look to NAT solutions to reuse private IP addresses and hide the real IPs of servers or services. Prior to this release, ITD was supported with Direct Server Return (DSR) mode, where clients have visibility into the real IP address of the servers/services. The servers are configured with the same public virtual IP address (VIP) and reply directly to clients with the VIP as the source IP, bypassing ITD. With this feature in NX-OS 9.3.1, clients no longer have visibility into the real IPs of server/service endpoints. The ITD switch now performs both load balancing and NAT: ITD with destination NAT rewrites the destination address of the IP header, redirecting incoming packets destined to the public VIP to a real server’s private IP inside the network. The reverse path follows the same approach, translating the real server IP back to the VIP as the source address before forwarding the traffic to the client. ITD with destination NAT is applicable only on a standalone switch today; ITD with NAT will be supported over a VXLAN fabric in future releases.


Clients sending traffic to the ITD virtual IP address (20.1.1.1)

In the above example, clients send traffic to the ITD virtual IP address (20.1.1.1), assuming it is the real destination IP of the server. The ITD switch translates the destination to one of the servers’ private IP addresses and load-balances the traffic. The return traffic from the server is translated by ITD so that the VIP is the source IP and is forwarded back to the client. This way, traffic gets load-balanced across the servers behind NAT without exposing the servers’ real IPs to clients.
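
As a rough sketch mirroring the example above: an ITD device group lists the real servers, and the ITD service ties the group to the virtual IP 20.1.1.1 with destination NAT enabled. The node addresses, probe, and ingress interface are placeholders, and syntax details may vary by NX-OS release.

feature itd

itd device-group WEB-SERVERS
  probe icmp
  node ip 192.168.10.11
  node ip 192.168.10.12
  node ip 192.168.10.13

itd WEB-SERVICE
  device-group WEB-SERVERS
  virtual ip 20.1.1.1 255.255.255.255
  ingress interface Ethernet1/1
  nat destination
  no shutdown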

Saturday 5 October 2019

Configuration Compliance in DCNM 11

We discussed Using DCNM 11 for Easy Provisioning of Networks and VRFs. Today, we are continuing the discussion by featuring how DCNM ensures compliance with the configurations defined by a user.

Validation of configuration forms an integral part of any Network Controller. Configurations need to be pushed down from the controller to the respective switches as intended by the user. More importantly, configurations need to be in sync and in compliance with the expressed intent at all times. Any deviation from the intended configuration has to be recognized, reported, and remediated – this approach is often described as “closed loop.” In the DCNM LAN Fabric install mode, Configuration Compliance is supported for VXLAN EVPN networks (within Easy Fabrics) as well as traditionally built networks within an External Fabric.

Configuration Compliance is embedded and integrated within the DCNM Fabric builder for all configuration including underlay, overlay, interfaces and every other configuration that is driven through the DCNM policies.

The user typically builds intent for the fabric by customizing the various fabric settings as well as a combination of best-practice and custom templates. Once the intent is saved and pushed out by DCNM, it periodically monitors what is running on the switches and tracks whether any out-of-band change was made to any function of the switch using the CLI or another method. If changes are made that differ from the applied intent, DCNM marks the switches as Out-of-Sync, indicating a compliance violation. This warning informs the user that the running configuration of the respective switch does not match the intent defined in DCNM. The Out-of-Sync state is indicated by a color code in the topology view and tagged as Out-of-Sync in the tabular view that lists all the switches in a fabric.

Configuration Compliance status with color codes

While the general concept of Configuration Compliance provides a simple colored representation of the state across the nodes, DCNM also generates a side-by-side diff view of the running configuration and expected configuration for each switch.

This configuration diff is intended to give the user a full picture of why a particular switch was marked out of compliance, aka OUT-OF-SYNC. In addition, the Configuration Compliance function provides a set of pending configurations that, once pushed to the switch using DCNM, will bring the switch back to compliance, aka IN-SYNC. The set of pending configurations is intelligently derived using a model-based approach that is agnostic to how commands were configured using the CLI.

Side-by-side diff generated on Out-of-SYNC

While Configuration Compliance runs periodically, DCNM also provides an on-demand option to “Re-sync” the entire fabric or individual switches to immediately trigger a compliance check.

View the demo below for a walk-through of performing Configuration Compliance in DCNM 11.

Tuesday 27 August 2019

VXLANv6 – VXLANv-what?

Virtual Extensible LAN (also known as VXLAN) is a network virtualization technology that attempts to address the scalability problems associated with large cloud computing deployments. With the recent launch of Cisco’s VXLANv6, we’ve taken the Cisco overlay and run it over an IPv6 transport network (underlay). Not only is VXLANv6 fully capable of transporting IPv6, it can also handle IPv4 payloads, an important distinction as many applications and services still require IPv4.

In the near future, VXLANv6 will allow a consistent IPv6 approach, both in the underlay as well as the overlay. With the newly shipped Cisco NX-OS 9.3(1) release that delivers VXLANv6, our customers can take advantage of this new exciting technology today.

In this blog, we are going to talk about:

◈ A brief overview of VXLANv6

◈ Expansibility and Investment Protection with VXLANv6

◈ IPv4 and IPv6 Coexistence

◈ Where are we going with VXLANv6


Many years ago, when I was struggling to get my modem working, I remember reading that an IETF draft for Internet Protocol version 6 (IPv6) had been filed. At that point in time, the reality of IPv6 seemed so far away that we talked about retirement before we even considered widespread adoption. But as it always is in tech, everything comes around much sooner than one anticipates. While IPv6 had a difficult start, it has now become a table-stakes requirement for applications and services.

With network virtualization, it became easy to tunnel both IPv6 and IPv4 over the top of networks built with IPv4. In these traditional IPv4-overlay cases, the Tunnel Endpoint (TEP) as well as the transport network (underlay) reside in the IPv4 address space. The applications and services exist in a different addressing space (overlay), which could be IPv4, IPv6, or dual-stack enabled; v4/v6-over-v4 is a common theme these days. In the last few years, VXLAN has become a de facto standard for an overlay, as it is employed both as a network-based overlay and as a host-based overlay. VXLAN as the data plane, together with BGP EVPN as the control plane, has become the prime choice of deployment for new-age spine-leaf based data centers.

With the expansion of network virtualization using virtual machine and container workloads, infrastructure resources like IP addresses have to be reserved not only for the applications and services, but also for the infrastructure components itself. As a result, overlap of the IP address space is often seen between the underlay and overlay, given the exhaustion in the uniqueness of RFC1918 addresses.

Below are the top reasons you should care about VXLANv6


Reason 1: One of the most difficult scenarios for overlapping address space arises in network operations, troubleshooting, and monitoring. The IP addresses used for managing and monitoring the infrastructure are often required to be unique across the different devices. The IP subnets for the management and monitoring stations have the same requirement, and there should be no overlap between management and managed devices. The alternative is network address translation (NAT).

Reason 2: The exhaustion of unique IP addresses is just one of many cases that drives us towards IPv6. Other use cases include government regulation, compliance demands, or simply ease of infrastructure IP addressing. While reviewing the use cases around IPv6 infrastructure addressing together with the current install base of technology and devices, one simple solution became obvious: VXLAN over an IPv6 underlay, or in short, VXLANv6.

Reason 3: VXLANv6 allows us to use a well-known overlay technology, namely VXLAN, and run it over an IPv6 transport network (underlay). In the case of VXLANv6, the VXLAN Tunnel Endpoints (VTEPs) are addressed with a global IPv6 address associated with a loopback interface. Reachability of the VTEPs is achieved by using either IPv6 link-local or IPv6 global addressing along with an IPv6-capable routing protocol like IS-IS, OSPFv3, or BGP. When using IPv6 link-local addressing, the subnet calculation and address assignment can be optimized and the underlay setup time significantly reduced.

In addition to VTEP and underlay topology and reachability, the overlay control plane also needs to be IPv6 enabled. This is the case for Multi-Protocol BGP with the EVPN address family: peering, next-hop handling, and the exchange of routes have all been enabled for IPv6.

At this point, we have not configured a single IPv4 address for the purpose of routing or reachability, neither for the underlay nor for the overlay itself, because IPv6 does the job well. The remaining numbering that uses an IPv4 notation is in fields like the Router ID and Route Distinguisher. Even though these values look like IPv4 addresses, they are only identifiers and could be any combination of numbers.
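
As an illustrative NX-OS-style sketch of the pieces described above (the interface names, addresses, VRF-free underlay design, and the choice of OSPFv3 are placeholders and assumptions, not a prescribed design): an IPv6-addressed loopback serves as the VTEP source, an IPv6 IGP provides underlay reachability, and the BGP EVPN peering runs over IPv6.

feature ospfv3
feature bgp
feature nv overlay
nv overlay evpn

interface loopback1
  description VTEP source and BGP peering (global IPv6)
  ipv6 address 2001:db8:0:1::1/128
  ipv6 router ospfv3 UNDERLAY area 0.0.0.0

interface Ethernet1/49
  description Fabric uplink toward a spine
  no switchport
  ipv6 address 2001:db8:0:12::1/127
  ipv6 router ospfv3 UNDERLAY area 0.0.0.0

router ospfv3 UNDERLAY
  address-family ipv6 unicast

interface nve1
  no shutdown
  host-reachability protocol bgp
  source-interface loopback1

router bgp 65001
  neighbor 2001:db8:0:2::1 remote-as 65001
    update-source loopback1
    address-family l2vpn evpn
      send-community extended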

Capabilities


VXLANv6 and vPC: Connecting Servers Redundantly 

Once the VTEPs are running VXLANv6, the next step is to connect servers redundantly. vPC is the answer. The vPC peer keepalive has been updated to support IPv6, either on the management interface or via front-panel ports. With VXLAN and vPC, we use the concept of anycast to share the same VTEP IP address between both vPC members. While secondary IP addresses are used for this in IPv4, in IPv6 all addresses on a given interface have equal priority. This little detail led us to expand the VTEP’s source-interface command to allow the loopback for the Primary IP (PIP) and the loopback for the Virtual IP (VIP) to be selected separately.
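
A minimal sketch of that expanded command on one vPC member is shown below; the loopback numbers and addresses are placeholders, with the first loopback carrying the node’s PIP and the second the anycast VIP shared by both vPC members.

interface loopback1
  description VTEP Primary IP (PIP)
  ipv6 address 2001:db8:0:1::11/128

interface loopback2
  description VTEP anycast Virtual IP (VIP), shared by both vPC members
  ipv6 address 2001:db8:0:1::100/128

interface nve1
  host-reachability protocol bgp
  source-interface loopback1 anycast loopback2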

There is no IPv4 address configured for the purpose of routing or reachability. With vPC, you’re good to go.

IPv4 and VXLANv6: Transporting IPv4 and IPv6 payloads

At this point we probably have some applications or services that require IPv4. With VXLANv6, you can transport not only IPv6 but also IPv4 payloads. The Distributed IP Anycast Gateway (DAG) that provides the integrated routing and bridging (IRB) function of EVPN is supported for IPv4, IPv6, and dual-stacked endpoints residing in the overlay networks. Seamless host mobility and multi-tenant IP subnet routing are also supported, matching the counterpart VXLAN deployment running over an IPv4 transport network (VXLANv4). Cisco also supports Layer 2 transport over VXLANv6; Broadcast, Unknown Unicast, and Multicast (BUM) traffic is handled through ingress replication (aka head-end replication).
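
To illustrate the dual-stack distributed anycast gateway mentioned above, here is a rough NX-OS-style sketch of an overlay SVI serving IPv4 and IPv6 endpoints in one tenant VRF. The VLAN, VNI, VRF name, MAC, and addresses are placeholders, and the tenant VRF plus its L3 VNI are assumed to be configured separately.

feature interface-vlan
feature vn-segment-vlan-based
feature fabric forwarding

! Shared gateway MAC used by every leaf in the fabric
fabric forwarding anycast-gateway-mac 0000.2222.3333

vlan 100
  vn-segment 30100

interface Vlan100
  no shutdown
  vrf member TENANT-A
  ip address 10.1.100.1/24
  ipv6 address 2001:db8:100::1/64
  fabric forwarding mode anycast-gateway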

With IPv4, IPv6, or both as payloads in VXLANv6, we have to make the associated endpoints reachable to the rest of the world. The border node has the capability to terminate VXLANv6-encapsulated traffic, and the decapsulated payload is sent via sub-interfaces with per-VRF peering (aka inter-AS Option A) to the external router. Again, no IPv4 addressing is necessary in the infrastructure.
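
A sketch of such a per-VRF handoff on the border node follows, assuming one dot1q sub-interface per tenant VRF and an eBGP IPv6 session to the external router; the VLAN tag, VRF name, ASNs, and addresses are placeholders.

! Parent interface is a routed port; one sub-interface per tenant VRF
interface Ethernet1/7.100
  encapsulation dot1q 100
  vrf member TENANT-A
  ipv6 address 2001:db8:ffff:100::1/64
  no shutdown

router bgp 65001
  vrf TENANT-A
    address-family ipv6 unicast
    neighbor 2001:db8:ffff:100::2 remote-as 65100
      address-family ipv6 unicast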

What’s next for VXLANv6?


Overlays went a long way to support IPv6 migrations. Even so, underlays are predominantly deployed with IPv4 addressing. VXLANv6 changes the landscape and allows a consistent IPv6 approach, in the underlay, in the overlay, or wherever you need it.

VXLANv6 is enabled for individual VTEPs, vPC VTEPs, Spines with BGP Route-Reflector, and in the role of a Border node. In the near future, VXLANv6 will use PIMv6 for BUM replication in the underlay, and subsequently Tenant Routed Multicast (TRM) over VXLANv6 will become a reality. VXLANv6 will also be enabled on the Border Gateway (BGW), where our Multi-Site architecture can be used with a completely IPv6-only infrastructure, with new DCNM functionality enabling support for all of these capabilities across NX-OS devices.

Saturday 20 April 2019

Change is the only constant – vPC with Fabric Peering for VXLAN EVPN

Optimize Usage of Available Interfaces, Bandwidth, Connectivity


Dual-homing for endpoints is a common requirement, and many Multi-Chassis Link Aggregation (MC-LAG) solutions were built to address this need. Within the Cisco Nexus portfolio, the virtual Port-Channel (vPC) architecture addressed this need from the very early days of NX-OS. With VXLAN, vPC was enhanced to accommodate the needs for dual-homed endpoints in network overlays.

With EVPN becoming the de facto standard control plane for VXLAN, additions to vPC for VXLAN BGP EVPN were required. As the problem space of endpoint multi-homing changes, vPC for VXLAN BGP EVPN changes with it to meet new requirements and use cases. The latest innovation in vPC optimizes the usage of the available interfaces, bandwidth, and overall connectivity: vPC with Fabric Peering removes the need to dedicate a physical Peer Link and changes how MC-LAG is done. vPC with Fabric Peering is shipping in NX-OS 9.2(3).

Active-Active Forwarding Paths in Layer 2, Default Gateway to Endpoints


At Cisco, we continually innovate on our data center fabric technologies, iterating from traditional Spanning-Tree to virtual Port-Channel (vPC), and from Fabric Path to VXLAN.

Traditional vPC moved infrastructures past the limitations of Spanning Tree and allows an endpoint to connect to two different physical Cisco Nexus switches using a single logical interface: a virtual Port-Channel interface. Cisco vPC offers an active-active forwarding path not only for Layer 2 but also for the first-hop gateway function, providing an active-active default gateway to the endpoints. Because the two Cisco Nexus switches appear as one, Spanning Tree does not see any loops, leaving all links active.


vPC for VXLAN BGP EVPN


When vPC was expanded to support VXLAN and VXLAN BGP EVPN environments, Anycast VTEP was added. Anycast VTEP is a shared logical entity, represented by a Virtual IP address, across the two vPC member switches. With this minor increment, the vPC behavior itself didn’t change. Anycast VTEP integrates the vPC technology into the new paradigm of routed networks and overlays. A similar adjustment had been made previously within FabricPath, where a Virtual Switch ID was used: another approach for a common shared virtual entity presented to the network side.

While vPC was enhanced to accommodate different network architectures and protocols, the operational workflow for customers remained the same. As a result, vPC was widely adopted within the industry.

With VXLAN BGP EVPN being a combined Layer 2 and Layer 3 network, where both host and prefix routing exist, MAC, IP, and prefix state information must be exchanged: routing information alongside MAC and ARP/ND. To relax the routing table requirements and the synchronization between vPC members, a selective condition for route advertisement was introduced, “advertise-pip”. With “advertise-pip”, BGP EVPN prefix routes are advertised from the individual vPC member nodes with their Primary IP (PIP) instead of the shared Virtual IP (VIP). As a result, unnecessary routed traffic is kept off the vPC Peer Link and instead delivered directly to the correct vPC member node.
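
A minimal sketch of how this is typically enabled on an NX-OS vPC VTEP is shown below; the ASN is a placeholder, and the rest of the vPC and EVPN configuration is assumed to be in place.

router bgp 65001
  address-family l2vpn evpn
    advertise-pip

interface nve1
  ! Advertise the router MAC tied to the virtual VTEP address
  advertise virtual-rmac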

While many enhancements for convergence and traffic optimization went into vPC for VXLAN BGP EVPN, many implicit changes came with the additional configuration accommodating the vPC Peer Link; at this point, Cisco decided to change the paradigm of using a physical Peer Link.

The vPC Peer Link


The vPC Peer Link is the binding entity that pairs individual switches into a vPC domain. This link is used to synchronize the two individual switches and assists Layer 2 control-plane protocols, like BPDUs or LACP, so that they appear to come from one single node. Where endpoints are dual-homed to both vPC member switches, the Peer Link’s sole purpose is to synchronize the state information described above, but for single-connected endpoints, so-called orphans, the vPC Peer Link can still carry traffic.

With VXLAN BGP EVPN, the Peer Link was required to support additional duties and provided additional signalization when Multicast-based Underlays were used. Further, the vPC Peer Link was used as a backup routing instance in the case of an extended uplink failure towards the Spines or for the per-VRF routing information exchange for orphan networks.

Given all these requirements, making the vPC Peer Link resilient was essential, with Cisco’s recommendation to dedicate at least two physical interfaces to this role.

The aim to simplify topologies and the unique capability of the Cisco Nexus 9000 CloudScale ASICs led to the removal of the physical vPC Peer Link requirement. This freed at least two physical interfaces, increasing interface capacity by nearly 5%.


vPC with Fabric Peering


While changes and adjustments to an existing architecture can always be made, sometimes a more dramatic shift has to be considered. When vPC with Fabric Peering was initially discussed, the removal of the physical vPC Peer Link was the objective, but other improvements rapidly came to mind. As a result, vPC with Fabric Peering follows a different forwarding paradigm while keeping the operational consistency of vPC intact. The following four sections cover the key architectural principles of vPC with Fabric Peering.

Keep existing vPC Features

As we enhanced vPC with Fabric Peering, we wanted to ensure that existing features were not affected. Special focus was placed on ensuring the availability of Border Leaf functionality with external routing peering, VXLAN OAM, and Tenant Routed Multicast (TRM).

Benefits to your Network Design

Every interface has a cost, so every gigabyte counts. By relaxing the physical vPC Peer Link requirement, we not only achieve architectural fidelity but also recover interface and optics cost while optimizing the available bandwidth.

Leveraging leaf/spine topologies and the respective N-way spines, the available paths between any two leafs become ECMP and, as such, potential candidates for vPC Fabric Peering. With all spines now carrying both VXLAN BGP EVPN leaf-to-leaf (East-West) communication and vPC Fabric Peering, the overall use of provisioned bandwidth becomes more optimized. Given that all links are shared, the resiliency of the vPC Peer Link becomes equal to the resiliency of the leaf-to-spine connectivity, a significant increase compared to two physical direct links between the vPC members.

With the infrastructure between the vPC members now shared, the proper classification of vPC Peer Link traffic versus general fabric payload has to be considered. In anticipation of this, vPC Fabric Peering traffic can be classified with a high DSCP marking to ensure timely delivery.


Overview: vPC with Fabric Peering

Another important cornerstone of vPC is the peer keepalive functionality. vPC with Fabric Peering keeps this important failsafe in place but relaxes the requirement of using a separate physical link. The vPC peer keepalive can now run over the spine infrastructure in parallel to the virtual Peer Link. As an alternative, and to increase resiliency, the vPC peer keepalive can still be deployed over the out-of-band management network or any other routed network of choice between the vPC member nodes.
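
As a rough sketch of the resulting vPC domain configuration on one member (the domain ID, addresses, and DSCP value are placeholders, and the keepalive here happens to use the out-of-band management VRF): the physical peer-link interfaces are replaced by a virtual peer-link pointing at the peer’s fabric loopback, while the rest of the vPC and NVE configuration follows the standard vPC VXLAN setup.

vpc domain 100
  peer-keepalive destination 172.16.0.12 source 172.16.0.11 vrf management
  ! Virtual peer-link carried over the leaf/spine fabric, marked with a high DSCP
  virtual peer-link destination 10.1.1.2 source 10.1.1.1 dscp 56
  peer-switch
  peer-gateway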

In addition to the vPC peer keepalive, tracking of the uplinks towards the spines has been introduced to understand the topology more deterministically. The uplink tracking creates a dependency on the vPC primary function and switches the operational primary role depending on each vPC member’s availability in the fabric.

Focus on individual VTEP behavior

The primary use case for vPC has always been dual-homed endpoints. With the traditional approach, however, single-attached endpoints (orphans) were treated like second-class citizens, reachable only via the vPC Peer Link.

When vPC with Fabric Peering was designed, unnecessary traffic over the “virtual” Peer Link was to be avoided by all means, as was the need for per-VRF peering over it.

With this decision, orphan endpoints become first-class citizens, just as dual-homed endpoints are, and the exchange of routing information is done through BGP EVPN instead of per-VRF peering.


Traffic Flow Optimization for vPC and Orphan Host

When using vPC with Fabric Peering, orphan endpoints and networks connected to an individual vPC member are advertised from the VTEP’s Primary IP address (PIP); with vPC over a physical Peer Link, the Virtual IP (VIP) would always be used. With the PIP approach, the forwarding decision from and to an orphan endpoint or network is resolved as part of the BGP EVPN control plane and forwarded with the VXLAN data plane. The forwarding paradigm for these orphan endpoints/networks is the same as it would be with an individual VTEP; the dependency on the vPC Peer Link has been removed. As an additional benefit, consistent forwarding is achieved for orphan endpoints/networks whether they are connected to an individual VTEP or to a vPC domain with Fabric Peering. You can think of a vPC member node in vPC with Fabric Peering as behaving primarily as an individual VTEP, or “always-PIP”, for orphan MAC/IP or IP prefixes.

vPC where vPC is needed

With the paradigm shift to primarily operating an individual vPC member node as a standalone VTEP, the dual-homing functionality only has to be given to specific attachment circuits. As such, vPC functionality only comes into play when the vpc keyword has been used on the attachment circuit. For vPC attachments, the endpoint advertisement is originated with the Virtual IP address (VIP) of the Anycast VTEP. Leveraging this shared VIP, routed redundancy from the fabric side is achieved with extremely fast ECMP failover times.
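
For completeness, a minimal sketch of such an attachment circuit follows (the port-channel and vPC numbers are placeholders); only interfaces carrying the vpc keyword are treated as dual-homed and advertised from the anycast VIP.

interface port-channel 20
  switchport
  switchport mode trunk
  vpc 20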

In traditional vPC, the vPC Peer Link was also used in failure cases of an endpoint’s dual attachment. Because the advertisement of a previously dual-attached endpoint doesn’t change from VIP to PIP during such failures, a Peer Link-equivalent function is still required. If traffic follows the VIP and gets hashed towards the wrong vPC member node, the one with the failed link, that vPC member node will bounce the traffic to the other vPC member.


Traffic redirected in vPC failure cases

vPC with Fabric Peering is shipping as of NX-OS 9.2(3)

Benefits


These enhancements have been delivered without impacting existing vPC features and functionality, in lock-step with the same scale and sub-second convergence achieved by existing vPC deployments.

While adding new features and functions is simple, having an easy migration path is fundamental to deployment. Knowing this, the impact considerations for upgrades, side-grades, or migrations remain paramount, and changing from a vPC Peer Link to vPC Fabric Peering can be easily performed.

vPC with Fabric Peering was primarily designed for VXLAN BGP EVPN networks and is shipping in NX-OS 9.2(3). Even so, this architecture can be equally applied to most vPC environments, as long as a routed leaf/spine topology exists.
