Virtual Extensible LAN (also known as VXLAN) is a network virtualization technology that addresses the scalability problems associated with large cloud computing deployments. With the recent launch of Cisco’s VXLANv6, we’ve taken the well-known Cisco overlay and run it over an IPv6 transport network (underlay). Not only is VXLANv6 fully capable of transporting IPv6, it can also handle IPv4 payloads, an important distinction as many applications and services still require IPv4.
VXLANv6 allows a consistent IPv6 approach, both in the underlay and in the overlay. With the newly shipped Cisco NX-OS 9.3(1) release that delivers VXLANv6, our customers can take advantage of this exciting new technology today.
In this blog we are going to talk about:
◈ A brief overview of VXLANv6
◈ Expansibility and Investment Protection with VXLANv6
◈ IPv4 and IPv6 Coexistence
◈ Where are we going with VXLANv6
Many years ago, when I was struggling to get my modem working, I remember reading that an IETF draft for Internet Protocol version 6 (IPv6) had been filed. At that point in time, the reality of IPv6 was so far away that we talked about retirement before we even considered widespread adoption. But as it always goes in tech, everything comes around much sooner than one anticipates. While IPv6 had a difficult start, it has now become a table-stakes requirement for Applications and Services.
With Network Virtualization, it became easy to tunnel both IPv6 and IPv4 over the top of networks built with IPv4. In these traditional IPv4-Overlay cases, the Tunnel Endpoint (TEP) as well as the transport network (Underlay) reside in the IPv4 address space. The Applications and Services exist in a different addressing space (Overlay), which could be IPv4, IPv6, or Dual-Stack enabled; v4v6-over-v4 is a common theme these days. In the last few years, VXLAN has become a de facto standard for overlays, as it is employed both as a network-based overlay and as a host-based overlay. VXLAN as the data plane, together with BGP EVPN as the control plane, has become the prime deployment choice for modern spine-leaf data centers.
With the expansion of network virtualization using virtual machine and container workloads, infrastructure resources like IP addresses have to be reserved not only for the applications and services, but also for the infrastructure components themselves. As a result, overlap of the IP address space between the underlay and the overlay is often seen, given the exhaustion of unique RFC1918 addresses.
Below are the top reasons you should care about VXLANv6.
Reason 1: One of the most difficult scenarios created by overlapping address space involves network operations, troubleshooting, and monitoring. The IP addresses used for the management and monitoring of the infrastructure are often required to be unique across the different devices. The IP subnets for the management and monitoring stations have the same requirement, and there should be no overlap between management and managed devices. The alternative is network address translation (NAT).
Reason 2: The exhaustion of unique IP addresses is just one of many cases driving us toward IPv6. Other use cases include government regulation, compliance demands, or simply ease of infrastructure IP addressing. While we were reviewing the use cases around IPv6 infrastructure addressing together with the current install base of technology and devices, one simple solution became obvious: VXLAN over an IPv6 underlay, or VXLANv6 for short.
Reason 3: VXLANv6 allows us to use a well-known overlay technology, namely VXLAN, and run it over an IPv6 transport network (Underlay). In the case of VXLANv6, the VXLAN Tunnel Endpoints (VTEPs) are addressed with a global IPv6 address associated with a loopback interface. The reachability of the VTEPs is achieved by using either IPv6 Link-Local or IPv6 global addressing along with an IPv6 capable routing protocol like IS-IS, OSPFv3 or BGP. Considering the option of using IPv6 Link-Local addressing, the subnet calculation and address assignment can be optimized and the underlay setup duration can be significantly reduced.
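To make this concrete, a minimal underlay sketch for a single VTEP might look like the following. The loopback number, addresses, OSPFv3 tag, and interface are illustrative placeholders, not a validated configuration:

```text
! VTEP source: a loopback with a global IPv6 address
interface loopback1
  ipv6 address 2001:db8:100::1/128
  ipv6 router ospfv3 UNDERLAY area 0.0.0.0

! Fabric uplink as a routed port; link-local-only addressing
! avoids per-link subnet planning (assumes platform support)
interface Ethernet1/1
  no switchport
  ipv6 address use-link-local-only
  ipv6 router ospfv3 UNDERLAY area 0.0.0.0

router ospfv3 UNDERLAY
  router-id 10.1.1.1   ! looks like IPv4, but is only an identifier
```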
In addition to the VTEP and underlay topology and reachability, the overlay control plane also needs to be IPv6 enabled. This is the case with Multi-Protocol BGP and especially the EVPN address-family: peering, next-hop handling, and the exchange of routes have all been enabled for IPv6.
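As a sketch of what IPv6-enabled EVPN peering toward a spine route-reflector could look like (the AS number, neighbor address, and loopback are assumed values, not a tested configuration):

```text
router bgp 65001
  router-id 10.1.1.1                 ! identifier only, no IPv4 reachability implied
  neighbor 2001:db8:100::254 remote-as 65001
    description Spine Route-Reflector, reached via IPv6
    update-source loopback1
    address-family l2vpn evpn
      send-community extended
```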
At this point, we have not configured a single IPv4 address for the purpose of routing or reachability, neither for the underlay nor for the overlay itself, because IPv6 does the job well. The remaining numbering that leverages an IPv4 notation is in fields like the Router-ID and the Route Distinguisher. Even though these values look like IPv4 addresses, they are only identifiers and could be any combination of numbers.
Capabilities
VXLANv6 and vPC: Connecting Servers Redundantly
Once the VTEPs are running VXLANv6, the next step is to connect servers redundantly. vPC is the answer. The vPC Peer Keepalive has been enhanced to employ IPv6, either on the management interface or via the front-panel ports. With VXLAN and vPC, we use the concept of Anycast to share the same VTEP IP address between both vPC members. While secondary IP addresses are used for this in IPv4, in IPv6 all the addresses on a given interface are of equal priority. This little detail led us to expand the VTEP’s source-interface command to allow the selection of the loopback for the Primary IP (PIP) and the loopback for the Virtual IP (VIP) separately.
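With two loopbacks, one carrying the Primary IP (PIP) unique to each switch and one carrying the Virtual IP (VIP) shared across the vPC pair, the expanded source-interface selection could be sketched as follows (loopback numbers and addresses are placeholders):

```text
interface loopback1
  description VTEP Primary IP (PIP), unique per vPC member
  ipv6 address 2001:db8:100::1/128

interface loopback2
  description VTEP Virtual IP (VIP), shared by both vPC members
  ipv6 address 2001:db8:100::64/128

interface nve1
  host-reachability protocol bgp
  source-interface loopback1 anycast loopback2
```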
There is no IPv4 address configured for the purpose of routing or reachability. With vPC, you’re good to go.
IPv4 and VXLANv6: Transporting IPv4 and IPv6 payloads
At this point, we probably have some Applications or Services that require IPv4. With VXLANv6, you can transport not only IPv6 but also IPv4 payloads. The Distributed IP Anycast Gateway (DAG) that provides the integrated routing and bridging (IRB) function of EVPN is supported for IPv4, IPv6, and dual-stacked endpoints residing in the overlay networks. Seamless host mobility and Multi-Tenant IP Subnet routing are also supported, on par with the counterpart VXLAN deployment running over an IPv4 transport network (VXLANv4). Cisco also supports Layer-2 transport over VXLANv6. Broadcast, Unknown Unicast, and Multicast (BUM) traffic is handled through Ingress Replication (aka Head-End Replication).
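A dual-stack Distributed IP Anycast Gateway for one overlay network might be sketched like this; the VLAN, VRF name, gateway addresses, and anycast-gateway MAC are illustrative assumptions:

```text
fabric forwarding anycast-gateway-mac 2020.0000.00aa

interface Vlan10
  vrf member Tenant-A
  ip address 192.168.10.1/24        ! IPv4 payload gateway in the overlay
  ipv6 address 2001:db8:10::1/64    ! IPv6 payload gateway in the overlay
  fabric forwarding mode anycast-gateway
```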
With IPv4, IPv6, or both payloads in VXLANv6, we have to make the associated endpoints reachable to the rest of the world. The Border node has the capability to terminate VXLANv6-encapsulated traffic, while the decapsulated payload is sent via sub-interfaces with per-VRF peering (aka inter-AS Option A) to the external router. Again, no IPv4 addressing is necessary in the infrastructure.
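The inter-AS Option A hand-off on the Border node could be pictured as below; the sub-interface, dot1q tag, VRF name, addresses, and AS numbers are assumptions for illustration:

```text
interface Ethernet1/10.100
  encapsulation dot1q 100
  vrf member Tenant-A
  ipv6 address 2001:db8:ff::1/64    ! per-VRF link toward the external router

router bgp 65001
  vrf Tenant-A
    neighbor 2001:db8:ff::2 remote-as 65100
      address-family ipv6 unicast
```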
What’s next for VXLANv6?
Overlays have gone a long way toward supporting IPv6 migrations. Even so, underlays are still predominantly deployed with IPv4 addressing. VXLANv6 changes the landscape and allows a consistent IPv6 approach, in the underlay, in the overlay, or wherever you need it.
VXLANv6 is enabled for individual VTEPs, vPC VTEPs, Spines with BGP Route-Reflector, and in the role of a Border node. In the near future, VXLANv6 will use PIMv6 for BUM replication in the underlay, and subsequently Tenant Routed Multicast (TRM) over VXLANv6 will become a reality. VXLANv6 will also be enabled on the Border Gateway (BGW), where our Multi-Site architecture can be used with a completely IPv6-only infrastructure, with new DCNM functionality enabling support for all of these newer capabilities across all NX-OS devices.