Monday, 14 December 2020

Cisco SD-WAN Integration with AWS Transit Gateway Connect Raises the Bar for Cloud Performance and Scale

As SD-WAN enterprise customers increase their consumption of business-critical applications from the cloud, or directly as SaaS over the internet, there is a growing need for on-demand SD-WAN extension to the cloud or SaaS of choice.

Cisco has partnered with AWS to deliver Cisco SD-WAN Cloud OnRamp, extending our SD-WAN fabric to AWS workloads.

As our customers transition their workloads to AWS, Cisco continues to build on this partnership to accelerate their SD-WAN journey to AWS.

In our current integrated solution between Cisco SD-WAN and AWS Transit Gateway, Cisco SD-WAN Cloud OnRamp enables users to connect to their AWS workloads using the Cisco SD-WAN controller (vManage). The Cloud OnRamp feature automates Cisco SD-WAN fabric extension from branch routers to Amazon VPCs. In addition, the integration with TGW Network Manager enables seamless network visibility through either vManage or the AWS console, providing a comprehensive view of the on-premises network, including the WAN, and the customer’s AWS network. All underlying tasks, such as spinning up Cisco SD-WAN cloud routers (for example, the Catalyst 8000V Edge Software), creating the transit VPC, establishing IPsec VPN tunnels to the AWS TGW, and forming BGP adjacencies, are completely automated. Customers can also extend network segmentation policies from on-premises to the AWS Cloud via a simple-to-use GUI in Cloud OnRamp.

The existing solution with Cloud OnRamp automates the entire orchestration of the TGW and VPC networking, reducing time-consuming manual tasks to a matter of minutes.

We have now integrated further with AWS for customers who require throughput in excess of the 1.25 Gbps possible today over a single IPsec tunnel and who prefer not to manage multiple tunnels to scale bandwidth beyond that limit. Other customers have security or compliance considerations and need private IP addressing along the entire path from branch to AWS.

In response to our customer requirements, we are excited to announce our latest integration of Cisco SD-WAN Cloud OnRamp with AWS Transit Gateway Connect.

This latest offering with AWS Transit Gateway Connect builds upon our existing AWS relationship to provide a tightly integrated solution with additional key benefits:

1. Reduced costs with higher bandwidth connections: The new integration between Cisco and AWS uses native GRE tunnels instead of IPsec tunnels, offering up to 4 times the bandwidth and eliminating the challenges and costs of establishing and maintaining a multitude of IPsec tunnels (see the configuration sketch after this list).

2. Enhanced security: By removing the need for public IP addresses, customers with strict security requirements can deploy the solution using private IP addresses, significantly reducing the attack surface, reducing risk, and streamlining compliance.

3. Increased route limit: This new architecture increases the number of BGP-advertised routes many-fold over the existing 100-route limit.

4. Increased visibility: Integration with Transit Gateway Network Manager provides an increased level of visibility, such as performance metrics and telemetry data, not only from the third-party appliances but also from the branch appliances sitting behind them. This allows customers to monitor the end-to-end network across AWS and on-premises.
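To make the new connectivity model concrete, here is a minimal IOS-XE sketch of what a Transit Gateway Connect peer looks like from the router side: a GRE tunnel to the TGW plus a BGP session over link-local inside addresses drawn from a /29 in 169.254.0.0/16. All interfaces, addresses, and AS numbers below are illustrative, and in practice Cloud OnRamp automates this configuration end to end:

interface Tunnel1
 ip address 169.254.100.1 255.255.255.248
 tunnel source GigabitEthernet1
 tunnel destination 10.0.10.10
 tunnel mode gre ip
!
router bgp 65001
 neighbor 169.254.100.2 remote-as 64512
 address-family ipv4
  neighbor 169.254.100.2 activate

Because the tunnel is plain GRE rather than IPsec, there is no per-tunnel encryption bottleneck on this path, which is where the bandwidth gain over a 1.25 Gbps IPsec tunnel comes from.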

Tuesday, 12 May 2020

Running Cisco Catalyst C9800-CL Wireless Controller in Google Cloud Platform

When I heard that the Cisco Catalyst 9800 Wireless Controller for Cloud was supported as an IaaS solution on Google Cloud with Cisco IOS-XE version 16.12.1, I wanted to give it a try.

Built from the ground up for intent-based networking and Cisco DNA, Cisco Catalyst 9800 Series Wireless Controllers are Cisco IOS® XE based, integrate the RF excellence of Cisco Aironet® access points, and are built on the three pillars of network excellence: always on, secure, and deployed anywhere (on-premises, private, or public cloud).

I had a Cisco Catalyst 9300 Series switch and a Wi-Fi 6 Cisco Catalyst 9117 access point with me. I had internet connectivity of course, and that should be enough to reach the Cloud, right?

I was basically about to build the best-in-class wireless test possible, with the best Wi-Fi 6 access point on the market (AP9117AX), connected to the best LAN switching technology (a Catalyst 9300 Series switch with mGig/802.3bz and UPOE/802.3bt), controlled by the best wireless LAN controller (C9800-CL) running the best operating system (Cisco IOS-XE), and deployed in what I consider the best public cloud platform (GCP).

Let me show you how simple and great it was!

(NOTE: Please refer to the Deployment Guide and Release Notes for further details. This blog is not intended to be a guide, but rather to share my experience, show how to quickly test the solution, and highlight the aspects of the process that excited me the most.)

The only supported deployment mode is with a managed VPN between your premises and Google Cloud (as shown in the previous picture). For simplification and testing purposes, I just used the public IP address of the cloud instance to build my setup.

Virtual Private Cloud or VPC

GCP creates a ‘default’ VPC that we could have used for simplicity, but I preferred (and it is recommended) to create a specific VPC (mywlan-network1) for this lab under this specific project (C9800-iosxe-gcp).

I also selected the region closest to me (europe-west1) and a specific IP address range, 192.168.50.0/24, from which GCP automatically assigns an internal IP address for my Wireless LAN Controller (WLC) and a default gateway in that subnet (custom-subnet-eu-w1).
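For reference, the same VPC and subnet can be created from Cloud Shell with two gcloud commands along these lines (the names match my lab; adjust them to your project):

gcloud compute networks create mywlan-network1 --subnet-mode=custom
gcloud compute networks subnets create custom-subnet-eu-w1 --network=mywlan-network1 --region=europe-west1 --range=192.168.50.0/24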

A very interesting feature in GCP is that network routing is built in; you don’t have to provision or manage a router, GCP does it for you. As you can see, for mywlan-network1 a subnet route is configured for 192.168.50.0/24 (named default-route-c091ac9a979376ce) along with a default route to the internet (0.0.0.0/0). Every region in GCP has a default subnet assigned, making this process even simpler and more automated if needed.
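You can list the routes GCP created for the VPC with a quick filter, something like:

gcloud compute routes list --filter="network:mywlan-network1"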

Firewall Rules 

Another thing you don’t have to provision, and that GCP manages for you, is the firewall. VPCs give you a global distributed firewall that you can control to restrict access to instances, for both incoming and outgoing traffic. By default, all ingress (incoming) traffic is blocked. To connect to the C9800-CL instance once it is up and running, we need to allow SSH and HTTP/HTTPS communication by adding ingress firewall rules. We will also allow ICMP, which is very useful for quick IP reachability checks.

We will also allow CAPWAP traffic (UDP 5246-5247) so that APs can join the WLC.
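If you prefer the command line over the console, equivalent ingress rules can be created with gcloud, for example (the c9800 target tag is simply one I chose for this lab):

gcloud compute firewall-rules create allow-mgmt --network=mywlan-network1 --direction=INGRESS --allow=tcp:22,tcp:80,tcp:443,icmp --target-tags=c9800
gcloud compute firewall-rules create allow-capwap --network=mywlan-network1 --direction=INGRESS --allow=udp:5246-5247 --target-tags=c9800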

You can define firewall rules in terms of metadata tags on Compute Engine instances, which is really convenient.

As you can see, these are ACLs based on targets with specific tags, meaning that I don’t need to base my access lists on complex IP addresses but rather on tags that identify both sources and destinations. In this case, I permit HTTP, HTTPS, ICMP, or CAPWAP to all targets or just to specific targets, very similar to what we do with Cisco TrustSec and SGTs. In my case, the C9800-CL belongs to all those tags, so I’m basically allowing all the protocols needed.

Launching the Cisco Catalyst C9800-CL image on Google Cloud 

You launch the Cisco Catalyst 9800 directly from the Google Cloud Platform Marketplace. It is deployed on a Google Compute Engine (GCE) instance (VM).

You then prepare for the deployment through a wizard that will ask you for parameters like hostname, credentials, zone to deploy, scale of your instance, networking parameters, etc. Really easy and intuitive.

And GCP will magically deploy the system for you!

(The external IP is ephemeral, so it’s not a big deal; it will only be used during this test while the instance is running.)
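If you want to grab that ephemeral IP from the CLI rather than the console, a gcloud one-liner like this should do it (the instance name c9800-cl-vm and the zone are hypothetical; substitute your own):

gcloud compute instances describe c9800-cl-vm --zone=europe-west1-b --format='get(networkInterfaces[0].accessConfigs[0].natIP)'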

MJIMENA-M-M0KF:~ mjimena$ ping 35.189.203.140

PING 35.189.203.140 (35.189.203.140): 56 data bytes

64 bytes from 35.189.203.140: icmp_seq=0 ttl=247 time=33.608 ms

64 bytes from 35.189.203.140: icmp_seq=1 ttl=247 time=31.220 ms

I have IP reachability. Let me try to open a web browser and… I’m in!

After some initial GUI setup parameters, my C9800-CL is ready, with a WLAN (SSID) configured but no access point registered yet.

I access the C9800 CLI over SSH (remember the firewall rule we configured in GCP).

MJIMENA-M-M0KF:~ mjimena$ ssh admin@35.189.203.140

The authenticity of host '35.189.203.140 (35.189.203.140)' can't be established.

RSA key fingerprint is SHA256:HI10434rnGdfQyHjxBA92ywdkib6nBYG6jykNRTddXg.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '35.189.203.140' (RSA) to the list of known hosts.

Password:

c9800-cl#

Let’s double check the version we are running:

c9800-cl#show ver | sec Version

Cisco IOS XE Software, Version 16.12.01

Cisco IOS Software [Gibraltar], C9800-CL Software (C9800-CL-K9_IOSXE), Version 16.12.1, RELEASE SOFTWARE (fc4)

Any neighbor there in the Cloud?

c9800-cl#show cdp neighbors

Capability Codes: R - Router, T - Trans Bridge, B - Source Route Bridge
                  S - Switch, H - Host, I - IGMP, r - Repeater, P - Phone,
                  D - Remote, C - CVTA, M - Two-port Mac Relay

Device ID        Local Intrfce     Holdtme    Capability  Platform  Port ID

Total cdp entries displayed : 0

Ok, makes sense…

The C9800 has a public IP address associated with its internal IP address. We need to configure the controller to reply to AP join requests with the public IP and not the private one. For that, enter the following global configuration command, all on one line:
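Here it is as entered from the CLI; the public IP is the ephemeral external address GCP assigned to my instance, and the command matches the line visible in the running configuration below:

c9800-cl(config)#wireless management interface GigabitEthernet1 nat public-ip 35.189.203.140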

c9800-cl#sh run | i public

wireless management interface GigabitEthernet1 nat public-ip 35.189.203.140

And indeed, no AP yet.

c9800-cl#show ap summary

Number of APs: 0

c9800-cl#

Let’s plug in that Cisco AP9117AX!

I connect a brand new Cisco AP9117AX to an mGig/UPOE port on a Cisco Catalyst 9300 switch, running at 5 Gbps over copper.

I connect to the console and type the following command to prime the AP to the GCP C9800-CL instance:

AP0CD0.F894.16BC#capwap ap primary-base c9800-cl 35.189.203.140

The following command resets the CAPWAP connection with the WLC to accelerate the join process.

AP0CD0.F894.16BC#capwap ap restart

I check reachability between the AP at home and my WLC in GCP.

AP0CD0.F894.16BC#ping 35.189.203.140

Sending 5, 100-byte ICMP Echos to 35.189.203.140, timeout is 2 seconds

!!!!!

The Cisco AP9117AX is joining and downloading the IOS-XE image.

My GUI is now showing the AP downloading the right image before joining.

My setup is done!

Monday, 11 May 2020

Cisco goes SONiC on Cisco 8000

Since its introduction by Microsoft and OCP in 2016, SONiC has gained momentum as the open-source operating system of choice for cloud-scale data center networks. The Switch Abstraction Interface (SAI) has been instrumental in adapting SONiC to a variety of underlying hardware. SAI provides a consistent interface to the ASIC, allowing networking vendors to rapidly enable SONiC on their platforms while innovating in the areas of silicon and optics via vendor-specific extensions. This enables cloud-scale providers to have a common operational model while benefiting from innovations in the hardware. The following figure illustrates a high-level overview of the platform components that map SONiC to a switch.

SONiC has traditionally been supported on single-NPU systems, with one instance each of the BGP, SwSS (Switch State Service), and syncd containers. It has recently been extended to support multiple NPUs in a system. This is accomplished by running multiple instances of BGP, syncd, and the other relevant containers, one per NPU.
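For instance, on a multi-NPU (multi-ASIC) SONiC system, the per-NPU container instances are distinguished by a numeric suffix. A container listing would look roughly like the following (illustrative, abbreviated output):

admin@sonic:~$ docker ps --format '{{.Names}}'
bgp0
bgp1
swss0
swss1
syncd0
syncd1
...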

SONiC on Cisco 8000


As part of Cisco’s continued collaboration with the OCP community, and following up on support for SONiC on Nexus platforms, Cisco now supports SONiC on fixed and modular Cisco 8000 Series routers. While support for SONiC on fixed, single-NPU systems is an incremental step, bringing another Cisco ASIC and platform under SONiC/SAI, support for SONiC on a modular platform marks a significant milestone in adapting modular routing systems to run SONiC in a fully distributed way. In the rest of this blog, we will look at the details of the chassis-based router and how SONiC is implemented on Cisco 8000 modular systems.

Cisco 8000 modular system architecture


Let’s start by looking deeper into a Cisco 8000 modular system. A modular system has the following key components: 1) one or two Route Processors (RPs), 2) multiple line cards (LCs), 3) multiple fabric cards (FCs), and 4) chassis commons such as fans, power supply units, etc. The following figure illustrates the RP, LC, and FC components, along with their connectivity.

The NPUs on the line cards and the fabric cards within a chassis are connected in a Clos network. The NPUs on each line card are managed by the CPU on the corresponding line card, and the NPUs on all the fabric cards are managed by the CPU(s) on the RP cards. The line card and fabric NPUs are connected over the backplane. All the nodes (LC, RP) are connected to the external world via an Ethernet switch network within the chassis.

This structure logically represents a single-layer leaf-spine network in which each of the leaf and spine nodes is a multi-NPU system.

From a forwarding standpoint, the Cisco 8000 modular system works as a single forwarding element with the following functions split among the line card and fabric NPUs:

◉ The ingress line card NPU performs functions such as tunnel termination, packet forwarding lookups, multi-stage ECMP load balancing, and ingress features such as QoS, ACLs, and inbound mirroring. Packets are then forwarded towards the appropriate egress line card NPU using a virtual output queue (VOQ) that represents the outgoing interface, by encapsulating the packet in a fabric header and an NPU header. Packets are sprayed across the links towards the fabric to achieve packet-by-packet load balancing.

◉ The fabric NPU processes the incoming fabric header and sends the packet over one of the links towards the egress line card NPU.

◉ The egress LC NPU processes the incoming packet from the fabric, using the information in the NPU header to perform egress functions such as packet encapsulation, priority markings, and egress features such as QoS and ACLs.

In a single-NPU fixed system, the ingress and egress functions described above are all performed in the same NPU, as the fabric NPU functionality obviously doesn’t exist.

SONiC on Cisco 8000 modular systems


The internal Clos network enables the principles of leaf-spine SONiC design to be implemented in the Cisco 8000 modular system. The following figure shows a SONiC-based leaf-spine network:

Each node in this leaf-spine network runs an independent instance of SONiC. The leaf and spine nodes are connected over standard Ethernet ports and support Ethernet/IP-based forwarding within the network. Standard monitoring and troubleshooting techniques such as filters, mirroring, and traps can also be employed in this network at the leaf and spine layers. This is illustrated in the figure below.

Each line card runs an instance of SONiC on the line card CPU, managing the NPUs on that line card. One instance of SONiC runs on the RP CPU, managing all the NPUs on the fabric cards. The line card SONiC instances represent the leaf nodes and the RP SONiC instance represents the spine node in a leaf-spine topology.

The out-of-band Ethernet network within the chassis provides external connectivity to manage each of the SONiC instances.

Leaf-Spine Datapath Connectivity

This is where the key difference between a leaf-spine network and the leaf-spine connectivity within a chassis comes up. As discussed above, a leaf-spine network enables Ethernet/IP-based packet forwarding between the nodes. This allows standard monitoring and troubleshooting tools to be used on the spine node as well as on the leaf-spine links.

Traditional forwarding within a chassis is based on a fabric implementation using proprietary headers between line card and fabric NPUs. In cell-based fabrics, the packet is further split into fixed- or variable-sized cells and sprayed across the available fabric links. While this model allows the most optimal link utilization, it doesn’t allow standards-based monitoring and troubleshooting tools to be used to manage intra-chassis traffic.

The Cisco Silicon One ASIC has a unique ability to enable Ethernet/IP-based packet forwarding within the chassis, as it can be configured in either network mode or fabric mode. As a result, we use the same ASIC on the line cards and fabric cards: the interfaces between the line card and fabric are configured in fabric mode, while the network-facing interfaces on the line card are configured in network mode.

This ASIC capability is used to implement the leaf-spine topology within the Cisco 8000 chassis by configuring the line card–fabric links in network mode, as illustrated below.

SONiC instances on the line cards exchange routes using per-NPU BGP instances that peer with each other. SONiC on each line card thus runs one instance of BGP per NPU on the line card, which is typically a small number (low single digits). The RP SONiC instance, on the other hand, manages a larger number of fabric NPUs. To optimize the design, the fabric NPUs are instead configured in a point-to-point cross-connect mode, providing virtual pipe connectivity between every pair of line card NPUs. This cross-connect can be implemented using VLANs or other similar techniques.
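As a rough illustration of that per-NPU peering, a line card SONiC instance defines its BGP neighbors in config_db; a fragment might look like this (the addresses, ASNs, and names are hypothetical):

"BGP_NEIGHBOR": {
    "10.1.0.2": {
        "asn": "65101",
        "name": "LC2-NPU0",
        "local_addr": "10.1.0.1"
    }
}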

Packets across the fabric are still exchanged as Ethernet frames, enabling monitoring tools such as mirroring, sFlow, etc., to be enabled on the fabric NPUs, thus providing end-to-end visibility of network traffic, including intra-chassis flows.
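On SONiC, turning on that visibility amounts to a couple of CLI commands. For example, an ERSPAN mirror session and an sFlow collector could be configured roughly as follows (all IP addresses here are illustrative):

admin@sonic:~$ sudo config mirror_session add fab-mirror 10.2.0.1 10.9.9.9 8 100
admin@sonic:~$ sudo config sflow enable
admin@sonic:~$ sudo config sflow collector add col0 10.9.9.9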

For use cases that need fabric-based packet forwarding within the chassis, the line card–fabric links can be reconfigured to operate in fabric mode, allowing the same hardware to cater to a variety of use cases.