Tuesday 12 May 2020

Running Cisco Catalyst C9800-CL Wireless Controller in Google Cloud Platform

When I heard that the Cisco Catalyst 9800 Wireless Controller for Cloud was supported as an IaaS solution on Google Cloud with Cisco IOS-XE version 16.12.1, I wanted to give it a try.

Built from the ground up for intent-based networking and Cisco DNA, Cisco Catalyst 9800 Series Wireless Controllers are Cisco IOS® XE based, integrate the RF excellence of Cisco Aironet® access points, and are built on the three pillars of network excellence: always on, secure, and deployed anywhere (on premises, private, or public cloud).

I had a Cisco Catalyst 9300 Series switch and a Wi-Fi 6 Cisco Catalyst 9117 access point with me. I had internet connectivity of course, and that should be enough to reach the Cloud, right?

I was basically about to build the best-in-class wireless test possible, with the best Wi-Fi 6 Access Point in the market (AP9117AX), connected to the best LAN switching technology (Catalyst 9300 Series switch with mGig/802.3bz and UPOE/802.3bt), controlled by the best Wireless LAN Controller (C9800-CL) running the best Operating System (Cisco IOS-XE), and deployed in what I consider the best public Cloud platform (GCP).

Let me show you how simple and great it was!

(NOTE: Please refer to the Deployment Guide and Release Notes for further details. This blog is not meant to be a guide, but rather to share my experience, show how to quickly test the solution, and highlight the aspects of the process that excited me the most.)

The only supported deployment mode uses a managed VPN between your premises and Google Cloud. For simplicity and testing purposes, I just used the public IP address of the cloud instance to build my setup.

Virtual Private Cloud or VPC

GCP creates a ‘default’ VPC that we could have used for simplicity, but I preferred (and it is recommended) to create a dedicated VPC (mywlan-network1) for this lab under its own project (C9800-iosxe-gcp).

I also selected the region closest to me (europe-west1) and chose a specific IP address range, 192.168.50.0/24, from which GCP automatically assigns an internal IP address for my Wireless LAN Controller (WLC) and a default gateway in that subnet (custom-subnet-eu-w1).
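
For reference, the same VPC and subnet can be created from the gcloud CLI. This is a sketch using the names and range from my lab; adjust them to your own environment:

```shell
# Create a custom-mode VPC for the lab
gcloud compute networks create mywlan-network1 --subnet-mode=custom

# Create the subnet the WLC will live in, in the region closest to me
gcloud compute networks subnets create custom-subnet-eu-w1 \
    --network=mywlan-network1 \
    --region=europe-west1 \
    --range=192.168.50.0/24
```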

A very interesting feature of GCP is that network routing is built in; you don’t have to provision or manage a router. GCP does it for you. For mywlan-network1, a route is configured for 192.168.50.0/24 (named default-route-c091ac9a979376ce for internal DNS resolution), along with a default route to the internet (0.0.0.0/0). Every region in GCP has a default subnet assigned, making this whole process even simpler and more automated if needed.
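
If you want to check what GCP built for you, the system-generated routes can be listed from the gcloud CLI (the route names are auto-generated, so yours will differ from mine):

```shell
# List the automatically created routes for the lab VPC
gcloud compute routes list --filter="network:mywlan-network1"
```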

Firewall Rules 

Another thing you don’t have to provision and that GCP manages for you: a firewall. VPCs give you a global distributed firewall you can control to restrict access to instances, both incoming and outgoing traffic. By default, all ingress traffic (incoming) is blocked. To connect to the C9800-CL instance once it is up and running, we need to allow SSH and HTTP/HTTPS communication by adding the ingress firewall rules. We will also allow ICMP, very useful for quick IP reachability checking.

We will also allow CAPWAP traffic (UDP 5246-5247) so the AP can join the WLC.

You can define firewall rules in terms of metadata tags on Compute Engine instances, which is really convenient.

These are ACLs based on targets with specific tags, meaning that I don’t need to base my access lists on complex IP addresses but rather on tags that identify both sources and destinations. In this case, I permit HTTP, HTTPS, ICMP, or CAPWAP to all targets or just to specific targets, very similar to what we do with Cisco TrustSec and SGTs. In my case, the C9800-CL carries all those tags, so I’m effectively allowing all the protocols needed.
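
As a sketch, tag-based rules like these can be created with the gcloud CLI. The rule and tag names below are my own illustrations; adjust them to your environment:

```shell
# Management access (SSH, HTTP/HTTPS) plus ICMP for quick reachability checks
gcloud compute firewall-rules create allow-wlc-mgmt \
    --network=mywlan-network1 \
    --direction=INGRESS \
    --allow=tcp:22,tcp:80,tcp:443,icmp \
    --target-tags=wlc

# CAPWAP control and data traffic from the APs
gcloud compute firewall-rules create allow-capwap \
    --network=mywlan-network1 \
    --direction=INGRESS \
    --allow=udp:5246-5247 \
    --target-tags=wlc
```

Any Compute Engine instance carrying the wlc tag then picks up these rules automatically.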

Launching the Cisco Catalyst C9800-CL image on Google Cloud 

You launch the Cisco Catalyst 9800 directly from the Google Cloud Platform Marketplace. It is deployed on a Google Compute Engine (GCE) instance (VM).

You then prepare the deployment through a wizard that asks for parameters such as hostname, credentials, deployment zone, instance size, networking parameters, etc. Really easy and intuitive.

And GCP will magically deploy the system for you!

(The external IP is ephemeral, so it's not a big deal; it will only be used during this test, while the instance is running.)

MJIMENA-M-M0KF:~ mjimena$ ping 35.189.203.140

PING 35.189.203.140 (35.189.203.140): 56 data bytes

64 bytes from 35.189.203.140: icmp_seq=0 ttl=247 time=33.608 ms

64 bytes from 35.189.203.140: icmp_seq=1 ttl=247 time=31.220 ms

I have IP reachability. Let me try to open a web browser and… I’m in!

After some initial GUI setup parameters, my C9800-CL is ready. With a WLAN (SSID) configured but with no Access Point registered yet.

I access the C9800 CLI over SSH (remember the firewall rule we configured in GCP).

MJIMENA-M-M0KF:~ mjimena$ ssh admin@35.189.203.140

The authenticity of host '35.189.203.140 (35.189.203.140)' can't be established.

RSA key fingerprint is SHA256:HI10434rnGdfQyHjxBA92ywdkib6nBYG6jykNRTddXg.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '35.189.203.140' (RSA) to the list of known hosts.

Password:

c9800-cl#

Let’s double check the version we are running:

c9800-cl#show ver | sec Version

Cisco IOS XE Software, Version 16.12.01

Cisco IOS Software [Gibraltar], C9800-CL Software (C9800-CL-K9_IOSXE), Version 16.12.1, RELEASE SOFTWARE (fc4)

Any neighbor there in the Cloud?

c9800-cl#show cdp neighbors

Capability Codes: R - Router, T - Trans Bridge, B - Source Route Bridge
                  S - Switch, H - Host, I - IGMP, r - Repeater, P - Phone,
                  D - Remote, C - CVTA, M - Two-port Mac Relay

Device ID        Local Intrfce     Holdtme    Capability  Platform  Port ID

Total cdp entries displayed : 0

Ok, makes sense…

The C9800 has a public IP address associated with its internal IP address. We need to configure the controller to reply to AP join requests with the public IP, not the private one. For that, enter the following global configuration command, all on one line:

c9800-cl(config)#wireless management interface GigabitEthernet1 nat public-ip 35.189.203.140

We can then verify the configuration:

c9800-cl#sh run | i public

wireless management interface GigabitEthernet1 nat public-ip 35.189.203.140

And indeed, no AP yet.

c9800-cl#show ap summary

Number of APs: 0

c9800-cl#

Let’s plug that Cisco AP9117AX!

I connect a brand-new Cisco AP9117AX to an mGig/UPOE port on a Cisco Catalyst 9300 switch, at 5 Gbps over copper.

I connect to the console and type the following command to prime the AP to the GCP C9800-CL instance:

AP0CD0.F894.16BC#capwap ap primary-base c9800-cl 35.189.203.140

This command resets the AP’s connection with the WLC to accelerate the join process.

AP0CD0.F894.16BC#capwap ap restart

I check reachability between AP at home and my WLC in GCP.

AP0CD0.F894.16BC#ping 35.189.203.140

Sending 5, 100-byte ICMP Echos to 35.189.203.140, timeout is 2 seconds

!!!!!

The Cisco AP9117AX is joining and downloading the IOS-XE image.

My GUI is now showing the AP downloading the right image before joining.

My setup is done!

Monday 11 May 2020

Cisco goes SONiC on Cisco 8000

Since its introduction by Microsoft and OCP in 2016, SONiC has gained momentum as the open-source operating system of choice for cloud-scale data center networks. The Switch Abstraction Interface (SAI) has been instrumental in adapting SONiC to a variety of underlying hardware. SAI provides a consistent interface to the ASIC, allowing networking vendors to rapidly enable SONiC on their platforms while innovating in the areas of silicon and optics via vendor-specific extensions. This enables cloud-scale providers to have a common operational model while benefiting from innovations in the hardware. The following figure illustrates a high-level overview of the platform components that map SONiC to a switch.

SONiC has traditionally been supported on single-NPU systems, with one instance each of the BGP, SwSS (Switch State Service), and Syncd containers. It has recently been extended to support multiple NPUs in a system. This is accomplished by running multiple instances of BGP, Syncd, and the other relevant containers, one per NPU.

SONiC on Cisco 8000


As part of Cisco’s continued collaboration with the OCP community, and following up on support for SONiC on Nexus platforms, Cisco now supports SONiC on fixed and modular Cisco 8000 Series routers. While support for SONiC on fixed, single-NPU systems is an incremental step, bringing another Cisco ASIC and platform under SONiC/SAI, support for SONiC on a modular platform marks a significant milestone in adapting modular routing systems to support SONiC in a fully distributed way. In the rest of this blog, we will look at the details of the chassis-based router and how SONiC is implemented on Cisco 8000 modular systems.

Cisco 8000 modular system architecture


Let’s start by looking deeper into a Cisco 8000 modular system. A modular system has the following key components: 1) one or two Route Processors (RPs), 2) multiple line cards (LCs), 3) multiple fabric cards (FCs), and 4) chassis commons such as fans, power supply units, etc. The following figure illustrates the RP, LC, and FC components, along with their connectivity.

The NPUs on the line cards and the fabric cards within a chassis are connected in a CLOS network. The NPUs on each line card are managed by the CPU on the corresponding line card and the NPUs on all the fabric cards are managed by the CPU(s) on the RP cards. The line card and fabric NPUs are connected over the backplane. All the nodes (LC, RP) are connected to the external world via an Ethernet switch network within the chassis.

This structure logically represents a single-layer leaf-spine network in which each leaf and spine node is a multi-NPU system.

From a forwarding standpoint, the Cisco 8000 modular system works as a single forwarding element with the following functions split among the line card and fabric NPUs:

◉ Ingress line card NPU performs functions such as tunnel termination, packet forwarding lookups, multi-stage ECMP load balancing, and ingress features such as QoS, ACL, inbound mirroring, and so on. Packets are then forwarded towards the appropriate egress line card NPU using a virtual output queue (VOQ) that represents the outgoing interface, by encapsulating the packet in a fabric header and an NPU header. Packets are sprayed across the links towards the fabric to achieve a packet-by-packet load balancing.

◉ Fabric NPU processes the incoming fabric header and sends the packet over one of the links towards the egress line card NPU.

◉ Egress LC NPU processes the incoming packet from the fabric using the information in the NPU header to perform the egress functions on the packet such as packet encapsulation, priority markings, and egress features such as QoS, ACL and so on.

In a single NPU fixed system, the ingress and egress functions described above are all performed in the same NPU as the fabric NPU functionality obviously doesn’t exist.

SONiC on Cisco 8000 modular systems


The internal CLOS enables the principles of leaf-spine SONiC design to be implemented in the Cisco 8000 modular system. The following figure shows a SONiC based leaf-spine network:

Each node in this leaf-spine network runs an independent instance of SONiC. The leaf and spine nodes are connected over standard Ethernet ports and support Ethernet/IP-based forwarding within the network. Standard monitoring and troubleshooting techniques such as filters, mirroring, and traps can also be employed in this network at the leaf and spine layers. This is illustrated in the figure below.

Each line card runs an instance of SONiC on the line card CPU, managing the NPUs on that line card. One instance of SONiC runs on the RP CPU, managing all the NPUs on the fabric cards. The line card SONiC instances represent the leaf nodes and the RP SONiC instance represents the spine node in a leaf-spine topology.

The out-of-band Ethernet network within the chassis provides external connectivity to manage each of the SONiC instances.

Leaf-Spine Datapath Connectivity

This is where the key difference between a leaf-spine network and the leaf-spine connectivity within a chassis comes up. As discussed above, a leaf-spine network enables Ethernet/IP based packet forwarding between them. This allows for standard monitoring and troubleshooting tools to be used on the spine node as well as on the leaf-spine links.

Traditional forwarding within a chassis is based on fabric implementation using proprietary headers between line cards and fabric NPUs. In cell-based fabrics, the packet is further split into fixed or variable sized cells and sprayed across the available fabric links. While this model allows the most optimal link utilization, it doesn’t allow standards-based monitoring and troubleshooting tools to be used to manage the intra-chassis traffic.

Cisco Silicon One ASIC has a unique ability to enable Ethernet/IP based packet forwarding within the chassis as it can be configured in either network mode or fabric mode. As a result, we use the same ASIC on the line cards and fabric cards by configuring the interfaces between the line card and fabric in fabric mode while the network-facing interfaces on the line card are configured in network mode.

This ASIC capability is used to implement the leaf-spine topology within Cisco 8000 chassis by configuring the line card – fabric links in network mode, as illustrated below.

The SONiC instances on the line cards exchange routes using per-NPU BGP instances that peer with each other. SONiC on each line card thus runs one instance of BGP per NPU on that line card, which is typically a small number (low single digits). The RP SONiC instance, on the other hand, manages a larger number of fabric NPUs. To optimize the design, the fabric NPUs are instead configured in a point-to-point cross-connect mode, providing virtual pipe connectivity between every pair of line card NPUs. This cross-connect can be implemented using VLANs or other similar techniques.

Packets across the fabric are still exchanged as Ethernet frames enabling monitoring tools such as mirroring, sFlow, etc., to be enabled on the fabric NPUs thus providing end-to-end visibility of network traffic, including the intra-chassis flows.

For the use cases that need fabric-based packet forwarding within the chassis, the line card – fabric links can be reconfigured to operate in fabric mode, allowing the same hardware to cater to a variety of use cases.

Sunday 10 May 2020

The four-step journey to securing the industrial network

Just as the digitization and increasing connectivity of business processes has enlarged the attack surface of the IT environment, so too has the digitization and increasing connectivity of industrial processes broadened the attack surface for industrial control networks. Though they share this security risk profile, the operational technology (OT) environment is very different from that of IT. This post looks at the key differences and provides a four-step approach to securing the industrial network.

In industries like utilities, manufacturing, and transportation, the operations side of the business is revenue generating. As a result, uptime is critical. While uptime is important in IT, interdependencies in the OT environment make it challenging to maintain uptime while addressing security threats. For example, you can’t simply isolate an endpoint that’s sending anomalous traffic. Because of the interdependencies of that endpoint, isolating it can have a cascading effect that brings a critical business process to a grinding halt. Or, worse, human lives may be put at risk. It’s important to understand the context of security events so that they can be addressed while maintaining uptime.

With uptime requirements in mind, securing the industrial network can feel like an insurmountable challenge. Many industrial organizations don’t have visibility into all of the devices that are on their OT networks, let alone the dependencies among them. Devices have been added over time, often by third-party contractors, and an asset inventory is either non-existent or grossly outdated. Bottom line: organizations lack visibility into the operational technology environment.

To help industrial organizations address these challenges and effectively secure the OT environment, we’ve put together a four-step journey to securing the industrial network. It’s important to note that while we call it a journey, there is no defined beginning or end. It’s an iterative process that requires continual adjustments. The most important thing is to start wherever you happen to be today.

There are many places from which to begin, and what makes a logical first step for one organization will not necessarily be the same for another. One approach is to start with gaining visibility through asset discovery. By analyzing network traffic, deep packet inspection (DPI) can identify the industrial assets connected to your network. With this visibility, you can make an informed decision on the best way to segment the network to limit the spread of an attack.

In addition to identifying assets, DPI identifies which assets are communicating, with whom or what they are communicating, and what they are communicating. With this baseline established, you can detect anomalous behavior and potential threats that may threaten process integrity. This information can then be fed into a unified security operations center (SOC), providing complete visibility to the security team.

How you deploy DPI is important. Embedding a DPI-enabled sensor on switches saves hardware costs and physical space, which can be at a premium, depending on the industry. DPI-enabled sensors allow you to inspect traffic without encountering deployment, scalability, bandwidth, or maintenance hurdles. Because switches see all network traffic, embedded sensors can provide the visibility you need to segment the network and detect threats early on. The solution can also integrate with the IT SOC while providing analytical insights into every component of the industrial control system. With DPI-enabled network switches, industrial organizations can more easily move through the four-step journey to securing the industrial network.

Saturday 9 May 2020

A Mindset Shift for Digitizing Software Development and Delivery

At Cisco, my teams—which are part of the Intent-Based Networking Group—focus on the core network layers that are used by enterprise, data center, and service provider network engineering. We develop tools and processes that digitize and automate the Cisco Software Development Lifecycle (CSDL). We have been travelling the digitization journey for over two years now and are seeing significant benefits. This post will explain why we are working diligently and creatively to digitize software development across the spectrum of Cisco solutions, some of our innovations, and where we are headed next.

Why Cisco Customers Should Care About Digitization of Software Development and Delivery


Cisco customers should consider what digitization of software development means to them. Because many of our customers are also software developers—whether they are creating applications to sell or for internal digital transformation projects—the same principles we are applying to Cisco development can be of use to a broader audience.

Digitization of development improves total customer experience by moving beyond just the technical aspects of development and thinking in terms of complete solutions that include accurate and timely documentation, implementation examples, and analytics that recommend which release is best for a particular organization’s network. Digitization of development:

◉ Leads to improvements in the quality, serviceability, and security of solutions in the field.

◉ Delivers predictive analytics to assist customers to understand, for example, the impact an upgrade, security patches, or new functionality will have on existing systems, with increased assurance about how the network will perform after changes are applied. 

◉ Automates the documentation of each handoff along the development lifecycle to improve traceability from concept and design to coding and testing.

These capabilities will be increasingly important as we continue to focus on developing solutions for software subscriptions, which shift the emphasis from long cycles creating feature-filled releases to shorter development cycles delivering new functionality and customer-requested innovations in accelerated timeframes.

Software Developers Thrive with Digital Development Workflows


For professionals who build software solutions, the digitization of software development focuses on improving productivity, consistency, and efficiency. It democratizes team-based development: everyone is a developer, including solution architects, designers, coders, and testers. Teams are configured to bring the appropriate expertise to every stage of solution development. Test developers, for example, should not only develop test plans and specific tests, but also provide functional specifications and code reviews, build test automation frameworks, and represent customer views for validating solutions at every stage of development. Case in point: when customer-specific use cases are incorporated early into the architecture and design phases, the functionality of the intended features is built into test suites as the code is being written.

A primary focus of digitization of development is creating new toolsets for measuring progress and eliminating friction points. Our home-grown Qualex (Quality Index) platform provides an automated method of measuring and interpreting quality metrics for digitized processes. The goal is to eliminate human bias by using data-driven techniques and self-learning mechanisms. In the past 2 years, Qualex has standardized most of our internal development practices and is saving the engineering organization a considerable amount of time and expense for software management.

Labs as a Service (LaaS) is another example of applying digitization to transform the development cycle while also helping to efficiently manage CAPEX. Within Cisco, LaaS is a ready-to-use environment for sharing networking hardware, spinning up virtual routers, and providing on-demand testbed provisioning. Developers can quickly and cost-effectively design and set up hardware and software environments to simulate various customer use cases, including public and private cloud implementations.

Digitization Reduces Development Workflow Frictions


A major goal of the digitization of software development is to reduce the friction points during solution development. We are accomplishing this by applying AI and machine learning against extensive data lakes of code, documentation, customer requests, bug reports, and previous test cycle results. The resulting contextual analytics will be available via a dashboard at every stage of the development process, reducing the friction of multi-phase development processes. This will make it possible for every developer to have a scorecard that tracks technical debt, security holes, serviceability, and quality. The real-time feedback increases performance and augments skillsets, leading to greater developer satisfaction.

Workflow friction points inhibit both creativity and productivity. Using analytics to pinpoint aberrations in code as it is being developed reduces the back and forth cycles of pinpointing flaws and reproducing them for remediation. Imagine a developer writing new code for a solution which includes historical code. The developer is not initially familiar with the process or the tests that the inherited code went through. With contextual analytics presenting relevant historical data, the developer can quickly come up to speed and avoid previous mistakes in the coding process. We call this defect foreshadowing. The result is cleaner code produced in less time, reduced testing cycles, and better integration of new features with existing code base.

Digitizing Development Influences Training and Hiring

Enabling a solution view of a project—rather than narrow silos of tasks—also expands creativity and enhances opportunities to learn and upskill, opening career paths. The cross-pollination of expertise makes everyone involved in solution development more knowledgeable and more responsive to changes in customer requirements. In turn everyone gains a more satisfying work experience and a chance to expand their career.

◈ Training becomes continuous learning by breaking down the silos of the development lifecycle so that individuals can work across phases and be exposed to all aspects of the development process.

◈ Automating tracking and analysis of development progress and mistakes enables teams to pinpoint areas in which people need retraining or upskilling.

◈ Enhancing the ability to hire the right talent gets a boost from digitization as data is continuously gathered and analyzed to pinpoint the skillsets that contribute the most to the successful completion of projects, thus refining the focus on the search for talent.

Join Our Journey to Transform Software Development


At Cisco we have the responsibility of carrying the massive technical debt created since the Internet was born while continuously adding new functionality for distributed data centers, multi-cloud connectivity, software-defined WANs, ubiquitous wireless connectivity, and security. To manage this workload, we are fundamentally changing how Cisco builds and tests software to develop products at web-scale speeds. These tools, which shape our work as we shape them, provide the ability to make newly-trained and veteran engineers capable of consistently producing extraordinary results.

Cisco is transforming the solution conception to development to consumption journey. We have made significant progress, but there is still much to accomplish. We invite you to join us on this exciting transformation. As a Cisco Network Engineer, you have the opportunity to create innovative solutions using transformative toolsets that make work exciting and rewarding as you help build the future of the internet. As a Cisco DevX Engineer, you can choose to focus on enhancing the evolving toolset with development analytics and hyper-efficient workflows that enable your co-developers to do their very best work. Whichever path you choose, you’ll be an integral member of an exclusive team dedicated to customer success.

Friday 8 May 2020

Simplifying the DevOps and NetOps Journey using Cisco SD-WAN Cloud Hub with Google Cloud

Cisco and Google Cloud have partnered to bridge cloud applications and enterprise networks by creating the new Cisco SD-WAN Cloud Hub with Google Cloud. This solution, built around Cisco SD-WAN technology and Google Cloud, simplifies the NetOps and DevOps journey by automating the allocation of SD-WAN network resources to meet application requirements.

Modern enterprise applications are composed of multiple services deployed across on-premises and cloud environments. NetOps and DevOps teams maintain the infrastructure that hosts, connects, and delivers these services. The goal of these teams is to optimize the application experience. But due to the complexity of the infrastructure and the dynamism of application flows, this can be challenging. Each time a new application is deployed, the NetOps team must collect the application requirements from the DevOps team and render them into the appropriate network policy.

In this post we look at how Cisco SD-WAN Cloud Hub with Google Cloud simplifies workflow for DevOps and NetOps, by automating the tasks needed to deliver a better application experience.

Here we focus on the benefits the solution brings in terms of automation. There are other benefits that we don’t cover in this article but that are also part of the solution, such as improved security and segmentation, enhanced multi-cloud operation, and the way Cisco SD-WAN Cloud Hub can enable traffic steering through the Google Cloud backbone network.

How DevOps Will Use Cisco SD-WAN Cloud Hub with Google Cloud


DevOps teams are interested in what the network can do for their application, rather than in how it is done. From their perspective, the network is a component meant to support specific application demands. To that end, DevOps will focus on properly classifying services according to certain traffic profiles, for instance Video Streaming or VoIP. These profiles are agreed on beforehand with NetOps, and allow DevOps to express the networking needs of the services. DevOps leaves it to NetOps to best configure the network to handle each profile.

In our new solution, the DevOps team uses Google Cloud Service Directory to publish the traffic profile that best represents the network traffic generated by a given application. They can use different traffic profiles for different services, as needed. The integration of Service Directory with Google Cloud Identity and Access Management (IAM) ensures that only those in the DevOps team with the appropriate permissions can modify the traffic profile for a service.

For example, DevOps and NetOps may agree that the services can be classified according to four profiles: standard, data, streaming and conferencing. The DevOps then use Service Directory to associate the following metadata to each service deployed: “traffic: standard”, “traffic: data”, “traffic: streaming”, and “traffic: conferencing”. (The metadata used here is an example to illustrate the flexibility of the solution; different teams may define different profiles.)

Let’s say that the DevOps team is deploying two services with different networking needs. They want to make sure that the traffic for each is properly handled in the SD-WAN. One service is a heavy-load database backup application, with high bandwidth requirements, while the other is a screen sharing service, not only sensitive to latency but also to packet loss. Following the metadata convention agreed with the NetOps team, the DevOps team marks these services as “traffic: data” and “traffic: conferencing”, respectively.
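The tagging convention above can be sketched in code. This is a minimal illustration, not the Service Directory API: the registry dict and service names are hypothetical stand-ins, while the four profile values come from the convention agreed with NetOps.

```python
# Hypothetical sketch of the DevOps side: tagging services with the
# traffic-profile metadata agreed with NetOps.
AGREED_PROFILES = {"standard", "data", "streaming", "conferencing"}

def tag_service(registry: dict, service: str, profile: str) -> None:
    """Attach a 'traffic: <profile>' annotation to a service entry."""
    if profile not in AGREED_PROFILES:
        raise ValueError(f"unknown traffic profile: {profile}")
    registry[service] = {"traffic": profile}

registry = {}
tag_service(registry, "db-backup", "data")             # heavy-load database backup
tag_service(registry, "screen-share", "conferencing")  # latency- and loss-sensitive
```

In the real solution, Service Directory stores this metadata and Google Cloud IAM controls who may modify it.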


How NetOps Leverages the Solution


The NetOps team, for its part, has a deep knowledge of the network and can efficiently optimize it to meet application requirements. The NetOps team uses Cisco vManage (Cisco’s centralized SD-WAN management platform) to program detailed network policies that map how each traffic profile should be rendered over the SD-WAN.

The NetOps team may decide to configure policies that specify that traffic with the profile “standard” should go through a best effort tunnel, that “data” should go through a high bandwidth tunnel, and that “streaming” and “conferencing” should go through low latency tunnels. The NetOps team can further fine tune the policies to specify that traffic for “conferencing” services should go, when possible, through highly-reliable links over the Google Cloud backbone to minimize packet loss.
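As a rough sketch, the NetOps mapping from profile to SD-WAN path described above could look like the following lookup table. The policy names are illustrative only, not vManage syntax; real policies are programmed in Cisco vManage.

```python
# Illustrative profile-to-path policy table, mirroring the example policies
# described above. "prefer_backbone" models the preference for highly-reliable
# links over the Google Cloud backbone.
POLICY = {
    "standard":     {"path": "best-effort"},
    "data":         {"path": "high-bandwidth"},
    "streaming":    {"path": "low-latency"},
    "conferencing": {"path": "low-latency", "prefer_backbone": True},
}

def render_policy(profile: str) -> dict:
    """Return the path policy for a traffic profile, defaulting to best effort."""
    return POLICY.get(profile, POLICY["standard"])
```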

Thanks to the integration with Service Directory, vManage can discover in real-time an application’s characteristics and its networking needs. vManage, by constantly monitoring Service Directory, acts whenever new relevant information becomes available. For instance, as soon as the database backup service is deployed, vManage automatically retrieves the associated metadata via Service Directory and dynamically renders the network policy defined by the NetOps team. In this example, the rendered network policies steer the database backup traffic through a high-bandwidth SD-WAN path. Likewise, a conferencing application, say the aforementioned screen-sharing service, would see its traffic steered to the Google Cloud backbone.

In this way the network automatically adapts, in real-time, not only to new applications, but also to changes in application requirements or changes in existing traffic profiles. The most relevant and effective network policies are always enforced and specifically tailored for each service. This greatly simplifies operations for both NetOps and DevOps teams, which now only need to make sure that the intended application profiles and network policies are in place. Service Directory and vManage coordinate to dynamically render the most effective network optimizations.

The integration of Cisco vManage and Google Cloud Service Directory leads to an improved application experience and more efficient use of SD-WAN resources.

Additional automatic traffic steering is also a part of the solution, thanks to the automatic aggregation of data about applications, obtained via Service Directory, correlated with vManage’s detailed, real-time view of network infrastructure. For instance, prior to this integration, minimal losses on a high-bandwidth link may not trigger special actions on regular SD-WAN operation. With this solution in place, vManage, knowing that there is traffic over this link that belongs to a “conferencing” application, might automatically steer that traffic through an alternate, non-lossy link. Similarly, knowing that “standard” applications are not sensitive to small losses, vManage can take advantage of the bandwidth just made available to automatically allocate flows for “standard” applications over the lossy link.
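The steering decision described above can be sketched as a simple function. The loss threshold and link names are hypothetical; the point is only that the profile metadata tells the controller which flows are loss-sensitive.

```python
# Hypothetical sketch: move traffic off a lossy link based on the loss
# sensitivity implied by its traffic profile.
LOSS_SENSITIVE = {"conferencing", "streaming"}

def steer(profile: str, link_loss_pct: float, loss_threshold: float = 0.5) -> str:
    """Pick a link for a flow given the current link's measured loss."""
    if link_loss_pct > loss_threshold and profile in LOSS_SENSITIVE:
        return "alternate-link"   # move loss-sensitive traffic away
    return "current-link"         # standard traffic tolerates small losses
```

Note the second effect mentioned above: once conferencing flows are steered away, "standard" flows can stay on (or be allocated to) the lossy link to use the freed bandwidth.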

To conclude, Cisco SD-WAN Cloud Hub with Google Cloud leverages Cisco SD-WAN and Google Cloud Service Directory to simplify the journey of the NetOps and the DevOps teams, automating the allocation of SD-WAN network resources to match applications’ demands, optimizing the application experience. All without disrupting the continuous flow with which applications are developed, deployed and supported.

Thursday 7 May 2020

What’s new and exciting on Cisco ACI with Red Hat Ansible Collections

Introduction


As customers embrace the DevOps model to accelerate application deployment and achieve higher efficiency in operating their data centers, the infrastructure needs to change and respond faster than ever to business needs. DevOps can help you achieve an agile operational model by improving automation, innovation, and consistency. In this blog, let us go on a quick journey of how Red Hat Ansible and Cisco ACI help you address these challenges quickly and proficiently.

Ansible and Cisco ACI – The perfect pair that enables a true DevOps model


In many customer IT environments, network operations remain entrenched in error-prone manual processes. Many of the earlier generation of engineers attracted to network operations didn’t want to be programmers; they were more interested in implementing and maintaining network policies using the CLI and monolithic tools on proprietary platforms. More recently, server-side and DevOps best practices have started influencing the networking world, with cloud administrators forced to support both compute and network resources. However, in many cases, entirely moving away from traditional network operations may not be possible, just as a 100% DevOps strategy may not be a good fit. The best strategy achieves the most with the least amount of change and energy. Automation is the natural solution here: the most unproductive and repetitive tasks are ideal candidates for it.

Red Hat Ansible has fast emerged as one of the most popular platforms to automate these day-to-day manual tasks and bring unprecedented cost savings and operational efficiency. Cisco ACI’s Application Policy Infrastructure Controller (APIC) supports a robust and open API that Ansible can seamlessly leverage. Ansible is open source, works with many different operating systems that run on Cisco Networking platforms (ACI, IOS, NX-OS, IOS-XR), and supports the range of ACI offerings.

Together, Cisco ACI and Ansible provide a perfect combination enabling customers to embrace the DevOps model and accelerate ACI Deployment, Monitoring, day-to-day management, and more.

Cisco ACI – Red Hat Ansible solution


Ansible is the only solution in the market today to address network automation challenges with unified configuration, provisioning, and application deployment, creating favorable business outcomes like accelerated DevOps and a simplified IT environment.

Ansible brings many synergies to an ACI environment with its simple automation language; powerful features such as application deployment, configuration management, and workflow orchestration; and, above all, an agentless architecture that makes the execution environment predictable and secure.

In the latest Ansible release (2.9), there are over 100 ACI and Multisite modules in Ansible core. These include modules for specific objects, like Tenants and Application Profiles, as well as a module for interacting directly with the ACI REST API. This means that a broad set of ACI functionality is available as soon as you install Ansible. After installing Ansible, only two things are required to start automating an ACI network fabric: first, an Ansible playbook, which is a set of automation instructions; and second, an inventory file, which lists the devices to be automated (in this case an APIC). Playbooks are written in YAML to define the tasks to execute against an ACI fabric. Here is a sample ACI playbook that configures a Tenant on an APIC.

---
- name: ACI Tenant Management
  hosts: aci
  connection: local
  gather_facts: no
  tasks:
  - name: CONFIGURE TENANT
    aci_tenant:
      hostname: "{{ hostname }}"
      username: admin
      password: adminpass
      validate_certs: false
      tenant: "{{ tenant_name }}"
      description: "{{ tenant_name }} created using Ansible"
      state: present
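The playbook above targets the host group aci and references the hostname and tenant_name variables; a minimal inventory sketch that supplies them could look like this (the group name matches "hosts: aci" in the playbook; the IP address and tenant name are placeholders):

```ini
; Hypothetical inventory listing the APIC to automate
[aci]
apic1

[aci:vars]
hostname=192.0.2.10
tenant_name=demo_tenant
```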

How does the Ansible-ACI integration work?


The picture below shows users creating inventory files (for the APICs we want Ansible to manage), creating playbooks (the tasks we want to run or automate on the target systems, the APICs), and leveraging the available ACI modules for the tasks to configure or automate. Ansible then pushes those configuration tasks to the target system, the APIC, via the APIC REST API over HTTPS.


The ACI Ansible modules cover a broad set of data center use cases. These include:

◉ Day 0 – Initial installation and deployment – Configuration of universal entities and policies, for example switch registration, naming, user configuration and firmware update.

◉ Day 1 – Configuration and Operation – Initial Tenant creation, along with all the Tenant child configurations, for example VRF, AP, BDs, EPGs, etc.

◉ Day 2 – Additional Configuration and Optimization – Add/Update/Remove Policies, Tenants, Applications, for example add a contract to support a new protocol in an existing EPG.
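As an illustration of the Day 2 example above (adding a contract to an existing EPG), here is a hedged sketch of a task using the aci_epg_to_contract module; the application profile, EPG, and contract names are placeholders:

```yaml
# Hypothetical Day 2 task: bind an existing contract to an EPG
- name: ADD CONTRACT TO EPG
  aci_epg_to_contract:
    hostname: "{{ hostname }}"
    username: admin
    password: adminpass
    validate_certs: false
    tenant: "{{ tenant_name }}"
    ap: web_ap
    epg: web_epg
    contract: new_protocol_contract
    contract_type: consumer
    state: present
```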

Key Benefits of ACI-Ansible solution


◉ Enables Admins to align on a unified approach to managing ACI the same way they manage other Data Center and Cloud infrastructure.

◉ ACI Ansible modules provide broad coverage for many ACI objects

◉ ACI Ansible modules are idempotent, ensuring that running a playbook repeatedly produces the same result

◉ ACI Ansible modules extend the trusted, secure interaction model of the ACI CLI and GUI.

◉ No programming skills are required to use the Ansible modules.

Wednesday 6 May 2020

Expanding the Internet for the Future: Supporting First Responders and Society at Large

As social distancing measures continue, daily necessities such as maintaining a livelihood, accessing education, or obtaining critical services are being forced online. My wife and I are seeing this unfold personally as we work from home and attempt to help our 7- and 13-year-olds navigate distance learning.

In our “new normal,” our consumption of online services is growing. Internet access is becoming increasingly vital to our health, safety, and economic and societal survival. And it’s not just us. Heroes and first responders, hospitals, schools, governments, workers, businesses, and our society at large are relying on the internet more than ever.

The more our society remains apart, the more we all need to be connected.

Service Providers Play an Important Role


With more people working from home, more children distance learning, and more parents seeking to keep their families entertained, global internet traffic has reached a new threshold. At Cisco, we’re seeing this firsthand.

Following stay-at-home mandates, traffic at major public peering exchanges increased 24% in Asia-Pacific, 20% in Europe, and 18.5% in the Americas. Here is a more specific breakdown by country:


Our service provider customers and partners have been doing a great job to manage the spikes in network traffic and load balance the shift in ‘peak’ online hours accordingly. They are vital to helping people stay safe and healthy, keeping them connected to their families, providing them access to important services, and supporting their jobs and education.

Service Provider Roundtable


Earlier this week, I hosted a virtual press and industry analyst roundtable with some leading providers of connectivity, social networking, and telehealth services.  The panel included:

◉ Jason Porter, SVP, AT&T FirstNet

◉ Kevin Hart, EVP/ Chief Product and Technology Officer, Cox Communications

◉ Dan Rabinovitsj, VP Connectivity, Facebook

◉ Andrés Irlando, SVP/President, Public Sector and Verizon Connect at Verizon

◉ Todd Leach, VP/CIO University of Texas, Galveston Medical Branch

◉ Mike King, MS, CHCIO Director University of Texas, Galveston Medical Branch

During the one-hour event, we explored how these big companies are supporting healthcare providers and first responders during this global pandemic. We also talked about critical infrastructure and how it’s driving changes in tele-health developed by the University of Texas, Galveston. Here are a few highlights from our panelists as they shared what’s happening on their networks:

Todd Leach, University of Texas Galveston Medical Branch: “We were dealing with critical patients while caring for the rest of the population. We had to scramble pretty quickly to transition over to telehealth. I can’t imagine what we would have done without having this technology.”

Kevin Hart, Cox: “Over the last two months, we’ve had a 15%-20% increase in traffic to our downstream network, and a 35%-40% increase in our upstream traffic… The peak usage window has moved from 9:00 p.m. on weekends to 2:00 – 3:00 p.m. during the weekday.”

Dan Rabinovitsj, Facebook: “People use our platform to stay connected. Messaging on all of our platforms is up 50%. In some of our markets, we’ve seen 1000% increases in video calling, video messaging—unprecedented usage.”

Jason Porter, AT&T FirstNet: “COVID was the perfect test case for our response, and we proved a nation-wide public/private network was there for first-responders the whole way.”

Andres Irlando, Verizon Connect at Verizon: “It’s the first time we activated our Verizon emergency response team across the country, everything from mobile testing sites, to pop-up hospitals, emergency operations centers, quarantine sites… you name it. By and large, the macro network has performed very well during this crisis.”

Digital Divide


As the importance of the internet shifts from huge to massive, the pandemic is shining a spotlight on the realities of the digital divide—we’re seeing large gaps between developed and developing countries, as well as urban and rural areas, for example.

Despite the growing transition to digital and remote services, 3.8 billion people around the world remain unconnected and underserved, lacking critical access to information, healthcare, and education.

At Cisco, we believe connectivity is critical to create a society and economy in which all citizens can participate and thrive.

◉ Only 35% of the population in developing countries has internet access, versus 80% in advanced economies.

◉ Bringing the internet to those currently without it would lift 500 million people out of poverty and add $6.7 trillion to the global economy.

◉ Approximately 23% of adults internationally do not know how to use the internet.

In these challenging times, the internet is more critical than ever. Businesses, governments, and institutions realize the need to invest in the networks connecting them to their customers, constituents, patients, and students. For some, that may require increased funding, government incentives, and cooperation across industries.

As we discussed on the panel, we all believe it will take the work of new and ongoing partnerships with strong commitment to make the internet more ubiquitous. As Dan at Facebook said, “No one company can do this alone.” And as Todd at UTMB put it best, “Just because it is hard, doesn’t mean we shouldn’t do it.” We are all in.

Source: cisco.com