Saturday, 20 April 2019

Change is the only constant – vPC with Fabric Peering for VXLAN EVPN

Optimize Usage of Available Interfaces, Bandwidth, Connectivity


Dual-homing for endpoints is a common requirement, and many Multi-Chassis Link Aggregation (MC-LAG) solutions were built to address this need. Within the Cisco Nexus portfolio, the virtual Port-Channel (vPC) architecture addressed this need from the very early days of NX-OS. With VXLAN, vPC was enhanced to accommodate the needs for dual-homed endpoints in network overlays.

With EVPN becoming the de-facto standard control-plane for VXLAN, additions to vPC for VXLAN BGP EVPN were required. As the problem space of End-Point Multi-Homing evolves, vPC for VXLAN BGP EVPN evolves with it to address new requirements and use-cases. The latest innovation in vPC optimizes the usage of the available interfaces, bandwidth and overall connectivity – vPC with Fabric Peering removes the need for dedicating a physical Peer Link and changes how MC-LAG is done. vPC with Fabric Peering is shipping in NX-OS 9.2(3).

Active-Active Forwarding Paths in Layer 2, Default Gateway to Endpoints


At Cisco, we continually innovate on our data center fabric technologies, iterating from traditional Spanning-Tree to virtual Port-Channel (vPC), and from Fabric Path to VXLAN.

Traditional vPC moved infrastructures past the limitations of Spanning-Tree and allows an endpoint to connect to two different physical Cisco Nexus switches using a single logical interface – a virtual Port-Channel interface. Cisco vPC offers an active-active forwarding path not only for Layer 2 but also extends this paradigm to the first-hop gateway function, providing an active-active default gateway to the endpoints. Because the two Cisco Nexus switches appear as one, Spanning-Tree does not see any loops, leaving all links active.


vPC for VXLAN BGP EVPN


When vPC was expanded to support VXLAN and VXLAN BGP EVPN environments, the Anycast VTEP was added. The Anycast VTEP is a shared logical entity, represented by a Virtual IP address, across the two vPC member switches. With this minor increment, the vPC behavior itself hasn’t changed; the Anycast VTEP simply integrates vPC into the new paradigm of routed networks and overlays. A similar adjustment had been made previously within FabricPath, where a Virtual Switch ID was used – another approach to presenting a common shared virtual entity to the network side.

While vPC was enhanced to accommodate different network architectures and protocols, the operational workflow for customers remained the same. As a result, vPC was widely adopted within the industry.

With VXLAN BGP EVPN being a combined Layer 2 and Layer 3 network, where both host and prefix routing exist, MAC, IP and prefix state information must be exchanged – in short, routing information next to MAC and ARP/ND. To relax the routing table and the synchronization between vPC members, a selective condition for routing advertisement was introduced: “advertise-pip”. With “advertise-pip”, BGP EVPN prefix routes are now advertised from the individual vPC member nodes and their Primary IP (PIP) instead of the shared Virtual IP (VIP). As a result, unnecessary routed traffic is kept off the vPC Peer Link and steered directly to the correct vPC member node.

While many enhancements for convergence and traffic optimization went into vPC for VXLAN BGP EVPN, many implicit changes brought additional configuration to accommodate the vPC Peer Link; at this point Cisco decided to change the paradigm of using a physical Peer Link.

The vPC Peer Link


The vPC Peer Link is the binding entity that pairs individual switches into a vPC domain. This link is used to synchronize the two individual switches and assists Layer 2 control-plane protocols, such as Spanning-Tree (BPDUs) or LACP, so that they appear to come from one single node. In cases where End-Points are dual-homed to both vPC member switches, the Peer Link’s sole purpose is to synchronize this state information, but in cases of single-connected End-Points, so-called Orphans, the vPC Peer Link can still potentially carry traffic.

With VXLAN BGP EVPN, the Peer Link was required to take on additional duties and provide additional signaling when Multicast-based Underlays were used. Further, the vPC Peer Link was used as a backup routing path in the case of an extended uplink failure towards the Spines, and for the per-VRF routing information exchange for orphan networks.

With all these requirements, making the vPC Peer Link resilient was a given, with Cisco’s recommendation to dedicate at least two physical interfaces to this role.

The aim to simplify topologies and the unique capability of the Cisco Nexus 9000 CloudScale ASICs led to the removal of the physical vPC Peer Link requirement. This freed at least two physical interfaces, increasing interface capacity by nearly 5%.


vPC with Fabric Peering


While changes and adjustments to an existing architecture can always be made, sometimes a more dramatic shift has to be considered. When vPC with Fabric Peering was initially discussed, the removal of the physical vPC Peer Link was the objective, but other improvements rapidly came to mind. As such, vPC with Fabric Peering follows a different forwarding paradigm while keeping the operational consistency of vPC intact. The following four sections cover the key architecture principles of vPC with Fabric Peering.

Keep existing vPC Features

As we enhanced vPC with Fabric Peering, we wanted to ensure that existing features are not affected. Special focus was placed on ensuring the availability of Border Leaf functionality with external routing peering, VXLAN OAM and Tenant Routed Multicast (TRM).

Benefits to your Network Design

Every interface has a cost, and so every gigabyte counts. By removing the need for a physical vPC Peer Link, we not only achieve architectural fidelity but also reclaim interface and optic costs while optimizing the available bandwidth.

Leveraging Leaf/Spine topologies and the respective N-way Spines, the available paths between any two Leafs become ECMP and, as such, potential candidates for vPC Fabric Peering. With all Spines now carrying both VXLAN BGP EVPN Leaf-to-Leaf (East-West) communication and vPC Fabric Peering, the overall use of provisioned bandwidth becomes more optimized. Given that all links are shared, the resiliency of the vPC Peer Link equals the resiliency of the Leaf-to-Spine connectivity. This is a significant increase compared to two physical direct links between two vPC members.

With the infrastructure between the vPC members now shared, the proper classification of vPC Peer Link traffic vs. general fabric payload has to be considered. With this in mind, vPC Fabric Peering traffic can be classified with a high DSCP marking to ensure in-time delivery.


Overview: vPC with Fabric Peering

Another important cornerstone of vPC is the Peer Keep Alive functionality. vPC with Fabric Peering keeps this important failsafe function in place but relaxes the requirement of using a separate physical link. The vPC Peer Keep Alive can now run over the Spine infrastructure in parallel to the virtual Peer Link. As an alternative, and to increase resiliency, the vPC Peer Keep Alive can still be deployed over the out-of-band management network or any other routed network of choice between the vPC member nodes.

In addition to the vPC Peer Keep Alive, tracking of the uplinks towards the Spines has been introduced to understand the topology more deterministically. The uplink tracking creates a dependency on the vPC primary function and switches the operational primary role depending on each vPC member’s availability in the fabric.

Focus on individual VTEP behavior

The primary use-case for vPC has always been dual-homed End-Points. With this approach, however, single-attached End-Points (orphans) were treated like second-class citizens, reachable only via the vPC Peer Link.

When vPC with Fabric Peering was designed, unnecessary traffic over the “virtual” Peer Link was to be avoided by all means, as was the need for per-VRF peering over the same link.

With this decision, orphan End-Points become first-class citizens, just as dual-homed End-Points are, and the exchange of routing information is done through BGP EVPN instead of per-VRF peering.


Traffic Flow Optimization for vPC and Orphan Host

When using vPC with Fabric Peering, orphan End-Points and networks connected to an individual vPC member are advertised from the VTEP’s Primary IP address (PIP); vPC with a physical Peer Link would always use the Virtual IP (VIP). With the PIP approach, the forwarding decision from and to an orphan End-Point/network is resolved in the BGP EVPN control-plane and forwarded in the VXLAN data-plane. The forwarding paradigm for these orphan End-Points/networks is the same as with an individual VTEP; the dependency on the vPC Peer Link has been removed. As an additional benefit, consistent forwarding is achieved for orphan End-Points/networks whether they connect to an individual VTEP or to a vPC domain with Fabric Peering. You can consider that a vPC member node in vPC with Fabric Peering behaves primarily as an individual VTEP, or “always-PIP”, for orphan MAC/IP or IP Prefixes.

vPC where vPC is needed

With the paradigm shift to primarily operate an individual vPC member node as a standalone VTEP, the dual-homing functionality only has to be applied to specific attachment circuits. As such, the functionality of vPC only comes into play when the vpc keyword has been configured on the attachment circuit. In the case of vPC attachment, the End-Point advertisement is originated with the Virtual IP address (VIP) of the Anycast VTEP. Leveraging this shared VIP, routed redundancy from the fabric side is achieved with extremely fast ECMP failover times.

In traditional vPC, the vPC Peer Link was also used during failure of an End-Point’s dual attachment. As the advertisement of a previously dual-attached End-Point doesn’t change from VIP to PIP during such failures, a Peer Link-equivalent function is still required. If traffic follows the VIP and gets hashed towards the wrong vPC member node – the one with the failed link – that vPC member node bounces the traffic to the other vPC member.


Traffic redirected in vPC failure cases

vPC with Fabric Peering is shipping as of NX-OS 9.2(3)

Benefits


These enhancements have been delivered without impacting existing vPC features and functionality, and in lock-step with the same scale and sub-second convergence that existing vPC deployments achieve.

While adding new features and functions is simple, having an easy migration path is fundamental to deployment. Knowing this, the impact considerations for upgrades, side-grades or migrations remain paramount – and changing from a vPC Peer Link to vPC Fabric Peering can be performed easily.

vPC with Fabric Peering was primarily designed for VXLAN BGP EVPN networks and is shipping in NX-OS 9.2(3). Even so, this architecture can be equally applied to most vPC environments, as long as a routed Leaf/Spine topology exists.


Friday, 19 April 2019

Using Tetration for Application Security and Policy Enforcement

Protecting assets requires a Defense in Depth approach


Protecting assets within the enterprise requires the network manager to adopt automated methods of implementing policy on endpoints. A defense in depth approach, applying a consistent policy to the traditional firewall as well as policy enforcement on the host, takes a systemic view of the network.

Value added resellers are increasingly helping customers deploy solutions comprised of vendor hardware and software, support for open source software, and developing code which integrates these components. A great example of this, and the subject of this blog, is World Wide Technology making its integration with Ansible Tower and Cisco Tetration available as open source through Cisco DevNet Code Exchange.

I started my career in programming and analysis and systems administration, transitioned to network engineering, and now the projects I find most interesting require a combination of all those skills. Network engineers must view the network in a much broader scope, a software system that generates telemetry, analyzes it, and uses automation to implement policy to secure applications and endpoints.

The theme for Networkers/CiscoLive 1995 was ‘Any to Any.’ However, we don’t live in that world today. Then, the focus was to enable communication between nodes. Today, the focus is to enable network segmentation, restricting communication between nodes for legitimate business purposes. Today we ask, “What’s on my network?” “What is it doing?” and “Should it be?”

A Zero Trust security model, in the data center and on the endpoint, is a common topic for discussion for our customers. The traditional perimeter security model is less effective as cyber security attacks simply bypass firewalls and attack internal assets with phishing exploits. The Tetration Network Policy Publisher is one means to automate policy creation.

Tetration Network Policy Publisher


Introduced in April 2018, the Tetration Network Policy Publisher is an advanced feature enabling third parties to subscribe to the same policy applied to servers by the Tetration enforcement agent.

The Tetration cluster runs a Kafka instance and publishes the enforcement policies to a message bus. Unlike other message bus technologies, Kafka clients explicitly ‘ask’ for messages by subscribing to a Kafka topic. Access to the policy is provided by downloading the Kafka client certificates from the Tetration user interface.

The enforcement policy messages are encoded as Google Protocol Buffers (protobuf), an efficient and extensible format for exchanging structured data between hosts. While more complex for the programmer, protobufs are more efficient on the wire and in CPU than JSON or XML.

This feature enables ‘defense in depth’. The enforcement policy can be converted to the appropriate network appliance configuration and implemented on firewalls, router access lists, and security-enabled load balancers. Ansible is one means to automate pushing policy to network assets.
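
To make the consumption model concrete, here is a minimal, illustrative Python sketch of a Kafka subscriber using the kafka-python library and the client certificates downloaded from the Tetration UI. The topic name, certificate file names, broker address, and protobuf message class below are placeholders for illustration, not the product’s actual identifiers:

# Minimal sketch: subscribe to the policy topic and decode protobuf messages.
# Assumes kafka-python, client certificates from the Tetration UI, and a module
# compiled from the published .proto definitions. Names are placeholders.
from kafka import KafkaConsumer
import tetration_network_policy_pb2 as tnp_pb2  # hypothetical compiled protobuf module

consumer = KafkaConsumer(
    "Tnp-1",                                    # placeholder topic name for the policy stream
    bootstrap_servers=["tetration-cluster.example.com:9093"],
    security_protocol="SSL",
    ssl_cafile="KafkaCA.cert",                  # certificate bundle downloaded from the UI
    ssl_certfile="KafkaConsumerCA.cert",
    ssl_keyfile="KafkaConsumerPrivateKey.key",
)

for record in consumer:
    # Each record value is a protobuf-encoded policy update.
    update = tnp_pb2.KafkaUpdate()              # hypothetical message class name
    update.ParseFromString(record.value)
    print(update)

In practice, the ansible-tetration module described below wraps this subscribe-and-decode step so playbook authors never have to touch Kafka directly.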

Ansible Tower by Red Hat


Ansible Tower is a web-based solution which makes Ansible Engine easier to use across teams within an IT enterprise. Tower includes a REST API and CLI for ease of integration with existing tools and processes. Ansible Engine is built on the open source Ansible project. Red Hat licenses and provides support services for Ansible Tower and Engine.

Ansible has evolved into the de facto automation solution for configuration management across a wide range of compute, storage, network, firewall and load balancer resources in the cloud and on-premises.

Ansible interface to Cisco Tetration Network Policy Publisher


This project provides an abstraction layer between the Tetration Network Policy Publisher and Ansible, allowing the network administrator to push enforcement policy to ‘all the things’, without directly accessing the Tetration Kafka message bus. The code is open source and is publicly available through Cisco DevNet Code Exchange, a curated set of repos that make it easy to discover and use code related to Cisco technologies. The project repository, ansible-tetration, includes an Ansible module that retrieves enforcement policy from Tetration and exposes it as variables to an Ansible playbook. Subsequent tasks within the playbook can apply the policy to the configuration of network devices.

This functionality provides value to network operations as policy is published periodically, in an easily consumed format. NetDevOps engineers can focus on implementing policy without the need to understand the complexity of creating the policy.

Figure 1 illustrates the components of this solution.


Figure 1- Ansible interface to Cisco Tetration Network Policy Publisher

Links to additional resources of this project are available on Code Exchange.

Simplify the Complex


Within IT operations, forming NetDevOps teams to integrate disparate systems is the model for high-performance organizations. While encouraging each team member to have a general level of understanding of the organization’s goals, it is also important to include specialists in a technology, to leverage their deep experience in one area and increase the velocity of the team.

The project goal is to expose policy generated by Tetration in a simple format a network operator can use to enable defense in depth within their datacenter.

Figure 2 illustrates how the Python module tetration_network_policy can be included in an Ansible playbook to retrieve a security policy and register the results as a variable.


Figure 2- Tetration Network Policy task

In this example, the variable tnp contains the Tetration Network Policy (inventory_filters, intents, tenant name and catch_all policy) for the requested Kafka topic. Subsequent tasks can reference these values and apply the security policy to network devices.

Figure 3 illustrates the contents of variable tnp, which will be used to generate access control lists on an ASA firewall.


Figure 3- Sample JSON formatted network policy

By exposing the policy to an Ansible playbook, the data can be easily reformatted to a traditional CLI configuration and applied to a firewall, load balancer or Cisco ACI fabric.
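
As a rough illustration of that reformatting step, the sketch below turns a simplified, hypothetical list of intents (loosely modeled on the fields shown in Figure 3) into ASA access-list commands. The real variable structure comes from the tetration_network_policy module and will differ in detail:

# Illustrative only: convert a simplified, hypothetical intent structure into
# ASA access-list lines. The keys below are assumptions for the example.
def intents_to_asa_acl(intents, acl_name="TETRATION_POLICY"):
    lines = []
    for intent in intents:
        action = "permit" if intent.get("action", "ALLOW") == "ALLOW" else "deny"
        proto = intent.get("protocol", "tcp")
        src = intent.get("src", "any")
        dst = intent.get("dst", "any")
        port = intent.get("port")
        line = f"access-list {acl_name} extended {action} {proto} {src} {dst}"
        if port:
            line += f" eq {port}"
        lines.append(line)
    return lines

sample = [{"action": "ALLOW", "protocol": "tcp", "src": "any",
           "dst": "host 10.1.1.10", "port": 161}]
for cli_line in intents_to_asa_acl(sample):
    # e.g. access-list TETRATION_POLICY extended permit tcp any host 10.1.1.10 eq 161
    print(cli_line)

The next section shows the same outcome driven through the asa_config Ansible module rather than hand-built command strings.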

Use Case: Apply Policy to ASA Firewalls


Now that the network policy has been retrieved and loaded into a variable within the playbook, it can be used to configure a network device. In this example, the target device is a Cisco ASA firewall.

By invoking the Ansible module asa_config, the network policy is used to create the appropriate CLI commands and apply them to one or more firewalls defined in the inventory.

Figure 4 illustrates the playbook syntax.


Figure 4 – Ansible module asa_config

Following the execution of the playbook, the sample JSON formatted network policy is present in the firewall configuration, as shown in Figure 5.


Figure 5 – ASA configuration

Note: Because SNMP is a well-known port, the ASA CLI substitutes the keyword snmp for port 161 in the configuration.

Thursday, 18 April 2019

Serverless in the Datacenter: FaaS on K8s at DevNet Create

If you’ve ever wanted to learn the fundamentals of serverless and get your hands dirty building a LAMP-like application, DevNet Create has a session for you. On both April 24 and April 25 from 11:45a to 12:30p, I’ll be running an exercise entitled “FONK: FaaS on K8s working examples.” During our 45 minutes together, we’ll build a serverless version of the Guestbook application, which is the “Hello World” of the Kubernetes (K8s) community. Only instead of using containers directly the way that the original Guestbook does, we’ll use a Function-as-a-Service (FaaS) runtime, an Object Storage service, and a NoSQL server all running on top of K8s.

What is FaaS on K8s?


Developers need a platform that provides them with compute resources in digestible bites when designing applications. Simply put, a Function-as-a-Service (FaaS) runtime (such as AWS Lambda or Azure Functions) is to a serverless application architecture what a container runtime is to a microservices architecture. A container runtime takes care of things like autoscaling, rolling updates, and name resolution of the different services running within it. A FaaS runtime hides the details of the underlying container runtime that most of them use under the hood and provides developers with a cleaner experience that lets them focus on their own business logic.

During the session, we’ll discuss the six most popular FaaS runtimes that run on top of K8s so that you can run serverless applications in your own datacenter instead of in the public cloud. The featured labs will let you get your hands on two of them: OpenFaaS and OpenWhisk.

The Environment We’ll Be Using


I’ll be spending my evening on April 23 using a DevNet Sandbox to set up the following environment for you:


Each student will get a K8s cluster pre-configured with not only OpenWhisk and OpenFaaS runtimes but also an Object Storage service via Minio and a NoSQL server via MongoDB. An additional VM will be provided and preloaded with all the command line tools we’ll need to build an application as well. What does a web application look like when using this FONK design pattern?

Our End Goal: The FONK Guestbook


Instead of the traditional K8s Guestbook that uses three services and six persistent containers:


we’ll instead use the FONK design pattern to build its serverless equivalent:


Minio will host our static HTML and Javascript files. Upon being loaded into a browser, the Javascript will make REST API calls to the API gateway provided by our FaaS runtime to launch functions on demand. When loaded into memory as needed, those functions will perform read and write operations from and to our MongoDB. The Javascript will then alter our HTML in the browser to reflect the changes to our user.
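
To give a flavor of what one of those functions might look like, here is a minimal sketch of a Guestbook function written as an OpenWhisk-style Python action using pymongo. The MongoDB service name, environment variable, database, and collection names are assumptions about the lab environment rather than fixed parts of the FONK pattern:

# Minimal sketch of a Guestbook function as an OpenWhisk-style Python action.
# The MongoDB URL, database, and collection names are assumptions for this example.
import os
from pymongo import MongoClient

MONGO_URL = os.environ.get("MONGO_URL",
                           "mongodb://mongodb.default.svc.cluster.local:27017")

def main(params):
    client = MongoClient(MONGO_URL)
    entries = client["guestbook"]["entries"]
    if params.get("message"):
        # Write path: persist a new guestbook entry.
        entries.insert_one({"message": params["message"]})
    # Read path: return all entries so the JavaScript front end can render them.
    return {"entries": [e["message"]
                        for e in entries.find({}, {"_id": 0, "message": 1})]}

An OpenFaaS version would look almost identical; only the handler signature and packaging change.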

Wednesday, 17 April 2019

Cryptographic Visibility: Quality Encryption at Your Fingertips


There is only one thing more important than the right answer, and that is asking the right question. I have spent my entire career doing security analytics and all of the useful analytics I have ever delivered to the market have been grounded in asking the right questions. With that in mind, I’d like to start this blog by asking you this:

How much of your digital business is transmitted in the clear versus encrypted and how would you assess the quality of that encryption?


I’ve spoken on Encrypted Traffic Analytics where most of the storyline is about detecting malicious traffic without having to perform decryption. However, that same telemetry used for malware detection can also be leveraged to answer those questions I posed to you above. With the release of Stealthwatch 7.0 and the new Cryptographic Audit App, I’m excited to take this opportunity to talk about features that can provide you near real-time visibility on the state of your network encryption.

Twenty years ago, you needed to be a network expert to bring up a cryptographic tunnel between endpoints or between networks. Today, most people don’t even know they are safely transmitting over strong cryptographic tunnels. Every time you type in https:// or the browser defaults to this, you make use of the Transport Layer Security (TLS) protocol, which makes your conversation safe and secure over even the most hostile networks. The ability to do something that once required deep knowledge and a thorough understanding of network architectures is now something most of us do on a daily basis without giving it so much as a second thought. It is truly amazing to me how far we have come since the birth of the Internet.

When network telemetry was first invented, the analytical outcomes were focused on questions like “Can host-A reach server-B?” Availability and network performance were the key objectives of the time, so telemetry like NetFlow and IPFIX provided the appropriate metadata to achieve these goals. Fast forward to today, where the digital business would also like to know “Is the connection between host-A and server-B secure?” To achieve this, Cisco innovated and developed an extension to NetFlow known as Encrypted Traffic Analytics telemetry, which Stealthwatch can analyze to give you the cryptographic visibility you need to govern your network security policies.



While it might not be obvious at first glance, Cisco routers and switches are delivering telemetry, shown in the Sample Cryptographic Audit above, reporting critical metadata like the version of Transport Layer Security (TLS) being used, the cipher suites, and the key lengths – all of which show what is happening across your digital business. For those of you who fall under PCI compliance, you now have a way to provide evidence that your PCI assets are running TLS version 1.1 or higher, as running TLS 1.0 has been a violation since June 2018. Better still, after you clean things up, you can also set up alerts anytime something within PCI scope violates this policy – how awesome is that!

Let’s face it, TLS is the new TCP (Transmission Control Protocol). Having been at this for more than 25 years, I celebrate the fact that more than 90% of all network traffic these days is safely secured by cryptography. New versions of TLS get better, faster, and stronger, which is awesome. While we throw a party for this achievement, let’s also make sure we have systems in place to help us be vigilant and verify that what we intend to have happen is actually happening.

So, when it comes to the question of “How much of your digital business is transmitted in the clear versus encrypted and how would you assess the quality of that encryption,” Stealthwatch can provide you with an answer. With Stealthwatch 7.0, when your network tells you all this useful metadata, you can have a minute-by-minute status on your network visibility along with telemetry reports on critical metadata which now includes cryptographic details!

Tuesday, 16 April 2019

Automate Device Provisioning with Cisco IOS XE Zero Touch Provisioning

When new hardware is ordered and it arrives on site, it’s an exciting time. New hardware! New software! … But new challenges too! Yet the age-old challenge of getting new devices onto the network doesn’t need to be one of them. Sitting in the lab pre-provisioning devices is no longer required if you’re using Cisco IOS XE, because of features like Cisco Network Plug-n-Play (PnP) and Zero Touch Provisioning (ZTP). PnP is the premium solution made possible with Cisco DNA Center, while Zero Touch Provisioning (ZTP) is for do-it-yourself customers who don’t mind investing more time in configuring and maintaining the infrastructure required to bootstrap devices. IOS XE runs on enterprise hardware and software platforms that include the Catalyst 9000 series of switches and wireless LAN controllers, and the ISR 1000 and 4000 series routers.

DHCP Configuration to enable Zero Touch Provisioning


ZTP works when the DHCP client on the IOS XE device gets a DHCP Offer that includes option 67. This option, also called the “bootfile name,” tells the device which file to load and from where it’s available. Let’s look at a few examples of how we can configure this on either the ISC DHCP Server or the Cisco IOS DHCP Server.

The configuration example for the Linux ISC DHCP dhcpd.conf is below:

subnet 192.168.69.0 netmask 255.255.255.0 {
  option bootfile-name "http://192.168.69.1/ztp.py";
}

For the Cisco IOS DHCP Server:

ip dhcp pool ZTP_DEMO
 network 192.168.69.0 255.255.255.0
 option 67 ascii http://192.168.69.1/ztp.py

In these examples, option 67 points to an HTTP URL and a Python script. When the device sees the Python file in option 67, it downloads and executes it during bootup – when no other configuration is present and without any manual user intervention. Pretty neat!

Next, let’s take a look at a packet capture, which reveals a few details worth noting.

You might notice that the Client Identifier (option 61) is not consistent between DHCP Discovers. This is not a mistake, or multiple devices, but the intended behavior: since IOS XE 16.8, the Client Identifier alternates between the device serial number and the MAC address of the management port. When the DHCP server issues the DHCP Offer in packet 8, the bootfile-name (option 67) is set to http://192.168.69.1/ztp.py – this is the Python script that will be downloaded and run once the device is ready.


Once the device is ready?


Yes, a few things need to happen before the Python script can run. Specifically, the Guest Shell container must start up. This is a Linux container that runs within the IOS XE platform and has limited access to the IOS XE subsystem. This means that if you decide to run a crypto-miner within Guest Shell (not recommended!), the IOS XE device will still handle the routing and switching of packets without any problems, as the resource allocations for Guest Shell are separate from those responsible for the core capabilities of the device.

The power of the CLI, now with the flexibility of Python


Guest Shell does have some nice features, specifically the Python API. This API allows the Guest Shell to send commands to the IOS XE operating system. What kind of commands? Show commands are supported with the Python cli module’s cli.cli function. Show commands are great for displaying information about the device, but are limited when it comes to making device configuration changes. To really harness the power of the Python API, the cli.execute and cli.configure functions provide a great deal of flexibility when it comes to device configuration. We can interact with the device through Python using the traditional “configure terminal” (“conf t”) interface and even send exec commands as needed. All the power of the CLI, now with the flexibility of Python.

So our container has started, and it knows which file to access. Let’s look at an example ztp.py script next.

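The original script isn’t reproduced here, but a minimal sketch of such a ztp.py, built on the Guest Shell cli module, might look like the following; the addressing, credentials and AAA details are placeholders for illustration:

# Minimal ztp.py sketch using the Guest Shell Python API (the cli module).
# Addresses, credentials and AAA details are placeholders for illustration.
from cli import configure, cli

print('*** Starting ZTP script ***')

# Configure the Vlan1 interface IP, basic AAA, a local user and NETCONF-YANG.
configure(['interface Vlan1',
           'ip address 192.168.69.10 255.255.255.0',
           'no shutdown',
           'aaa new-model',
           'aaa authentication login default local',
           'aaa authorization exec default local',
           'username admin privilege 15 secret ChangeMe123',
           'netconf-yang'])

# Print confirmation so a connected console cable shows the result.
print(cli('show ip interface brief | include Vlan1'))
print('*** ZTP script complete ***')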

This simple Python script sets the Vlan1 interface IP, enables some AAA settings, turns on the NETCONF-YANG programmatic interface, and creates a user. Afterwards, it prints the output to the screen, so that if we have a console cable connected we get visual confirmation that the device has been configured successfully.

This device has now been automatically configured using ZTP. It received an IP address on the management port from DHCP, fetched the Python configuration file from a webserver via DHCP option 67, started up the Guest Shell container and configured remote access. We are now free to carry on with our day-to-day tasks as the device is online and in a known, managed state.

Saturday, 13 April 2019

ACI Anywhere Now Extending From On-Premises to AWS Cloud

Cisco is pleased to announce availability of a brand-new solution, Cisco Cloud ACI on AWS. This solution automates management of end-to-end connectivity and enforcement of consistent network security policies for applications running in on-prem data centers and AWS public cloud regions.

Decentralized Data Means Cloud Growth


Enterprises, large and small, are expanding to the cloud to build applications that engage their customers. And their developers and IT teams must manage their private and public cloud environments.

IDC expects spending on cloud IT infrastructure to grow at a five-year compound annual growth rate (CAGR) of 11.2%, reaching $82.9 billion in 2022, and accounting for 56.0% of total IT infrastructure spend. Public cloud data centers will account for 66.0% of this amount, growing at an 11.3% CAGR. Spending on private cloud infrastructure will grow at a CAGR of 12.0%*.

Due to this massive shift in the decentralization of data, increasing cloud acceptance, and move to hybrid environments, businesses need a network that can empower the data center to go securely anywhere. Innovation should only be limited by imagination, not technology. Cisco’s ACI Anywhere with Cloud ACI is the bridge.

Multicloud Doesn’t Need to Mean Complexity


As the adoption of multicloud strategies grow, the industry is demanding consistent policy, security, and visibility everywhere, with a simplified operating model. IT organizations are challenged to maintain governance, compliance, agility, flexibility, and TCO optimization for legacy, virtualized, and next-generation applications across multiple on-premises sites and clouds.

Highly complex operational models today are the result of diverse and disjointed visibility and troubleshooting capabilities, with no correlation across different cloud service providers. There are multiple panes of glass to configure, manage, monitor, and operate these multicloud instances. And there are inconsistent segmentation capabilities today across hybrid instances that pose security, compliance and governance challenges.

Cisco Cloud ACI Extends ACI Capabilities from On-premises to Public Cloud


Cisco ACI delivers control and visibility based on application network policy. With the next phase, Cisco ACI extends this policy-driven automation from on-premises to public cloud instances.

Cisco Cloud ACI runs natively in public clouds and delivers the following key capabilities:

Automated and secure hybrid connectivity through unified management. Through a single pane of glass (ACI Multi-Site Orchestrator), users can configure inter-site connectivity, define policies, and monitor the health of the network infrastructure across hybrid environments. Inter-site connectivity includes (i) an underlay network for IP reachability (IPsec VPN over the Internet, or through AWS Direct Connect*) and (ii) an overlay network between the on-premises and cloud sites that runs BGP EVPN as its control plane and uses VXLAN encapsulation and tunneling as its data plane.


Enable consistent security posture, governance, and compliance through a common policy abstraction. Cisco ACI on AWS uses group-based network and security policy models. Cloud ACI translates ACI policies into cloud-native policy constructs: the logical network constructs of Cisco ACI (tenants, VRFs, endpoint groups (EPGs), contracts, etc.) translate into AWS networking constructs (user accounts, Virtual Private Clouds (VPCs), security groups, security group rules, network access-control lists, etc.), as sketched after this list. This enables consistent network segmentation, access control, and isolation across hybrid deployments.

Enable elasticity for resources across the on-premises data center and public cloud.

Facilitate workload migration across hybrid environments. Enable secure workload mobility and preserve the application policies, network segmentation, and identity of the workload (IP mobility*).

Enable business continuity and disaster recovery. Allow organizations to maintain or quickly resume mission-critical applications using a back-up and recovery site in the public cloud.
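
To make the policy-translation idea more tangible, the sketch below shows – purely as an illustration, not as Cloud ACI’s actual implementation – how an EPG pair and a contract roughly map to AWS security groups and rules via boto3. The names, region, VPC ID and port are assumptions:

# Illustration only (not Cloud ACI's implementation): an EPG/contract pair mapped
# to AWS security groups using boto3. Names, region, VPC ID and port are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
vpc_id = "vpc-0123456789abcdef0"   # placeholder VPC created for the cloud site

web_sg = ec2.create_security_group(
    GroupName="epg-web", Description="Cloud ACI EPG: web", VpcId=vpc_id)["GroupId"]
app_sg = ec2.create_security_group(
    GroupName="epg-app", Description="Cloud ACI EPG: app", VpcId=vpc_id)["GroupId"]

# A contract "web may talk to app on TCP/8080" becomes an ingress rule on the
# app security group that references the web security group as its source.
ec2.authorize_security_group_ingress(
    GroupId=app_sg,
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 8080,
        "ToPort": 8080,
        "UserIdGroupPairs": [{"GroupId": web_sg}],
    }],
)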

What makes Cisco’s Cloud ACI different and relevant for you


Cloud ACI provides a common policy abstraction and consumes AWS public APIs to deliver policy consistency and segmentation. As such, Cloud ACI is not confined to bare-metal instances in AWS and does not require deployment of agents in cloud workloads to achieve segmentation.

With Cisco ACI, customers can carry all their network and security policies across data centers, colocations, and clouds. Cisco ACI automates cross-domain service chaining of application traffic across physical and virtual L4-L7 devices to scale, and seamlessly integrates bare-metal servers, virtual machines, and containers under a single policy framework.


Cisco ACI also has the industry’s broadest tech-partner ecosystem and integrates with a variety of solutions ranging from Cisco AppDynamics and CloudCenter to F5, ServiceNow, Splunk, SevOne, and Datadog. Customers can leverage widely adopted tools such as Terraform and Ansible to achieve end-to-end workflow-based automation. AWS customers can tap into rich cross-silo insights through ACI integrations with AWS technologies like Amazon CloudWatch* and Amazon Simple Notification Service (Amazon SNS)* to fine-tune the network for better throughput, latency, path selection, security and cost optimization.

Have ACI Anywhere with Cloud ACI on AWS


As the industry’s most deployed, open SDN platform, Cisco delivers advanced capabilities on AWS and simplifies multicloud deployments with Cisco Cloud ACI. With the Cloud ACI architecture, customers and analysts see the benefit of seamlessly layering in policy consistency, operational simplicity and the flexibility to leverage services offered by public clouds.

“ESG Research validates that companies are increasingly adopting a hybrid cloud approach to deliver the best service for their customers. In fact, many are adopting a Multicloud policy” says Bob Laliberte, Practice Director and Senior Analyst with the Enterprise Strategy Group. “However, these distributed compute environments create significant management complexity. Cisco ACI Anywhere, and more specifically, Cloud ACI on AWS is helping to consolidate and simplify management across the on-premises data center and the popular AWS cloud environment, something that we expect will be well received by all market segments.”

Thursday, 11 April 2019

Simplifying Container Orchestration with Cisco Hybrid Solution for Kubernetes on AWS

For organizations that are adopting DevOps practices and modern cloud capabilities to accelerate innovation and gain competitive advantage, one of the biggest challenges is maintaining common and consistent environments through an application’s lifecycle, from development through to deployment. Containers solved the application portability problem by packaging all the necessary dependencies into discrete images, and Kubernetes has emerged as the de facto standard for how those containers are orchestrated and deployed.

By adopting containers and Kubernetes, IT and Line of Business users can focus their efforts on developing applications, rather than infrastructure and ‘plumbing’. Because Kubernetes is available everywhere, one can choose the best place to run an application based on business needs. For some applications, the scale and reach of the public cloud, along with its huge number of services available, will be the determining factor. For others, data locality, security or other concerns dictate an on-premises deployment.

Current solutions can be complex, requiring organizations to work across either isolated or separate environments and forcing teams to “glue” all the parts together themselves, at the expense of time and money. This can result in less choice by forcing organizations to choose between on-premises and public clouds or being limited by “all-or-nothing” stacks.

To help our customers with this challenge, Cisco announced today our collaboration with AWS to create the Cisco Hybrid Solution for Kubernetes on AWS. The new solution combines Cisco, AWS and Open Source technologies to simplify complexity and eliminate challenges for customers turning to Kubernetes to enable deploying applications across on-premises and the AWS cloud in a secure, consistent manner. It provides a tested, validated and simple solution that delivers consistent Kubernetes clusters both on premises and in the cloud, leveraging the best attributes of each. This reduces the burden on different teams with respect to people, processes and skill sets, accelerating the application deployment cycle and resulting in faster innovation. Customers can extend on-premises capabilities and resources to AWS cloud as well as utilize services and resources from AWS cloud on-premises.

Solution Overview


The core component of the Cisco Hybrid Solution for Kubernetes on AWS is a unique integration between Cisco Container Platform (CCP) and Amazon Elastic Container Service for Kubernetes (EKS), so that through the single CCP management UI the customer can provision clusters both on-premises and on EKS in the cloud. CCP uses AWS IAM authentication to create the VPC, instructs EKS to create a new cluster, and then configures the worker nodes in that cluster.


With the Cisco Hybrid Solution for Kubernetes on AWS, customers use the CCP UI to launch Kubernetes clusters in AWS in addition to on-premises environments. They simply declare their Kubernetes cluster specification and reference the Cisco-managed operating system images for the worker node images to deploy clusters in either environment. AWS Identity and Access Management (IAM) is integrated as a common authentication mechanism, so that the cluster administrator is free to apply the same role-based access control (RBAC) policies across both environments. Both environments are integrated with Amazon Elastic Container Registry (ECR), providing a secure, single repository for all the container images. A standard set of open source monitoring and logging tools based on Prometheus and the ElasticSearch/FluentD/Kibana (EFK) stack is deployed to the clusters to provide consistent logging and metrics. Finally, Cisco’s site-to-site VPN solutions, such as the CSR 1000v, are leveraged to provide a range of secure connectivity options between the cloud-hosted and on-premises services.

Cisco offers a single point of contact for support across all the components of the solution (including AWS components – EKS, IAM and ECR) – as opposed to having to seek support for each component separately from different vendors.

Using Cisco Container Platform to Provision Kubernetes Clusters in Amazon EKS and on-premises


To see what this looks like in practice, let’s walk through how the administrator would create an EKS cluster using the Cisco Container Platform (CCP) dashboard.

Provisioning an EKS cluster is as simple as a few button clicks. You first define AWS as your infrastructure provider. This includes a provider name and AWS account credentials.

Note: The AWS account credentials specified here will be the AWS IAM identity that has privileges to manage the EKS cluster.


Next, you specify basic information about your Amazon EKS cluster. This includes the AWS region you want to deploy the EKS cluster in, an optional additional IAM user or role that you want to allow to manage the EKS cluster, a cluster name and the Kubernetes version for the cluster.

Finally, you configure information about the EKS worker nodes. This includes the instance types, machine image, number of worker nodes and public ssh keys.


And that’s it! Behind the scenes, CCP uses the Amazon APIs to provision the following resources (a rough sketch of the equivalent API calls follows the list):

◈ A new VPC (including subnets, security groups, route tables, etc.) in your account in accordance with AWS best practices, with secure private and public subnets as recommended by Cisco for VPN interconnection
◈ A service role for EKS
◈ A node instance profile for the EKS worker nodes
◈ An EKS cluster
◈ An autoscaling group with EKS worker nodes
◈ A configMap on your cluster that allows the worker nodes to join the master
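
For a sense of what this automation corresponds to at the API level, here is a rough boto3 sketch of the EKS cluster-creation step. It is not CCP’s actual code, and the region, role ARN, subnet IDs, security group and names are placeholders:

# Illustration of the kind of AWS API call behind the EKS creation step (not
# CCP's actual code). Region, ARNs, subnet IDs, and names are placeholders.
import boto3

eks = boto3.client("eks", region_name="us-west-2")

eks.create_cluster(
    name="ccp-eks-demo",
    version="1.10",
    roleArn="arn:aws:iam::111111111111:role/ccp-eks-service-role",  # EKS service role
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],         # subnets in the new VPC
        "securityGroupIds": ["sg-0123456789abcdef0"],
    },
)

# Wait until the control plane is active before creating the worker-node
# autoscaling group and the configMap that lets the nodes join the cluster.
eks.get_waiter("cluster_active").wait(name="ccp-eks-demo")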

Once the cluster is deployed, you can download a pre-generated Kubernetes cluster config file (~/.kube/config). CCP leverages the open source aws-iam-authenticator kubectl plugin, which uses credentials from your local ~/.aws/credentials file to authenticate an AWS IAM user with the EKS cluster.

For on-premises Kubernetes clusters deployed and managed by CCP, the solution offers an integrated experience with Amazon Cloud. As part of the integration with AWS, you can now select the “enable AWS IAM” option, which will install the AWS IAM authenticator components in the newly created on-premises Kubernetes cluster. This allows you to use a single set of AWS IAM credentials to access Kubernetes clusters both on-premises as well as in EKS.

With clusters provisioned in cloud and on-premises environments, let’s take a deeper look at each of the AWS integrations in Cisco Hybrid Solution for Kubernetes on AWS.

Common IAM Identity for Authentication with a common RBAC policy for Authorization


CCP leverages the open source AWS IAM authenticator to enable a common AWS IAM user/role to authenticate with clusters in both cloud and on-premises environments. Once the user/role authenticates with the clusters, a configurable common RBAC policy defines the specific permissions that the user/role is authorized to perform within the respective clusters. As a result, you simply have to switch context using the common kubectl CLI tool to access either environment.

By default, the AWS credentials specified at the time of Amazon EKS cluster creation are mapped to the Kubernetes ‘cluster-admin’ ClusterRole (via the “system:masters” group ClusterRoleBinding). This IAM identity has administrative control of the EKS cluster. As noted before, you can optionally specify an additional AWS IAM role or IAM user as an Amazon Resource Name (ARN). When you specify this, CCP:

1) Maps an additional associated role in the EKS cluster configMap, as illustrated below:


2)  Adds the associated role to the kube config so that the AWS IAM authenticator can use that role to authenticate with the EKS cluster as shown below:


For the on-premises cluster, you can enable the AWS IAM integration to authenticate with the cluster using the same IAM identity. You do this by specifying the ARN of an AWS IAM user during the on-premises cluster creation process. CCP similarly maps this user to the Kubernetes ‘cluster-admin’ ClusterRole in the on-premises cluster’s configMap. It also updates the on-premises cluster’s kubeconfig, which in turn enables the AWS IAM authenticator client to authenticate with the on-premises cluster using the same IAM identity.

With IAM configured as described above, it is then possible to use a common RBAC policy, applied to Kubernetes clusters either in EKS or on-premises, to control access to resources.

Common Amazon Elastic Container Registry (ECR)


CCP integrates with ECR, providing a secure, single repository for all the container images.

For Amazon EKS worker nodes, CCP automatically provisions an instance-role that has permissions to read/write from an ECR repository.

Since on-premises nodes have no such role, an additional step is necessary – the credentials must be stored in a Kubernetes secret which is then referenced by the pod manifest (see below). A script such as the following will do that for you (replace the items in [] as appropriate).


This script fetches an authorization token from AWS and stores it in a Kubernetes secret which is read during the pod deployment. Note that it is necessary to periodically refresh this token. By default, the token expires after 12 hours.
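
The original script isn’t shown here, but a rough Python equivalent of the same idea – assuming boto3 and kubectl are available on the host, and that the pod manifest references a secret named ecr-credentials – might look like this:

# Rough sketch of the token-refresh step (assumptions: boto3, kubectl on PATH,
# and a secret name "ecr-credentials" matching the pod manifest's imagePullSecrets).
import base64
import subprocess
import boto3

ecr = boto3.client("ecr", region_name="us-east-1")
auth = ecr.get_authorization_token()["authorizationData"][0]

# The token decodes to "AWS:<password>" and is only valid for roughly 12 hours.
user, password = base64.b64decode(auth["authorizationToken"]).decode().split(":", 1)
registry = auth["proxyEndpoint"]

subprocess.run(["kubectl", "delete", "secret", "ecr-credentials",
                "--ignore-not-found"], check=True)
subprocess.run(["kubectl", "create", "secret", "docker-registry", "ecr-credentials",
                "--docker-server", registry,
                "--docker-username", user,
                "--docker-password", password], check=True)

Running this on a schedule (for example, via cron) keeps the secret fresh before the 12-hour expiry.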

After running the script above, you can deploy a kubernetes manifest via kubectl, specifying the relevant details of the ECR repository, as you normally would. The example pod manifest below demonstrates how the ECR repository used by an application is specified in the image property.


To pull images from an ECR registry, it is necessary to provide credentials. This is described in Amazon’s ECR documentation. For a user running docker, it looks like this (ecr:GetAuthorizationToken privileges are required), while Kubernetes will use the credentials stored in a Kubernetes secret, as described earlier, and specified in the “imagePullSecrets” field of the pod manifest.


With CCP, you can deploy both your on-premises and Amazon EKS worker nodes with the same Kubernetes version and operating system.

At launch, you can deploy Kubernetes v1.10 with Ubuntu 18.04 worker nodes, using Cisco-provided images. You do not have to worry about Kubernetes and operating system version inconsistencies across siloed environments. Updates and security patches across the on-premises and AWS environment are handled seamlessly and provided via the CCP control plane software.

Common Monitoring and Logging


CCP provides integrated cluster monitoring via a Prometheus and EFK stack (ElasticSearch/FluentD/Kibana) that is deployed within each cluster created by CCP. Monitoring each cluster individually complies with best practices that mandate separating production data from development data and keeping information local for GDPR. It also ensures that logs and metrics do not rely on a central service that could become unavailable. Cisco Services can help with log forwarding and central metrics collection, as well as integration with a customer’s own logging and metrics systems, as desired.

Value-added Integrations for Connectivity, Security and Monitoring


Cisco’s extended cross-portfolio solutions provide a range of value-added solutions that can be leveraged from the AWS marketplace to complement the Cisco Hybrid Solution for Kubernetes on AWS.

These include:

◈ Application Deployment: Use Cisco CloudCenter to securely deploy both Kubernetes and VM-based workloads across both private and public infrastructure.

◈ Connectivity: Use Cisco CSR1000v to establish VPN connectivity between hybrid on-premises and cloud environments

◈ Security: Deploy Cisco Stealthwatch to monitor application traffic for anomalies, leveraging AWS flow logs for cloud-based workloads.

◈ Monitoring: Enable AppDynamics application performance monitoring to see the real-time impact that application performance has on your business results.