Saturday, 25 April 2020

Cisco Helps Competitive Carriers Deliver 5G Service Agility

5G promises revolutionary new consumer experiences and lucrative new business-to-business (B2B) services that were never possible before: wireless SD-WANs, private 5G networks, new edge computing use cases, and many others. Actually delivering these groundbreaking services, however, will require much more than just new 5G radio technology at cell sites. It will take very different capabilities, and a different kind of network, than most service providers have in place today.

Ultimately, you need a “service-centric” network—one that provides the flexibility and control to build differentiated services, rapidly deliver them to customers, and manage them end-to-end—across both wireless and wireline domains. What does a service-centric network look like? And what’s the best way to get there from where you are today? Let’s take a closer look.

Building a Service-Centric Network


Viewing the media coverage around 5G, you might think the revolution begins and ends with updating the radio access network (RAN). But that’s just one piece of the puzzle. Next-generation services will take advantage of the improved bandwidth and density of 5G technology, but it’s not new radios, or even 5G packet cores, that make them possible. Rather, they’re enabled by the ability to create custom virtual networks tuned to the needs of the services running across them. That’s what a service-centric network is all about.

When you can tailor traffic handling end-to-end on a per-flow basis, you can deliver all manner of differentiated services over the same infrastructure. And, when you have the end-to-end automation that service-centric networks imply, you can do it much more efficiently. Those capabilities go much deeper than the radios at your cell sites. Sure, adding 5G radios will improve last-mile speeds for your customers. But if you’re not evolving your end-to-end infrastructure towards service-centric principles, you won’t be able to deliver net-new services—or tap new B2B revenue streams.

Today, Cisco is helping operators of all sizes navigate this journey. We’re providing essential 5G technologies to help service providers like T-Mobile transform their networks and services. (In fact, Cisco is providing the foundational technology for T-Mobile’s non-standalone and standalone 5G architecture strategy.) At the same time, we’re building on our legacy as the leader in IP networking to unlock new transport, traffic handling, and automation capabilities. At the highest level, this evolution entails:

1. Implementing next-generation IP-based traffic handling

2. Extending IP all the way to endpoints

3. Laying the foundation for end-to-end automation

Optimizing Traffic Management


As the first step in building a service-centric network, you should be looking to further the migration of all network connections to IP and, eventually, IPv6. This is critical because IP networks, combined with technologies such as MPLS, enable multi-service networks with differentiated traffic policies. Without advanced traffic management, you can’t provision, monitor, and assure next-generation services under service-level agreements (SLAs), which means you can’t tap into lucrative consumer and business service revenue opportunities.

Today, most operators manage traffic via MPLS. Although MPLS has been highly effective at enabling traffic differentiation, it has complexity issues that can impede the scale and automation of tomorrow’s networks. Fortunately, there’s another option: segment routing. Segment routing offers a much simpler way to control traffic handling and policy on IP networks. And, by allowing you to programmatically define the paths individual services take through the network, it enables much more efficient transport.

Many operators have deployed segment routing and are evolving their networks today. You can start now even in “brownfield” environments. Cisco is helping operators implement SR-MPLS in a way that coexists with current architectures, and even interoperates with standards-based legacy solutions from other vendors. Once that foundation is in place, it becomes much easier to migrate to full IPv6-based segment routing (SRv6) in the future.

Extending IP


As you are implementing segment routing, you should go one step further and extend these new service differentiation capabilities as close to the customer as possible. This is a natural progression of what operators have been doing for years: shifting almost all traffic to IP to deliver it more effectively.

Using segment routing in your backhaul rather than Layer-2 forwarding allows you to use uniform traffic management everywhere. Otherwise, you would have to do a policy translation every time a service touches the network. Now, everything uses segment routing end to end, instead of requiring different management approaches for different domains. You can uniformly differentiate traffic based on needs, applications, even security, and directly implement customer SLAs in network policy. Suddenly, managing services and integrating the RAN with the MPLS core becomes much simpler.

The other big benefit of moving away from Layer-2 forwarding: a huge RAN capacity boost. Layer-2 architectures must be loop-free, which means half the paths coming off a radio node—half your potential capacity—are always blocked. With segment routing, you can use all paths and immediately double your RAN bandwidth.

Building Automation


As you progress in building out your service-centric network, you’re going to be delivering many more services. And you’ll need to manage more diverse traffic flows with improved scale, speed, and efficiency. You can’t do that if you’re still relying on slow, error-prone manual processes to manage and assure services. You’ll need to automate.

Cisco is helping service providers of all sizes lay the foundation for end-to-end automation in existing multivendor networks. That doesn’t have to mean a massive technology overhaul either, with a massive price tag to go with it. You can take pragmatic steps towards automation that deliver immediate benefits while laying the groundwork for much simpler, faster, more cost-effective models in the future.

Get the Value You Expect from 5G Investments


The story around 5G isn’t fiction. This really is a profound industry change. It really will transform the services and revenue models you can bring to market. But some things are just as true as they always were: you don’t generate revenues from new radio capabilities; you generate them from the services you can deliver across IP transport.

What’s new is your ability to use next-generation traffic handling to create services that are truly differentiated. That’s what the world’s largest service providers are building right now, and it’s where the rest of the industry needs to go if they want to compete and thrive.

Let Cisco help you build a service-centric network to capitalize on the 5G revolution and radically improve the efficiency, scalability, and total cost of ownership of your network.

Friday, 24 April 2020

Why Cisco ACI with HashiCorp Terraform really matters

Introduction


As organizations move to the cloud to prioritize application delivery, they find they need to shift their approach to infrastructure from “static” to “dynamic”. Among the challenges they face is having to work across multiple environments with varying volume and distribution of services. In many organizations, the cloud operating model is forcing IT to shift from manual workflows to infrastructure as code (IaC) automation, and from ticket-driven processes to self-service IT. In this blog, let’s take a quick tour of how Cisco and HashiCorp have joined forces to address these challenges and help customers gain business agility with infrastructure automation as a core strategy.

Terraform and Cisco ACI – A win-win joint solution for Infrastructure as Code deployments


IaC is an innovative approach to building application and software infrastructure with code, and customers deploying applications in the cloud are clearly seeing the payoff. For many, though, full adoption of IaC remains elusive given the expertise required to navigate infrastructure complexity. This is where Terraform comes in. Terraform brings software best practices such as versioning and testing to infrastructure, making it a powerful tool for creating and destroying infrastructure components on the fly. It also obviates the need for the separate configuration managers typically required in traditional IaC approaches, handling those tasks itself behind the scenes. Likewise, Cisco ACI, a network platform built on SDN principles, enhances business agility, reduces TCO, automates IT tasks, and accelerates data center application deployments. Together, Cisco ACI and Terraform enable customers to embrace the DevOps model and accelerate ACI deployment, monitoring, day-to-day management, and more.

Cisco ACI – Terraform Solution architecture


Terraform manages both popular existing services and custom in-house solutions through more than 100 providers. To address some of the challenges listed earlier, especially in multicloud networking, Cisco and HashiCorp have worked together to deliver the ACI provider for Terraform, built on Terraform’s plugin extensibility. The Cisco ACI provider supports more than 90 resources and data sources.

Terraform provides a simple workflow to install and get started with. With Terraform installed, let’s dive right in and start creating some configuration intent on Cisco ACI. See the diagram below for an illustration of the workflow steps.

Key Benefits of Terraform-ACI solution


Some of the key benefits Cisco ACI and HashiCorp Terraform bring are the following:

1. Define infrastructure as code and manage the full lifecycle. Create new resources, manage existing ones, and destroy those no longer needed.

2. Terraform provides an elegant user experience for operators to safely and predictably make changes to infrastructure.

3. Terraform makes it easy to re-use configurations for similar infrastructure, helping avoid mistakes and save time.

Thursday, 23 April 2020

Automation, Learning, and Testing Made Easier With Cisco Modeling Labs Enterprise v2.0

Cisco Modeling Labs – Enterprise v2.0 is here, sporting a complete rewrite of the software and a slew of cool, new features to better serve your education, network testing, and CI/CD automation needs. Version 2.0 still gives you a robust network simulation platform with a central store of defined and licensed Cisco IOS images, and now it also provides a streamlined HTML5 user interface with a lean backend that leaves more resources free to run your lab simulations.

CML 2.0 Workbench  

This attention to streamlining and simplification extends to installation and getting started as well. You can install and configure Cisco Modeling Labs – Enterprise v2.0 in no time. And you’ll be building labs in as little as ten minutes.

As you use Cisco Modeling Labs to virtualize more and more network testing processes, topologies can grow quite large and complex. This can strain host resources such as memory and CPU. So after the nodes start, the Cisco Modeling Labs engine uses Linux kernel same-page merging (KSM) to optimize the lab memory footprint. KSM essentially allows Cisco Modeling Labs to deduplicate the common memory blocks that each virtual node’s OS uses. The result? More free memory for labs.

API First

The HTML5 UI only scratches the surface of what’s new. Cisco Modeling Labs – Enterprise v2.0 is an “API first” application. All of the operations performed in the UI – adding labs, adding nodes, positioning nodes on a topology canvas, creating links, starting up a simulation, and so forth – are powered by a rich RESTful API. With this API, you can tie Cisco Modeling Labs into network automation workflows such as Infrastructure as Code pipelines, so you can test network configuration changes before deploying them in production.

CML API In Action

To make it even easier to integrate Cisco Modeling Labs – Enterprise v2.0 into your NetDevOps toolchains, the software includes a Python client library to handle many of the lower-level tasks transparently, allowing you to focus on the fun bits of putting network simulation right into your workflows. For example, the client library already drives an Ansible module to automate lab creation and operation.
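
As a rough illustration of what that looks like, here is a minimal sketch using the virl2_client package. The controller URL and credentials are placeholders, and exact method names can vary slightly between client versions, so treat this as a starting point rather than a definitive recipe.

# Hedged sketch: drive a CML 2.x controller with the Python client library.
# The controller address, credentials, and node definitions are placeholders.
from virl2_client import ClientLibrary

client = ClientLibrary("https://cml-controller.example.com", "admin", "password",
                       ssl_verify=False)

lab = client.create_lab("api-smoke-test")          # create a new, empty lab
r1 = lab.create_node("r1", "iosv", x=0, y=0)       # two IOSv routers on the canvas
r2 = lab.create_node("r2", "iosv", x=200, y=0)
lab.connect_two_nodes(r1, r2)                      # create a link between them

lab.start()                                        # boot the simulation
print(lab.state())                                 # lab state once nodes come up

Because this is plain Python, the same few calls can sit inside a test fixture or a CI job that spins labs up and tears them down on demand.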

The CML Python Client Library

Flexible Network and Service Integration


Sometimes your virtual lab needs to talk to physical devices in the “real” world. Cisco Modeling Labs – Enterprise v2.0 makes it simple to connect virtual topologies with external networks in either a Layer 3 network address translation (NAT) mode or a Layer 2 bridged mode. In bridged mode, the external connector node shares the virtual network interface card (vNIC) of the Cisco Modeling Labs VM, so lab nodes can participate in routing protocols like OSPF and EIGRP, and in multicast groups, with physical network elements and hosts. This lets you integrate external services and tools with your virtual labs. For example, an external network management application can monitor or configure your virtual nodes.

But you can also clone some of these services directly into your virtual labs. Cisco Modeling Labs – Enterprise v2.0 includes images for Ubuntu Linux, CoreOS, and an Alpine Linux desktop. With these, you can run network services, spin up Docker containers, and drive graphical UIs directly from Cisco Modeling Labs. Don’t want to use the web interface to access consoles and Virtual Network Computing (VNC)? Cisco Modeling Labs includes a “breakout tool” that maps ports on your local client to nodes within a lab. So you can use whatever terminal emulator or VNC client you want to connect to your nodes’ consoles and virtual monitors.

Wednesday, 22 April 2020

Cisco and Google Cloud Partner to Bridge Applications and Networks: Announcing Cisco SD-WAN Cloud Hub with Google Cloud

Today, Cisco and Google Cloud are announcing their intent to develop the industry’s first application-centric multicloud networking fabric. This automated solution will ensure that applications and enterprise networks will be able to share service-level agreement settings, security policy, and compliance data, to provide predictable application performance and consistent user experience.

Our partnership will support the many businesses that are embracing a hybrid and multicloud strategy to get the benefits of agility, scalability and flexibility. The platform will enable businesses to optimize application stacks by distributing application components to their best locations. For example, an application suite could support a front end running on one public cloud to optimize for cost, an analytics library on another cloud to leverage its AI/ML capabilities, and a financial component running on-prem for optimal security and compliance.

The connective fabric for these modern enterprise apps (and their distributed users) is the network. The network fabric needs to be able to discover the apps, identify their attributes, and adapt for optimal user experience. Likewise, applications need to respond to changing needs and sometimes enterprise-sized shifts in load, while maintaining availability, security and compliance, through the correlation of application, user and network insights.

In our newly-expanded partnership, Cisco and Google Cloud will build a close integration between Cisco SD-WAN solutions and Google Cloud. Network policies, such as segmentation, will follow network traffic across the boundary between the enterprise network and Google Cloud, for end-to-end control of security, performance, and quality of experience.

“Expansion of the Google-Cisco partnership represents a significant step forward for enterprises operating across hybrid and multi-cloud environments,” says Shailesh Shukla, Vice President of Products and General Manager, Networking at Google Cloud. “By linking Cisco’s SD-WAN with Google Cloud’s global network and Anthos, we can jointly provide customers a unique solution that automates, secures and optimizes the end to end network based on the application demands, simplifying hybrid deployments for enterprise organizations.”

With Cisco SD-WAN Cloud Hub with Google Cloud, for the first time, the industry will have full WAN application integration with cloud workloads: The network will talk to apps, and apps will talk to the network. Critical security, policy, and performance information will be able to cross the boundaries of network, cloud, and application. This integration will extend into hybrid and multicloud environments, like Anthos, Google Cloud’s open application platform, supporting the optimization of distributed, multicloud microservice-based applications.

Towards Stronger Multicloud Apps


Today, applications do not have a way to dynamically signal SLA requests to the underlying network.  With this new integration, applications will be able to dynamically request the required network resources, by publishing application data in Google Cloud Service Directory. The network will be able to use this data to provision itself for the appropriate SD-WAN policies.

For example, a business-critical application that needs low latency would have that requirement in its Google Cloud Service Directory entry. The appropriate SD-WAN policies would then be applied on the network. Likewise, as Cisco’s SD-WAN controller, vManage, monitors network performance and service health metrics, it could intelligently direct user requests to the most optimal cloud service nodes.
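
As a purely illustrative sketch of the kind of signal an application could publish, here is how a service entry with SLA hints can be registered using the Google Cloud Service Directory Python client. The project, namespace, and annotation keys are assumptions for illustration, and the way Cisco SD-WAN would consume them is part of the announced integration rather than a documented contract today.

# Hypothetical example: publish SLA metadata for a service in Cloud Service Directory.
# The project/namespace names and annotation keys below are placeholders.
from google.cloud import servicedirectory_v1

client = servicedirectory_v1.RegistrationServiceClient()

parent = "projects/my-project/locations/us-central1/namespaces/payments"
service = servicedirectory_v1.Service(
    annotations={
        "sla": "low-latency",       # hint an SD-WAN controller could map to a policy
        "max-latency-ms": "50",
    }
)

created = client.create_service(parent=parent, service_id="frontend", service=service)
print(created.name)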

Cisco SD-WAN can also proactively provide network insights to a distributed system, like Google Anthos, to ensure availability of applications. For example, based on network reachability metrics from Cisco SD-WAN, Anthos can make real-time decisions to divert traffic to regions with better network reachability.

With Cisco SD-WAN Cloud Hub for Google Cloud, customers can extend the single point of orchestration and management for their SD-WAN network to include the underlay offered by the Google Cloud backbone. Together with Cisco SD-WAN, Google Cloud’s reliable global network provides enterprise customers with operational efficiency and the agility to scale up bandwidth.

This integration will promote better security and compliance for enterprises. Using Cisco Cloud Hub, policies can be extended into the cloud to authorize access to cloud applications based on user identity. With Cisco SD-WAN Cloud Hub, any device, including IoT devices, will be able to comply with enterprise policy across the network and app ecosystem.

The partnership to create the Cisco SD-WAN Cloud Hub with Google Cloud will lead to applications with greater availability, improved user experience, and more robust policy management.

A Partnership Solution for Tomorrow’s Networks


For enterprise customers who deploy applications in Google Cloud and multicloud environments, Cisco SD-WAN Cloud Hub with Google Cloud offers a faster, smarter, and more secure way to connect with them and consume them. The Cloud Hub will increase the availability of enterprise applications by intelligently discovering and implementing their SLAs through dynamically orchestrated SD-WAN policy. The platform will decrease risk and increase compliance, offering end-to-end policy-based access, and enhanced segmentation from users to applications in the hybrid multicloud environment.

Cisco and Google Cloud intend to invite select customers to participate in previews of this solution by the end of 2020. General availability is planned for the first half of 2021.

Tuesday, 21 April 2020

Keeping applications safe and secure in a time of remote work

Businesses around the world have quickly shifted to remote work, with more users accessing critical workloads outside the traditional workplace than ever before. New attack vectors are inadvertently being introduced as users access their workloads outside the four walls of the workplace and the security protection those walls provide.

To combat the uncertainty and risks introduced by mobilizing a larger-than-normal remote workforce, it is critical that IT maintains visibility into network and application behavior from both the users’ remote-access machines and the critical workloads they’re accessing in the data center, the cloud, or both (Figure 1). Additionally, it is critical for cybersecurity operators to be able to move to a whitelist/zero-trust segmentation model for the network traffic they deem critical for the business to function, and to do so in a way that can be implemented in a matter of minutes.

Figure 1 – Example of hybrid deployment with remote access to critical workloads

Cisco Tetration and Cisco AnyConnect now pair together to provide comprehensive workload protection during these volatile times. These technologies allow IT operators to mitigate many of the risks introduced by an increased attack surface at the “access” layer of the network, and to enforce policies that secure both the edge and the workloads. Let’s take a look at the two most relevant use cases:

Use Case 1 – Gain visibility into the network and application behavior of the remote workforce and the workloads they’re accessing. Figure 2 shows how AnyConnect and Tetration work together, sharing telemetry to provide granular visibility:

Figure 2 – Tetration and AnyConnect Integration

Use Case 2 – Easily implement whitelist network policies that protect access to the workloads themselves. Figure 3 demonstrates Tetration enforcing enterprise-wide policies that affect the organization as a whole. Figure 4 shows Tetration enforcing policies, based on application and workload behavior, that keep workloads compliant. Having these policies across workloads running anywhere (on-prem, cloud, or both) adds protection that stretches beyond perimeter security. With workloads being accessed remotely, micro-segmentation prevents lateral movement of threats, reducing the attack surface.

Figure 3 – Enterprise-wide policies on Tetration

Figure 4 – Policies on workload based on workload behavior
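
Everything shown in these figures can also be pulled programmatically through Tetration’s OpenAPI. The sketch below uses the tetpyclient package; the endpoint path, time window, and filter fields are from memory and are assumptions worth checking against your cluster’s OpenAPI documentation before use.

# Hedged sketch: query Tetration flow telemetry via the OpenAPI using tetpyclient.
# The cluster URL, credentials file, and filter schema are placeholders.
import json
from tetpyclient import RestClient

rc = RestClient("https://tetration.example.com",
                credentials_file="api_credentials.json", verify=False)

query = {
    "t0": "2020-04-21T00:00:00-0000",   # start of the search window
    "t1": "2020-04-21T23:59:59-0000",   # end of the search window
    "limit": 100,
    "filter": {
        "type": "and",
        "filters": [
            {"type": "eq", "field": "dst_port", "value": 443},
        ],
    },
}

resp = rc.post("/flowsearch", json_body=json.dumps(query))
for flow in resp.json().get("results", []):
    print(flow.get("src_address"), "->", flow.get("dst_address"))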

Now let us dive into the critical elements that help you maintain full visibility and monitor your security policies as your environment evolves. Note that all of the images below come from a running Tetration demonstration.

1. Visibility is key—quickly see what applications are being accessed by remote users (Fig.5).

Figure 5 – Applications accessed by remote users

2. Gain control—with deeper insights you have more power to make better IT decisions. Get an understanding of your workload data flows without the added overhead of manual interrogation (Fig. 6). With the Tetration agent running on each workload, you also get a log of the processes that have been accessed (Fig. 7).

Figure 6 – Detailed flow data

Figure 7 – Get the accessed process using Tetration Agent

3. Search optimization—get granular search results using user details. Historically this has been a challenge, but this capability saves you the time of deeper investigation (Fig. 8). Go further by filtering allowed communication policies among workloads by searching AD groups (Fig. 9).

Figure 8 – Filter based on AD user

Figure 9 – Filter allowed communications based on AD Group

Cisco Tetration and AnyConnect can help you ramp up your remote access goals securely.

Monday, 20 April 2020

Imagining a world with ubiquitous Remote Expertise – well at least in Banking

We recently returned from Cisco Live where this year’s theme was IMAGINE. Innovation was everywhere; there was even an IMAGINE Lab. That theme ignited some memories for me about when Cisco initially came out with a solution called Remote Expert. At that time, several years ago, it was the cutting edge of what we could do with video within the business world. I was taken by the impact this technology could have, and began to IMAGINE how someday this concept and technology could be used globally by society to help individuals or groups answer critical questions and solve big problems. I imagined that video would become pervasive, as it has, but also that there would be a way to curate expertise, where you could do a live video chat to research or find answers from those who have that specific knowledge. Well, we are on our way with Siri and Alexa, but not there yet. Someday!

The Remote Expert solution at Cisco has continued to evolve and has been a very effective approach for retail banking. In a bank, Remote Expert provides a fully immersive, virtual face-to-face interaction that replicates the “physical expert in the room” experience by connecting a pool of centralized experts based in a call center or another office with the customer sitting in a branch or brokerage. Why is this needed in banking? The bank branch is a critical part of the omnichannel experience and needs to provide the entire set of products and services offered. However, it’s definitely not cost effective to have an expert in every service, from personal and business accounts through mortgages and investments.

Having Remote Expert capability allows the bank to match the erratic customer demand for the expertise they need across an entire physical branch network without having an expert sitting within each branch. It allows the bank to increase availability of experts at peak times to augment what expertise currently sits in the branch. In addition, the bank can then offer their entire expansive product line in all branches at all times at a price the customer can afford. Having expertise physically located in each branch is totally cost prohibitive. The price of services without Remote Expert would be so high, no one could afford them.

In addition, as financial institutions are burdened with more and more regulation, the risk of selling complex advisory products increases. Banks are looking for ways to meet regulatory requirements while reducing risk and lowering costs. Remote Expert does both. An added benefit is how Remote Expert lowers the cost of training and supervising a dispersed physical sales force across a large geographical area. The technology allows the bank to keep everyone updated and trained using the existing video capability. Just another cost-saving benefit.

Cisco, along with our partner Moderro, is delivering Remote Expert solutions to bank branches, as well as in the form of a standalone unit. The standalone unit allows the bank to go to the customer, extending reach to wherever there’s market need.

The Remote Expert solution allows the bank to:

◉ Maintain customer intimacy and satisfaction while introducing new product lines to existing branches without deploying a physical sales force

◉ Reduce or replace a physical sales force with a centralized sales force to optimize expert usage

◉ Deliver smaller-footprint branches, with fewer staff members selling more complex products at a price that is acceptable to the bank

◉ Increase assisted self-service capability in the banking hall to reduce service staff

◉ Reduce sales distribution risk (the risk of mis-selling)

◉ Reduce costs

◉ Increase revenues

Sunday, 19 April 2020

Configuring Cisco ACI and Cisco UCS Deployments with Cisco Action Orchestrator

DevNet Developer Advocates, Unite!


Cisco Action Orchestrator (CAO) has been getting some attention from the DevNet Developer Advocates recently. So, I’m going to jump in and do my part and write about how I used CAO to configure Cisco ACI and Cisco UCS.

It was kind of interesting how this plan came together. We were talking about a Multi-domain automation project/demo and were trying to determine how we should pull all the parts together. We all had different pieces: Stuart had SD-WAN; Matt had Meraki; I had UCS and ACI; Kareem had DNAC; and Hank had the bits and pieces and what-nots (IP addresses, DNS, etc.).

I suggested Ansible, Matt and Kareem were thinking Python, Stuart proposed Postman Runner, and Hank threw CAO into the mix. All valid suggestions. CAO stood out, though, because we could still use the tools we were familiar with and have them called from a CAO workflow. Or we could use the built-in capabilities of CAO, like the “HTTP Request” activity in the “Web Service” Adapter.

Gentlemen, Start Your Workflows


CAO was decided upon and we all got to work on our workflows. The individual workflows are the parts we needed to make “IT” happen. Whatever the “IT” was.

I had two pieces to work on:

1. Provision a UCS Server using a Service Profile
2. Create an ACI tenant with all the necessary elements for policies, networks, and an Application Profile

I knew a couple of things my workflows would need – an IP Address and a Tenant Name. Hank’s workflows would give me the IP Address and Matt’s workflows would give me the Tenant Name.

Information in hand I got to making my part of the “IT” happen. At the same time, the others, knowing what they needed to provide, got to making their part of the “IT” happen.

Cisco ACI – DRY (Don’t Repeat Yourself)


A pretty standard rule in programming is “don’t repeat yourself.” What does it mean? Simply put, if you are going to do something more than once, write a function!

I already wrote several Ansible Playbooks to provision a Cisco ACI Tenant and create an Application Profile, so I decided to use them. The workflow I created in CAO for ACI is just a call to the “Execute Linux/Unix SSH Script” activity in the “Unix/Linux System” Adapter.

For the Workflow I had to define a couple of CAO items.

◉ A target – In this case, the “target” is a Linux host where the Ansible Playbook is run.
◉ An Account Key – The “Account Key” is the credential that is used to connect to the “target”. I used a basic “username/password” credential.

The workflow is simple…

◉ Get the Tenant Name
◉ Connect to the Ansible control node
◉ Run the ansible-playbook command passing the Tenant Name as an argument.
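
In other words, the SSH activity ends up doing something roughly equivalent to the sketch below. The control-node hostname, credentials, playbook filename, and variable name are placeholders for whatever your environment and playbooks actually use.

# Hedged sketch of what the CAO "Execute Linux/Unix SSH Script" activity effectively does:
# SSH to the Ansible control node and run the playbook with the tenant name as an extra var.
import paramiko

def run_aci_tenant_playbook(tenant_name: str) -> str:
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect("ansible-control.example.com", username="automation", password="secret")

    cmd = f"ansible-playbook aci_tenant.yml --extra-vars 'tenant_name={tenant_name}'"
    stdin, stdout, stderr = ssh.exec_command(cmd)
    output = stdout.read().decode()
    ssh.close()
    return output

# Example: print(run_aci_tenant_playbook("DevNetTenant"))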

Could more be done here? Absolutely – error checking/handling, logging, additional variables, and so on. Lots more could be done, but I would do it as part of my Ansible deployment and let CAO take advantage of it.

Cisco UCS – Put Your SDK in the Cloud


The Cisco UCS API is massive – thousands and thousands of objects and a multitude of methods to query and configure those objects. Once connected to a UCS Manager, all interactions come down to two basic operations: Query and Configure.
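
For context, here is roughly what those two operations look like when you call the UCS Python SDK directly. The UCS Manager address and credentials are the same placeholders used in the sample payloads further down.

# Sketch: the two basic UCS operations (query and configure) via the UCS Python SDK.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.org.OrgOrg import OrgOrg

handle = UcsHandle("ucsm.company.com", "admin", "password")
handle.login()

# Query: fetch every rack server object of class computeRackUnit
for server in handle.query_classid("computeRackUnit"):
    print(server.dn, server.model)

# Configure: create an organization called Org01 under org-root
org = OrgOrg(parent_mo_or_dn="org-root", name="Org01")
handle.add_mo(org, modify_present=True)
handle.commit()

handle.logout()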

I created an AWS Lambda function and made it accessible via an AWS API Gateway. The Lambda function accepts a JSON formatted payload that is divided into two sections, auth, and action.

The auth section contains the hostname, username, and password used to connect to the Cisco UCS Manager.

The action section is a listing of Cisco UCS Python SDK modules, classes, and object properties. The Lambda function uses Python’s dynamic module importing capability to import modules as needed.

This payload connects to the specified UCS Manager and creates an Organization called Org01:

{
    "auth": {
        "hostname": "ucsm.company.com",
        "username": "admin",
        "password": "password"
    },
    "action": {
        "method": "configure",
        "objects": [
            {
                "module": "ucsmsdk.mometa.org.OrgOrg",
                "class": "OrgOrg",
                "properties": {
                    "parent_mo_or_dn": "org-root",
                    "name": "Org01"
                },
                "message": "created organization Org01"
            }
        ]
    }
}

This payload connects to the specified UCS Manager and queries the class ID computeRackUnit:

{
    "auth": {
        "hostname": "ucsm.company.com",
        "username": "admin",
        "password": "password"
    },
    "action": {
        "method": "query_classid",
        "class_id": "computeRackUnit"
    }
}
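
A minimal sketch of how a Lambda handler could process payloads like these, assuming an API Gateway proxy integration and using the dynamic-import approach described above, might look like this. It is illustrative only, not the exact production function.

# Hedged sketch of the Lambda handler that processes the payloads shown above.
import importlib
import json
from ucsmsdk.ucshandle import UcsHandle

def lambda_handler(event, context):
    # With an API Gateway proxy integration, the JSON payload arrives in event["body"].
    payload = json.loads(event["body"]) if "body" in event else event
    auth, action = payload["auth"], payload["action"]

    handle = UcsHandle(auth["hostname"], auth["username"], auth["password"])
    handle.login()
    try:
        if action["method"] == "query_classid":
            mos = handle.query_classid(action["class_id"])
            return {"statusCode": 200, "body": json.dumps([mo.dn for mo in mos])}

        results = []
        for obj in action["objects"]:
            module = importlib.import_module(obj["module"])   # e.g. ucsmsdk.mometa.org.OrgOrg
            mo_class = getattr(module, obj["class"])          # e.g. OrgOrg
            handle.add_mo(mo_class(**obj["properties"]), modify_present=True)
            results.append(obj.get("message", obj["class"]))
        handle.commit()
        return {"statusCode": 200, "body": json.dumps(results)}
    finally:
        handle.logout()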

To be honest there is a lot of DRY going on here as well. The Python code for this Lambda function is very similar to the code for the UCS Ansible Collection module, ucs_managed_objects.

In CAO, the workflow uses the Tenant Name as input, and a Payload is created to send to the UCS SDK in the Cloud. There are two parts to the UCS CAO workflows: one workflow creates the Payload, and another workflow processes the Payload by sending it to the AWS API Gateway. Creating a generic Payload-processing workflow enables the processing of any payload that defines any UCS object configuration or query.
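
Outside of CAO, that payload-processing step boils down to a single HTTP POST. Here is a small sketch with the requests library; the gateway URL below is hypothetical.

# Hedged sketch: send a UCS payload to the API Gateway that fronts the Lambda function.
import requests

API_GATEWAY_URL = "https://abc123.execute-api.us-east-1.amazonaws.com/prod/ucs"  # placeholder

def process_payload(payload: dict) -> dict:
    """POST a UCS auth/action payload and return the Lambda's JSON response."""
    resp = requests.post(API_GATEWAY_URL, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()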

For this workflow I needed to define a target in CAO:

◉ Target – In this case the “target” is the AWS API Gateway that accepts the JSON Payload and sends it on to the AWS Lambda function.

The workflow is simple…

◉ Get the Tenant Name and an IP Address
◉ Create and process the Payload for a UCS Organization named with the Tenant Name
◉ Create and process the Payload for a UCS Service Profile named with the Tenant Name

Pulling “IT” all together


Remember, earlier I indicated that we all went off and created our “IT” separately using our familiar tools and incorporating some DRY. When the time came for us to put our pieces together, it didn’t matter that I was using Ansible, Python, and AWS; my workflows just needed a couple of inputs. As long as those inputs were available, we were able to tie my workflows into a larger Multi-domain workflow.

The same was true for Stuart, Matt, Kareem, and Hank. As long as we knew what input was needed and what output was needed, we could connect our individual pieces. The pieces of “IT” fit together nicely, and in less than forty minutes on a Webex call we had the main all-encompassing workflow created and running without error.