
Tuesday, 7 March 2023

ACI Segmentation and Migrations made easier with Endpoint Security Groups (ESG)

Let’s open with a question: “How are you handling security and segmentation requirements in your Cisco Application Centric Infrastructure (ACI) fabric?”

I expect most answers will relate to the constructs of Endpoint Groups (EPGs), contracts, and filters. These concepts are the foundations of ACI. But as with any infrastructure capability, designs and customer requirements are constantly evolving, often leading to new segmentation challenges. That is why I would like to introduce a relatively recent, powerful option called Endpoint Security Groups (ESGs). Although ESGs were introduced in Cisco ACI a while back (version 5.0(1), released in May 2020), there is still ample opportunity to bring this functionality to a broader audience.

For those who have not explored the topic yet, ESGs offer an alternative way of handling segmentation, decoupling it from the forwarding behavior that has always been tied to Endpoint Groups. In other words, ESGs handle security segmentation separately from forwarding, allowing more flexibility with each.

EPG and ESG – Highlights and Differences


The easiest way to manage endpoints with common security requirements is to put them into groups and control communication between them. In ACI, these groups have traditionally been represented by EPGs. Contracts attached to EPGs control communication and other policies between groups with different postures. Although an EPG primarily provides network security, it must be married to a single bridge domain, because EPGs define both forwarding policy and security segmentation simultaneously. This direct relationship between a Bridge Domain (BD) and an EPG prevents an EPG from spanning more than one bridge domain. ESGs alleviate this design constraint: networking (i.e., forwarding policy) stays at the EPG/BD level, and security enforcement moves to the ESG level.

Operationally, the ESG concept is similar to, and more straightforward than, the original EPG approach. Just like with EPGs, communication is allowed among any endpoints within the same group, but in the case of ESGs, this is independent of the subnet or BD they are associated with. For communication between different ESGs, we need contracts. That sounds familiar, doesn't it? ESGs use the same contract constructs we have been using in ACI since inception.

So, what are the benefits of ESGs? In a nutshell, where EPGs are bound to a single BD, ESGs allow you to define a security policy that spans multiple BDs. This is to say you can group and apply policy to any number of endpoints across any number of BDs under a given VRF. At the same time, ESGs decouple the forwarding policy, which allows you to do things like VRF route leaking in a much simpler and more intuitive manner.

ESG: A Simple Use Case Example


To give an example of where ESGs can be useful, consider a brownfield ACI deployment that has been in operation for years. Over time, things tend to grow organically. You might find you have created more and more EPG/BD combinations, only to realize later that many of these EPGs actually share the same security profile. With EPGs, you would be deploying and consuming more contract resources to achieve what you want, plus potentially adding to your management burden with more objects to keep an eye on. With ESGs, you can now simply group all these brownfield EPGs and their endpoints and apply the common security policies only once. Importantly, you can do this without changing anything related to the IP addressing or BD settings they use to communicate.

So how do you assign an endpoint to an ESG? You do this with a series of matching criteria. In the first release of ESGs, the available matching criteria were limited. Starting from ACI 5.2(1), the matching criteria have been expanded to provide more flexibility for endpoint classification and ease of use. Among them: Tag Selectors (based on MAC, IP, VM tag, subnet), whole-EPG Selectors, and IP Subnet Selectors. All the details about the different selectors can be found here: https://www.cisco.com/c/en/us/td/docs/dcn/aci/apic/6x/security-configuration/cisco-apic-security-configuration-guide-60x/endpoint-security-groups-60x.html.
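To make this concrete, here is a minimal Python sketch (using the requests library against the standard APIC aaaLogin endpoint) that creates an ESG with an EPG Selector and an IP Subnet Selector. The tenant, application profile, and selector values are hypothetical, and the object class names (fvESg, fvEPgSelector, fvEPSelector) reflect my understanding of the APIC object model, so verify them against the APIC REST API reference for your release before using anything like this.

import requests

APIC = "https://apic.example.com"   # hypothetical APIC address
AUTH = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}

session = requests.Session()
session.verify = False              # lab only; use proper certificates in production

# 1. Authenticate against the standard aaaLogin endpoint
session.post(f"{APIC}/api/aaaLogin.json", json=AUTH).raise_for_status()

# 2. Create ESG "Web-ESG" under tenant "Prod", app profile "App1", with two selectors.
#    Class names below (fvESg, fvEPgSelector, fvEPSelector) are assumed for illustration.
esg = {
    "fvESg": {
        "attributes": {"name": "Web-ESG"},
        "children": [
            # EPG Selector: pull in every endpoint of an existing EPG (contracts are inherited)
            {"fvEPgSelector": {"attributes": {"matchEpgDn": "uni/tn-Prod/ap-App1/epg-Web"}}},
            # IP Subnet Selector: classify a whole subnet, regardless of its BD or EPG
            {"fvEPSelector": {"attributes": {"matchExpression": "ip=='10.10.10.0/24'"}}},
        ],
    }
}
resp = session.post(f"{APIC}/api/mo/uni/tn-Prod/ap-App1/esg-Web-ESG.json", json=esg)
resp.raise_for_status()
print(resp.json())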

EPG to ESG Migration Simplified


In cases where your infrastructure is already diligently segmented with EPGs and contracts that reflect application tiers' dependencies, ESGs are designed to let you migrate your policy with very little effort.

The first question that probably comes to mind is how to achieve that. With the EPG Selector, one of the new methods of classifying endpoints into ESGs, we enable a seamless migration to the new grouping concept by inheriting contracts from the EPG level. This is an easy way to quickly move all the endpoints within one or more EPGs into your new ESGs.

For a better understanding, let's walk through the example below (see Figure 1). We have a simple two-EPG setup that we will migrate to ESGs. Currently, communication between the EPGs is achieved with contract Ctr-1.

High-level migration steps are as follows:

1. Migrate EPG 1 to ESG 1
2. Migrate EPG 2 to ESG 2
3. Replace the existing contract with one applied between the newly created ESGs.

Figure 1 – Two EPGs with the contract in place

The first step is to create a new ESG 1 in which EPG 1 is matched using the EPG Selector. This means that all endpoints that belong to this EPG become part of the newly created ESG all at once. These endpoints can still communicate with the other EPG(s) because of automatic contract inheritance. (Note: You cannot configure an explicit contract between an ESG and an EPG.)

This state, depicted in Figure 2, is considered an intermediate step of the migration, which the APIC reports with fault F3602 until you migrate the outstanding EPG(s) and contracts. This fault is a way to encourage you to continue the migration process so that all security configuration is maintained by ESGs, keeping the configuration and design simple and maintainable. However, you do not have to do it all at once. You can progress according to your project schedule.

Figure 2 – Interim migration step

As the next step, with the EPG Selector, you migrate EPG 2 to ESG 2. Keep in mind that nothing stands in the way of placing other EPGs into the same ESG (even if those EPGs refer to different BDs). Communication between the ESGs is still allowed thanks to contract inheritance.

As a final step, configure a new contract with the same filters as the original one (Ctr-1-1). Assign one ESG as the provider and the other as the consumer; an explicit contract takes precedence over contract inheritance. Finally, remove the original Ctr-1 contract between EPG 1 and EPG 2. This step is shown in Figure 3.

Figure 3 – Final setup with ESGs and new contract

Easy Migration to ACI


The previous example is mainly applicable when segmentation at the EPG level is already applied according to application dependencies. However, not everyone realizes that ESGs also simplify brownfield migrations from existing environments to Cisco ACI.

A common starting point for new ACI customers is the EPG design itself. Typically, one subnet is mapped to one BD and one EPG to reflect old VLAN-based segmentation designs (Figure 4). Until now, moving from that state to a more application-oriented approach, where an application is broken up into tiers based on function, has not been trivial. It has often required moving workloads between EPGs or re-addressing servers and services, which typically leads to disruptions.

Figure 4 – EPG = BD segmentation design

Introducing application-level segmentation in such a deployment model is challenging unless you use ESGs. So how do you make the migration from pure EPGs to ESGs? With the new selectors available, you can start very broadly and then, when ready, begin to define additional detail and policy. It is a multi-stage process that still allows endpoints to communicate without disruption while you make the transition gracefully. In general, the steps of this process are as follows:

1. Classify all endpoints into one “catch-all” ESG
2. Define new segmentation groups and seamlessly move endpoints out of the "catch-all" ESG into newly created ESGs.
3. Continue until all endpoints are assigned to new security groups.

In the first step (Figure 5), you can enable free communication between EPGs by classifying all of them using EPG Selectors and putting them (temporarily) into one "catch-all" ESG. This is conceptually similar to any "permit-all" solutions you may have used prior to ESGs (e.g., vzAny, Preferred Groups).

Figure 5 – All EPGs are temporarily put into one ESG

In the second step (Figure 6), you can begin to shape and refine your security policy by seamlessly taking endpoints out of the catch-all ESG and putting them into other newly created ESGs that meet your security policy and desired outcome. For that, you can use the other endpoint selector methods available, in this example, Tag Selectors. Keep in mind that there is no need to change any networking constructs related to these endpoints. The VLAN binding of interfaces to EPGs remains the same. There is no need for re-addressing or for moving endpoints between BDs or EPGs.

Figure 6 – Gradual migration from an existing network to Cisco ACI

As you continue to refine your security policies, you will end up in a state where all of your endpoints are now using the ESG model. As your data center fabric grows, you do not have to spend any time worrying about which EPG or which BD subnet is needed because ESG frees you of that tight coupling. In addition, you will gain detailed visibility into endpoints that are part of an ESG that represent a department (like IT or Sales in the above example) or application suite. This makes management, auditing, and other operational aspects easier.

Intuitive route-leaking


It is well understood that Cisco ACI can interconnect two VRFs in the same or different tenants without any external router. However, two additional aspects must be in place for this type of communication to happen: routing reachability and security permission.

Earlier in this blog, I noted that ESGs decouple forwarding from security policy. This is also clearly visible when you need to configure inter-VRF connectivity. Refer to Figure 7 for the high-level, intuitive configuration steps.

Figure 7. Simplified route-leaking configuration. Only one direction is shown for better readability

At the VRF level, configure the subnet to be leaked and its destination VRF to establish routing reachability. A leaked subnet must be equal to, or a subset of, a BD subnet. Next, attach a contract between the ESGs in the different VRFs to allow the desired communication. Finally, you no longer need to configure subnets under the provider EPG (instead of only under the BD) or adjust the BD subnet scope settings; these steps are simply not required anymore. The end result is a much easier way to set up route leaking, without the sometimes confusing and cumbersome steps that were necessary with the traditional EPG approach.
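As an illustration only, the VRF-level piece of this could look something like the following Python sketch against the APIC REST API. I want to stress that the leak-route class names and the target URL used here (leakRoutes, leakInternalSubnet, leakTo) are assumptions based on my reading of the object model, not a verified recipe; check the APIC management information model for your version, or simply follow the GUI steps shown in Figure 7.

import requests

APIC = "https://apic.example.com"   # hypothetical
session = requests.Session()
session.verify = False              # lab only
session.post(
    f"{APIC}/api/aaaLogin.json",
    json={"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}},
).raise_for_status()

# Leak 10.20.0.0/24 from VRF "vrf-a" (tenant Prod) toward VRF "vrf-b" (tenant Shared).
# Class names and the target RN below are assumptions for illustration only.
leak_cfg = {
    "leakRoutes": {
        "attributes": {},
        "children": [{
            "leakInternalSubnet": {
                "attributes": {"ip": "10.20.0.0/24"},
                "children": [{
                    "leakTo": {"attributes": {"tenantName": "Shared", "ctxName": "vrf-b"}}
                }],
            }
        }],
    }
}
session.post(f"{APIC}/api/mo/uni/tn-Prod/ctx-vrf-a/leakroutes.json", json=leak_cfg).raise_for_status()

# A contract between the provider and consumer ESGs is still required to permit the traffic.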

Source: cisco.com

Sunday, 29 May 2022

Enabling Scalable Group Policy with TrustSec Across Networks to Provide More Reliability and Determinism


Cisco TrustSec provides software-defined access control and network segmentation to help organizations enforce business-driven intent and streamline policy management across network domains. It forms the foundation of Cisco Software-Defined Access (SD-Access) by providing a policy enforcement plane based on Security Group Tag (SGT) assignments and dynamic provisioning of the security group access control list (SGACL).

Cisco TrustSec has now been enhanced by Cisco engineers with a broader, cross-domain transport option for network policies. It relies on HTTPS, a Representational State Transfer (REST) API, and the JSON data interchange format for far more reliable and scalable policy updates and segmentation, resulting in more deterministic networks. It is a superior choice over the current use of RADIUS over User Datagram Protocol (UDP), which is notorious for packet drops and retries that degrade performance and service guarantees.

Scaling Policy

Cisco SD-Access, Cisco SD-WAN, and Cisco Application Centric Infrastructure (ACI) have been integrated to provide enterprise customers with a consistent cross-domain business policy experience. This necessitated a more robust, reliable, and deterministic TrustSec infrastructure to meet the increasing scale of SGTs and SGACL policies, combined with high-performance requirements, seamless policy provisioning and updates, and assured enforcement.

With increased scale, two things are required of policy systems. 

◉ A more reliable SGACL provisioning mechanism. RADIUS/UDP transport is inefficient for large volumes of data. It often results in a higher number of round-trip retries due to dropped packets and longer transport times between devices and the Cisco Identity Services Engine (ISE) server. The approach is error-prone and verbose.

◉ Determinism for policy updates. TrustSec uses the RADIUS change of authorization (CoA) mechanism to dynamically notify devices of changes to SGACL policy and environment data (Env-Data). Devices respond with a request to ISE to pull the specified change. These are two seemingly disparate but related transaction flows with the common intent of delivering the latest policy data to the devices. In scenarios with many devices or a high volume of updates, there is a higher risk of packet loss and out-of-order delivery, and it is often challenging to correlate the success or failure of such administrative changes.

More Performant, Scalable, and Secure Transport for Policy 

The new transport option for Cisco TrustSec is based on a system of central administration and distributed policy enforcement, with Cisco DNA Center, Cisco Meraki Enterprise Cloud, or Cisco vManage used as a controller dashboard and Cisco ISE serving as the service point for network devices to source SGACL policies and Env-Data (Figure 1).  

Figure 1 shows the Cisco SD-Access deployment architecture depicting a mix of both old and newer software versions and policy transport options. 

Figure 1. Cisco SD-Access Deployment Architecture with Policy Download Options

Cisco introduced JSON-based HTTP download for policies to ensure 100% delivery with no packet drops and no retries necessary. It improves the scale, performance, and reliability of policy workflows. Using TLS is also more secure than RADIUS/UDP transport. 

The introduction of the REST API for TrustSec data download is an additional protocol option on devices used to interface with Cisco ISE. Based on the system configuration, either of the transport mechanisms can be used to download environment data (Env-Data) and SGACL policies from Cisco ISE.  
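To give a feel for what JSON-over-HTTPS policy data looks like compared with RADIUS attribute exchanges, here is a small Python sketch that pulls SGTs and SGACLs from ISE over its ERS REST API (assuming ERS is enabled and listening on the default port 9060). Note this is only an illustration of the data format; the device-to-ISE REST transport described in this post is a built-in protocol option on the devices themselves, not something you script externally.

import requests

ISE = "https://ise.example.com:9060"   # hypothetical ISE node with ERS enabled
AUTH = ("ers-admin", "password")       # ERS-enabled admin account (assumption)

with requests.Session() as s:
    s.auth = AUTH
    s.headers.update({"Accept": "application/json"})
    s.verify = False                   # lab only

    # List Security Group Tags (SGTs)
    sgts = s.get(f"{ISE}/ers/config/sgt").json()
    for res in sgts["SearchResult"]["resources"]:
        print("SGT:", res["name"])

    # List SGACLs
    sgacls = s.get(f"{ISE}/ers/config/sgacl").json()
    for res in sgacls["SearchResult"]["resources"]:
        print("SGACL:", res["name"])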

Change of authorization (CoA) remains an important server-side mechanism for notifying network devices of updates. Cisco ISE continues to use RADIUS CoA, a lightweight message that signals updates to SGACL policies and Env-Data. In scenarios with a high number of devices or a high volume of updates, ISE may experience high CPU utilization, because the flood of CoA requests triggers an equal number of CoA responses and follow-up requests from devices eager to update their policies. With SGACL and Env-Data downloads transitioned to the REST protocol, compute and transport time are reduced, which indirectly improves CoA performance.

In addition to improved reliability and deterministic policy updates, the REST transport interface has also paved the way for better platform assurance and operational visibility. 

The new policy enforcement plane available with Cisco TrustSec provides a broader, cross-domain transport option for network policies. It’s both a more reliable SGACL provisioning mechanism for larger volumes of data and a more deterministic solution for policy updates. The result is more scalable enforcement of business-driven intent and policy management across network domains.

Source: cisco.com

Tuesday, 1 February 2022

Application-centric Security Management for Nexus Dashboard Orchestrator (NDO)


Nexus Dashboard Orchestrator (NDO) users can achieve policy-driven Application-centric Security Management (ASM) with AlgoSec

AlgoSec ASM A32 is AlgoSec's latest release to feature a major technology integration, built on a well-established collaboration with Cisco. It brings this partnership to the front of the Cisco innovation cycle with support for Cisco Nexus Dashboard Orchestrator (NDO), which allows Cisco ACI, as well as legacy-style data center network management, to operate at scale in a global context, across data center and cloud regions. The AlgoSec solution with NDO brings the power of intelligent automation and software-defined security features for ACI, including planning, change management, and micro-segmentation, to global scope. There are multiple use cases, enabling application-centric operation and micro-segmentation and delivering integrated security operations workflows. AlgoSec now supports EPGs and inter-site contracts with NDO, boosting its existing ACI integration.

Let’s Change the World by Intent

Since its 2014 introduction, Cisco ACI has changed the landscape of data center networking by introducing an intent-based approach over earlier configuration-centric architecture models. This opened the way for enterprise data centers to move rapidly to meet their requirements for internal cloud deployments, new DevOps and serverless application models, and the extension of these to public clouds for hybrid operation, all within a single networking technology that uses familiar switching elements. Two new, software-defined artifacts make this possible in ACI: Endpoint Groups (EPGs) and Contracts, individual rules that define the characteristics and behavior of an allowed network connection.

ACI Is Great, NDO Is Global

That’s really where NDO comes into the picture. By now, we have an ACI-driven data center networking infrastructure, with management redundancy for the availability of applications and preserving their intent characteristics. Using an infrastructure built on EPGs and contracts, we can reach from the mobile and desktop to the datacenter and the cloud. This means our next barrier is the sharing of intent-based objects and management operations, beyond the confines of a single data center. We want to do this without clustering types, that depend on the availability risk of individual controllers, and hit other limits for availability and oversight.

Instead of labor-intensive and error-prone duplication of data center networks and security across different regions and different zones of cloud operation, NDO introduces "stretched" EPGs and inter-site contracts for application-centric, intent-based, secure traffic that is agnostic to global topologies, wherever your users and applications need to be.

With NDO capability added to the formidable shared platform of AlgoSec and Cisco ACI, region-wide and global policy operations can be executed with confidence and intelligent automation. AlgoSec makes it possible to plan operations across the Cisco NDO scope of connected fabrics in an application-centric way and unlocks ACI's micro-segmentation capabilities at that scale. This enables a shared model between networking and security teams for zero trust and defense in depth, with accelerated, global-scope, secure application changes at the speed of business demand, within minutes rather than days or weeks.

Key Use Cases

Change management — For security policy change management this means that workloads may be securely re-located from on-premises to public cloud, under a single and uniform network model and change-management framework — ensuring consistency across multiple clouds and hybrid environments.

Visibility — With an NDO-enabled ACI networking infrastructure and AlgoSec's ASM, all connectivity can be visualized at multiple levels of detail across an entire multi-vendor, multi-cloud network. This means individual security risks can be directly correlated to the assets they impact, giving you a full understanding of how security controls affect an application's availability.

Risk and Compliance — It’s possible across all the NDO connected fabrics to identify risk on-premises and through the connected ACI cloud networks, including additional cloud-provider security controls. The AlgoSec solution makes this a self-documenting system for NDO, with detailed reporting and an audit trail of network security changes, related to original business and application requests. This means that you can generate automated compliance reports, supporting a wide range of global regulations, and your own, self-tailored policies.

The Road Ahead

Cisco NDO is a major technology innovation, and AlgoSec and Cisco are delighted and enthusiastic about our early-adoption customers. Based on early feedback from our Cisco partners, needs will arise for more automation, including "zero-touch" push of policy changes, that is, committing EPG and inter-site contract changes to the orchestrator, as we currently do for ACI and APIC. Feedback will also shape the automation playbooks and workflows that are most useful in the NDO context and that we can realize with a fully committable policy from the ASM Firewall Analyzer.

Source: cisco.com

Tuesday, 5 October 2021

Using Infrastructure as Code to deploy F5 Application Delivery and Cisco ACI Service Chaining

Every data center is built to host applications and provide the required infrastructure for the applications to run, communicate with each other, be accessed by their users from anywhere, and scale on demand.

To achieve this, your data center network must be able to provide different types of connectivity to different applications. This includes east-west connectivity between application tiers, as well as north-south connectivity between users and applications. Both rely on additional application delivery Layer 4 to Layer 7 services like load balancers and web application firewalls.

Cisco ACI and F5 BIG-IP Service Insertion

Cisco ACI's powerful L4-L7 service redirection capabilities allow you to insert services and redirect traffic from source to destination anywhere in your fabric without changing any of the existing cabling. This is where you can insert the F5 BIG-IP load balancer to provide application availability, access control, and security.


This is possible using the Policy Based Redirection (PBR) capabilities of the Cisco ACI fabric by configuring a Service Graph in APIC.

But PBR policies and Service Graphs entail a series of manual configurations. This can be tedious, error-prone, and inefficient, especially when the same configuration must be repeated often. On top of that, the configuration of the BIG-IP service itself requires information from the Cisco ACI Service Graph.

Simplified Service Insertion with Cisco and F5

This is why Cisco partnered with F5, a leader in the application delivery and web application firewall space, around the Cisco ACI and F5 BIG-IP solutions to simplify the deployment of F5-powered L4-L7 services using the F5 ACI ServiceCenter app for APIC.


This integration simplifies the management of virtual server configuration on F5 BIG-IP and Service Graph configuration on Cisco ACI by providing a simple, user-friendly UI.

In this blog, we will discuss an evolution of this integration for customers who see Infrastructure as Code as the means to automatically deploy both Cisco ACI network infrastructure configuration and BIG-IP L4-L7 services for their applications, and who are looking for opportunities to progress in their IaC journey.

End-to-End Service Insertion Automation with Infrastructure as Code


As a reminder, Infrastructure as Code is a journey that you can embark on at different stages, depending on your existing automation knowledge and needs. The goal of this journey is to translate manual tasks into reusable, robust, distributable code and to apply software development techniques such as version control (git), automated testing, and CI/CD.


The first step in an Infrastructure as Code journey is to select a language or toolset to express our intent for the infrastructure as actual code. For this integration, we decided to join forces with HashiCorp, the leader in infrastructure automation and a shared partner of Cisco and F5, and chose HashiCorp Terraform as the infrastructure provisioning tool, using HCL (HashiCorp Configuration Language) to define the service configuration as our code.

F5 and Cisco both have verified HashiCorp Terraform providers, making it easy to create the needed configuration on both sides in HCL.

To further simplify automation of the numerous configuration items, Cisco and F5 have worked together on a set of Terraform modules which provide best practices defaults for most of the configuration items and allow users to override specific items of the configuration.

By providing a single workflow, all the dependencies are taken care of and the usage of the overall solution is simplified. Modules also define outputs that can be passed from one module to the next, and modules can depend on each other to represent their dependency relationships.

As part of this solution, a simple workflow with three Terraform modules has been created:


◉ The Cisco ACI Service Graph Terraform module allows the user to create and deploy a complete service graph for Policy-Based Redirection (PBR), with the required bridge domains and other necessary constructs, as documented in the Cisco ACI Policy-Based Redirect Service Graph Design white paper.

◉ The F5 BIG-IP VLAN Self IP Terraform module configures the interfaces of the BIG-IP (physical or virtual) facing the ACI fabric with the correct VLAN and self IP configuration.

◉ The F5 BIG-IP AS3 HTTP Service Terraform module configures an HTTP service using the F5 Application Services 3 Extension (AS3) to provide a load balancing function with a specific virtual server (VIP) and the recommended configuration when used in conjunction with Cisco ACI PBR.

Instantiating the modules allows the user to pass only the necessary parameters and rely on defaults for the rest of the configuration, hiding the internal complexity from the user. The following is an example of the instantiation of the different modules and their dependencies:

module "cisco-aci-service-graph" {
    source = "./modules/service-graph-lb-pbr"
    tenant              = var.aci_tenant
    vmm_provider_dn     = var.aci_vmm_provider_dn
    vmm_domain_name     = var.aci_vmm_domain_name
    vmm_controller_name = var.aci_vmm_controller_name
    vm_name             = var.aci_bigip_vm_name
    vnic                = var.aci_bigip_vnic
    device_name         = var.aci_bigip_device_name
    device_mac_address  = var.aci_bigip_provider_mac
    device_ip_address   = var.selfip_int
    provider_bd_subnets         = var.aci_provider_bd_subnets
    consumer_bd_subnets         = var.aci_consumer_bd_subnets
    provider_service_bd_subnets = var.aci_provider_service_bd_subnets
    consumer_service_bd_subnets = var.aci_consumer_service_bd_subnets
}

module "bigip_vlan_selfip" {
    source       = "./modules/vlan_selfip"
    vlan_int_tag = replace(module.cisco-aci-service-graph.internal_vlan, "vlan-", "")
    vlan_ext_tag = replace(module.cisco-aci-service-graph.external_vlan, "vlan-", "")
    selfip_int   = var.selfip_int
    selfip_ext   = var.selfip_ext
}

module "as3_http_app" {
    source      = "./modules/as3http"
    server1     = var.server1
    server2     = var.server2
    vip_address = var.vip_address
    snat        = var.snat
}

You can see that the "bigip_vlan_selfip" module uses the output of the cisco-aci-service-graph module to pass the VLAN automatically derived from the ACI VMM domain integration. This removes the need to statically define a VLAN and allows this plan to be reused over and over. You can also see that the module definitions use many variables, creating reusable pieces of code that can be instantiated multiple times with different sets of variables.

With this joint solution, deploying BIG-IP application services on an ACI network infrastructure with a Terraform workflow and applying Infrastructure as Code principles can greatly simplify, automate, optimize, and accelerate the entire application deployment lifecycle, in turn improving time to value.

To better collaborate with other members of your organization on provisioning this solution, HashiCorp Terraform Cloud can provide remote state storage, allowing your state file (the system of record for what you have provisioned) to be stored securely and remotely.

Saturday, 11 September 2021

Introducing Success Track for Data Center Network


The pace of digital transformation has accelerated. In the past year and a half, business models have changed, and enterprises need to shorten the time it takes to launch new products or services. Economic uncertainty has forced enterprises to reduce their overall spending and focus more on improving operational efficiency. As the pandemic has changed the way the world does business, organizations have been quick to pivot to digital mediums, as their survival depends on it. The results of a Gartner survey published in November 2020 highlight this rapid shift: 76% of CIOs reported increased demand for new digital products and services, and 83% expected that demand to increase further in 2021, according to the Gartner CIO Agenda 2021.

Today, more than ever, IT operations teams are being asked to manage complex IT infrastructure. This, coupled with rising volumes of data, makes it harder for IT teams to manage today's dynamic, constantly changing data center environments. Automation is clearly the need of the hour, and automation enabled by AI will play a huge role. Hyperscalers are leading the way in using AI for IT operations and are increasingly setting the trend that will see AI embedded in every component of IT. Powered by AI, hyperscalers are quickly defining the future of IT, from self-healing infrastructure to databases that can recover quickly in the event of a failure, to networks that can configure and re-configure themselves without any human intervention.

Cisco Application Centric Infrastructure (ACI) is a software-defined networking (SDN) solution designed for data centers. Cisco ACI allows network infrastructure to be defined based upon network policies and facilitates automated network provisioning – simplifying, optimizing, and accelerating the application deployment lifecycle.

To minimize the effort of managing your data center networks, Cisco has created Success Track for Data Center Network, a new innovative service offering. We want to help you simplify and remove roadblocks. Success Tracks provides coaching and insights at every step of your lifecycle journey.


Success Track for Data Center Network provides a one stop digital platform called CX Cloud.

The CX Cloud is your digital connection to access Cisco specialists and customized resources to help you simplify solution adoption and resolve issues faster. This is a new way of engaging with us, bringing together connected services—expertise, insights, learning, and support all in one place with a personalized, use-case driven solutions approach.

CX Cloud gives you contextual guidance for three data center networking (ACI) use cases: network provisioning and operations, network automation and programmability, and distributed networking.

The number one issue that we have heard about is that most next generation data centers (based on SDN) are complex and difficult to deploy. All three Success Track use cases will help simplify your network management and operations so you can serve the business more efficiently.

Take network provisioning and operations, for example. A box-by-box, element-by-element management approach does not scale, nor does it provide the confidence and consistency required in today's fast-moving IT organizations.

By using the embedded tools built into the Application Policy Infrastructure Controller (or APIC) we demonstrate how to get a single point of automation, orchestration, and troubleshooting that will simplify data center network management and operations for Days 0, 1, and beyond.

To achieve these benefits, it is critical to build a strong foundation on a simple management infrastructure such as ACI's single pane of glass.

Success Track for Data Center Network is a suite of use case guided service solutions designed to help you realize the full value of your ACI deployment, faster. This holistic service digitally connects you through CX Cloud to the right expertise, learning and insights at the right time to accelerate success.

Get access to experts, on-demand learning, Cisco community, and product documentation on our CX Cloud Portal.

Customers can simplify data center network deployment and operations through access to experts, embedded tools, and a unified digital platform. This results in greater efficiencies, cost savings and a reduction of errors.

If you are looking for a consistent onboarding of network infrastructure, expedited workload provisioning to network fabric, and improved monitoring and insights, Cisco Success Track for Data Center Network will help you get there.

Saturday, 14 August 2021

How To Simplify Cisco ACI Management with Smartsheet


Have you ever gotten lost in the APIC GUI while trying to configure a feature? Or maybe you are tired of going over the same steps again and again when changing an ACI filter or a contract? Or maybe you have always asked yourself how you can integrate APIC with other systems, such as an IT ticketing or monitoring system, to improve workflows and make your ACI fabric management life easier. Whatever the case may be, if you are interested in finding out how to create your own GUI for ACI, streamline and simplify APIC GUI configuration steps using smartsheets, and see how extensible and programmable an ACI fabric is, then read on.

Innovations that came with ACI

I have always been a fan of Cisco ACI (Application Centric Infrastructure). Coming from a routing and switching background, my mind was blown when I started learning about ACI. Cisco's SDN implementation for data centers took almost everything I thought I knew about networking and threw it out the window. I was in awe of the innovations that came with ACI: OpFlex, declarative control, Endpoint Groups (EPGs), application policies, fabric auto-discovery, and so many more.

The holy grail of networking

It felt to me like a natural evolution of classical networking, from VLANs and mapped Layer 3 subnets into bridge domains, subnets, and VRFs. It took a bit of time to wrap my head around these concepts and around building underlays and overlays, but once you understand how all these technologies come together, it almost feels like magic. The holy grail of networking is at this point within reach: centrally defining a set of generic rules and policies and letting the network do all the magic, enforcing those policies throughout the fabric at all times, no matter where and how clients and endpoints connect to the fabric. This is the premise that ACI was built on.

Automating common ACI management activities

So you can imagine that when my colleague Jason Davis (@snmpguy) came up with a proposal to migrate several ACI use cases from Action Orchestrator to full-blown Python code, I was up for the challenge. Jason and several AO folks have worked closely with Cisco customers to automate and simplify common ACI management workflows. We decided to focus on eight use cases for the first release of our application:

◉ Deploy an application

◉ Create static path bindings

◉ Configure filters

◉ Configure contracts

◉ Associate EPGs to contracts

◉ Configure policy groups

◉ Configure switch and interface profiles

◉ Associate interfaces to policy groups

Using the online smartsheet REST API

You might recognize these as being common ACI fabric management activities that a data center administrator would perform day in and day out. As the main user interface for gathering data we decided to use online smartsheets. Similar to ACI APIC, the online smartsheet platform provides an extensive REST API interface that is just ripe for integrations.

The plan was pretty straightforward (a minimal sketch of the back-end piece follows the list):

1. Use smartsheets with a bit of JavaScript and CSS as the front-end components of our application

2. Develop a Python back end that would listen for smartsheet webhooks triggered whenever there are saved Smartsheet changes

3. Process this input data and, based on it, create and trigger Ansible playbooks that perform the configuration changes corresponding to each use case

4. Provide a pass/fail status back to the user.
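Here is a minimal sketch of what steps 2 through 4 can look like: a small Flask app that answers the Smartsheet webhook verification handshake and then shells out to an Ansible playbook when a saved change arrives. The playbook name, extra-vars, and status handling are hypothetical placeholders rather than the exact code we wrote.

import subprocess

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/smartsheet/webhook", methods=["POST"])
def smartsheet_webhook():
    # Webhook verification handshake: echo the challenge back to Smartsheet
    challenge = request.headers.get("Smartsheet-Hook-Challenge")
    if challenge:
        return jsonify({"smartsheetHookResponse": challenge})

    # A saved sheet change arrived: hand the event off to a provisioning playbook
    event = request.get_json(silent=True) or {}
    result = subprocess.run(
        ["ansible-playbook", "aci_provision.yml",                      # hypothetical playbook
         "--extra-vars", f"sheet_id={event.get('scopeObjectId', '')}"],
        capture_output=True, text=True,
    )

    # Pass/fail status goes back to the caller (and could be written to a status column)
    return jsonify({"status": "ok" if result.returncode == 0 else "failed"})

if __name__ == "__main__":
    app.run(port=8080)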

The “ACI Provisioning Start Point” screen allows the ACI administrator to select the
Site or APIC controller that needs to be configured.

Once the APIC controller is selected, a drop down menu displays a list of all the use
cases supported. Select to which tenant the configuration changes will be applied,
and fill out the ACI configuration information in the smartsheet.

Selecting the checkbox for Ready to Deploy, and saving the smartsheet, will trigger a webhook event that will be intercepted by the backend code and the Ansible configuration playbook will be run.

A big advantage of using Smartsheet compared to the ACI APIC GUI is that several configuration changes can be performed in parallel. In this example, several static path bindings are created at the same time.

Find the details on DevNet Automation Exchange



You can also find hundreds of similar use case examples in the DevNet Automation Exchange covering all Cisco technologies and verticals and all difficulty levels.

Drop me a message in the comments section if you have any questions or suggestions about this automation exchange use case.

Source: cisco.com

Tuesday, 22 June 2021

Power of Cloud Application Centric Infrastructure (Cloud ACI) in Service Chaining

It is a reality that most enterprise customers are moving from a private data center model to a hybrid multi-cloud model. They are either moving some of their existing applications or developing newer applications in a cloud-native way to deploy in the public clouds. Customers are wary of sticking to a single public cloud provider for fear of vendor lock-in. Hence, we are seeing a very high percentage of customers adopting a multi-cloud strategy; according to the Flexera 2021 State of the Cloud report, that number stands at 92%. While a multi-cloud model gives customers flexibility, better disaster recovery, and help with compliance, it also comes with a number of challenges. Customers have to learn not just one, but all of the different public cloud nuances and implementations.



Navigating the different islands of public cloud


When customers adopt a multi-cloud strategy, they often begin with one cloud and then expand to others. Though most public clouds were built with the overarching goal of providing instant access to resources at lower cost, their individual implementations and corresponding cloud-native constructs differ. Hence, automation artifacts built for one public cloud provider cannot be re-used for other clouds. As we watch our customers undertake the multi-cloud journey, it is increasingly clear that having an automated way to configure the constructs of the various clouds is a huge benefit.

Cisco provides this solution to our customers via Cloud ACI. Cisco Application Centric Infrastructure (ACI) is Cisco's premier Software Defined Networking (SDN) solution for the data center. The ACI solution now caters not only to the on-premises data center but to the public cloud as well, offering customers a seamless experience for orchestrating and managing consistent policies for their workloads irrespective of where the workloads reside. Cloud ACI provides the needed abstraction across multiple public clouds, with a single policy model for customers to define their intent. The Cisco ACI solution takes care of automating the user intent into the required cloud-native constructs of each cloud.

The Cloud ACI solution achieves this by deploying the Cisco Cloud Application Policy Infrastructure Controller (Cloud APIC) in the cloud site, such as Amazon AWS or Microsoft Azure. The Cloud APIC is registered with the Cisco Nexus Dashboard Orchestrator (formerly Multi-Site Orchestrator), the master controller for managing different ACI sites. The user defines the policies on the Nexus Dashboard Orchestrator, which pushes them down to the sites where they need to be applied. The Cloud ACI controller at each site takes care of configuring the right networking and security constructs for that cloud.

Let us take the example of an enterprise that plans to deploy workloads in both AWS and Azure. Resources in AWS are deployed within a VPC, whereas Azure requires a Resource Group. AWS provides native load balancing via Elastic Load Balancers, whereas in Azure you would use an Application Gateway for L7 load balancing and a Network Load Balancer for L4 traffic. The native cloud constructs are different, and end users have to learn both the AWS and the Azure languages. If the enterprise uses Cloud ACI, configuring a VRF (virtual routing and forwarding context) from the Nexus Dashboard Orchestrator translates to creating a VPC in the AWS site and a Virtual Network (VNet) in the Azure site. It's that simple!

Load Balancers and More!


Cloud ACI can be particularly powerful when automating your applications behind native load balancing services. Both large web-scale applications and smaller enterprise applications are typically deployed behind a load balancer for high availability and elasticity. Hence, all major public cloud players offer load balancing as a native service. Load balancers have a frontend, which is the IP and port used to reach the application, and a backend with the servers serving that application. Depending on the load, the servers hosting the application can be scaled up or down elastically.

Cloud ACI provides a neat way to automate the creation of native load balancers as well as configure and manage their lifecycle. The solution provides an innovative way to add backend servers as targets of the load balancers dynamically. This is done by tagging the servers and creating a service graph in ACI. A service graph represents the flow of data between consumers and providers via one or more service devices. Cloud ACI creates the load balancers and configures the frontend port based on user configuration. Once a user specifies, via a contract, the desired provider endpoint group (EPG), the solution automatically adds the servers that belong to the provider EPG to the backend of the load balancer.

This is pretty powerful: as VMs scale up and down, there is no need to manually add or remove these servers from the load balancer backend. Cloud APIC auto-detects the servers and classifies them into the right EPG. The Cloud APIC then dynamically adds or removes these servers from the backend of the load balancer.
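To appreciate what is being automated away, consider the kind of per-cloud scripting you would otherwise maintain yourself. The boto3 sketch below (with a hypothetical tag key and target group ARN) shows the AWS-only version of the chore: find the instances carrying a tag and register them behind a native load balancer. This is not how Cloud APIC is implemented; it simply illustrates the tag-driven classification and backend management that the solution performs for you, per cloud, continuously.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Hypothetical ALB target group ARN
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123"

# 1. Classify: every running instance tagged epg=web belongs to the "web" group
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:epg", "Values": ["web"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]
instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]

# 2. Attach: register those instances as backend targets of the ALB target group
if instance_ids:
    elbv2.register_targets(
        TargetGroupArn=TARGET_GROUP_ARN,
        Targets=[{"Id": iid} for iid in instance_ids],
    )
print(f"Registered {len(instance_ids)} instance(s) behind the load balancer")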

Unleash the power of service chaining


For web applications reachable over the internet, it is paramount to build in additional security to protect the application and the backend servers from attacks. In such cases, it is common for customers to insert a firewall before the traffic hits the load balancer. The firewall could be Cisco's FTD or a third-party firewall from vendors such as Check Point, Fortinet, or Palo Alto (VM-Series Next-Generation Firewall), available in the public cloud marketplace. Cloud ACI provides the perfect automation for this use case by letting users build a multi-node service graph. To provide high availability for the firewall, a load balancer may be placed in front of the firewall, as shown in the picture below.


Cloud ACI can automate the entire flow by managing the lifecycle of both the frontend and the backend load balancers. It automates the creation of the load balancers, configures the frontend port/protocol, and adds the right backend targets. As defined by the service chain, it adds the firewall instances as the targets of the frontend load balancer and the application servers as the targets of the backend application load balancer (ALB). Cloud APIC also configures the security groups at each layer with the right set of rules based on the contract. This ensures that no unintended traffic flows between the user and the backend application servers. Can it get better than this? The only configuration required from Cloud ACI is:

◉ creation of the logical devices for the load balancers and firewall

◉ creation of a service graph specifying the location of the service devices in the chain

◉ configuring a contract between the consumer and the backend application server endpoint group

As you can see, this is extremely simple, saves time, and reduces configuration complexity for the user. What's more, the network admin can be at peace knowing that any dynamic scaling of the backend servers by the application or server admin will be handled by Cloud APIC.

Source: cisco.com

Thursday, 8 April 2021

Designing Fault Tolerant Data Centers of the Future


System crashes. Outages. Downtime.

These words send chills down the spines of network administrators. When business apps go down, business leaders are not happy. And the cost can be significant.

Recent IDC survey data shows that enterprises experience two cloud service outages per year. IDC research conservatively puts the average cost of downtime for enterprises at $250,000 per hour, which means just four hours of downtime can cost an enterprise $1 million.


To respond to failures as quickly as possible, network administrators need a highly scalable, fault tolerant architecture that is simple to manage and troubleshoot.

What’s Required for the Always On Enterprise

Let’s examine some of the key technical capabilities required to meet the “always-on” demand that today’s businesses face. There is a need for:

1. Granular change control mechanisms that facilitate flexible and localized changes, driven by availability models, so that the blast radius of a change is contained by design and intent.

2. Always-on availability to help enable seamless handling and disaster recovery, with failover of infrastructure from one data center to another, or from one data center to a cloud environment.

3. Operational simplicity at scale for connectivity, segmentation, and visibility from a single pane of glass, delivered in a cloud operational model, across distributed environments—including data center, edge, and cloud.

4. Compliance and governance that correlate visibility and control across different domains and provide consistent end-to-end assurance.

5. Policy-driven automation that improves network administrators' agility and provides control to manage a large-scale environment through a programmable infrastructure.

Typical Network Architecture Design: The Horizontal Approach

With businesses required to be "always on" and closer to users for performance, there is a need to deploy applications in a very distributed fashion. To accomplish this, network architects create distributed mechanisms across multiple data centers, on-premises and in the cloud and across geographic regions, which helps mitigate the impact of potential failures. This horizontal approach works well by delivering physical-layer redundancy built on autonomous systems that rely on a do-it-yourself approach for the different layers of the architecture.

However, this design inherently imposes an over-provisioning of the infrastructure, along with an inability to express intent and a lack of coordinated visibility through a single pane of glass.

Some on-premises providers also have marginal fault isolation capabilities and limited-to-no capabilities or solutions for effectively managing multiple data centers.

For example, consider what happens when one data center, or part of a data center, goes down using this horizontal design approach. It is typical to fix this kind of issue in place, which increases the time it takes to restore application availability or redundancy.

This is not an ideal situation in today’s fast-paced, work-from-anywhere world that demands resiliency and zero downtime.

The Hierarchical Approach: A Better Way to Scale and Isolate

Today’s enterprises rely on software-defined networking and flexible paradigms that support business agility and resiliency. But we live in an imperfect world full of unpredictable events. Is the public cloud down? Do you have a switch failure? Spine switch failure? Or even worse, a whole cluster failure?

Now, imagine a fault-tolerant data center that automatically restores systems after a failure. This may sound like fiction to you but with the right architecture it can be your reality today.

A fault-tolerant data center architecture can survive and provide redundancy across your data center landscapes. In other words, it provides the ultimate in business resiliency, making sure applications are always on, regardless of failure.

The architecture is designed with a multi-level, hierarchical controller cluster that delivers scalability, meets the availability needs of each fault domain, and creates intent-driven policies. This architecture involves several key components:

1. A multi-site orchestrator that pushes high-level policy to the local data center controller (also referred to as a domain controller) and delivers the separation of fault domains and the scale businesses require for global governance, with resiliency and federation of the data center network.

2. A data center controller/domain controller that operates both on-premises and in the cloud and creates intent-based policies, optimized for local domain requirements.

3. Physical switches with leaf-spine topology for deterministic performance and built-in availability.

4. SmartNIC and Virtual Switches that extend network connectivity and segmentation to the servers, further delivering an intent-driven, high-performing architecture that is closer to the workload.

Nexus Dashboard Orchestrator


Designing Hierarchical Clusters

Using a design comprised of multiple data centers, network operations teams can provision and test policy and validate its impact on one data center before propagating it across their data centers. This helps to mitigate the propagation of failures and unnecessary impact on business applications. Or, as we like to say, "keep the blast zone aligned with your application design."

Using hierarchical clusters provides data center level redundancy. Cisco Application Centric Infrastructure (ACI) and the Cisco Nexus Dashboard Orchestrator enable IT to scale up to hundreds of data centers that are located on-premises or deployed across public clouds.

To support greater scale and resilience, most modern controllers use a concept known as data sharding for data stored in the controller. The basic theory behind sharding is that the data repository is split into several database units known as shards. Data stored in a shard is replicated three or more times, with each replica assigned to a separate compute instance.
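As a toy illustration of the idea (not the actual controller code), the following Python snippet maps each record to a shard and places three replicas of that shard on distinct nodes, so the loss of any single node never takes a shard's only copy with it.

from hashlib import sha256

NODES = ["node-1", "node-2", "node-3", "node-4", "node-5"]
SHARDS = 32
REPLICAS = 3

def shard_for(key: str) -> int:
    """Deterministically map a record key to one of the shards."""
    return int(sha256(key.encode()).hexdigest(), 16) % SHARDS

def replicas_for(shard: int) -> list:
    """Place each shard's replicas on three distinct nodes."""
    return [NODES[(shard + i) % len(NODES)] for i in range(REPLICAS)]

record = "tenant-Prod/policy-42"
shard = shard_for(record)
print(f"{record} -> shard {shard} is replicated on nodes {replicas_for(shard)}")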

Typically, network teams tend to focus on hardware redundancy to prevent:

1. Interface failures: Covered using redundant switches and dual attach of servers;

2. Spine switch failure: Covered using ECMP and/or multiple spines;

3. Supervisor, power supply, fan failures: Every component in the system has redundancy built into most of the systems; and

4. Controller cluster failure: Sharded and replicated, thereby covering multiple cluster node failure.

Network operations teams are used to designing multiple redundancies into the hardware infrastructure. But with software-defined everything, we need to make sure that policy and configuration objects are also designed in redundant ways.


BGP Policy

The right way to define intent is to split the network policy—either via Orchestrator or API—in a way that ensures changes are localized to a fault domain as shown by option A (POD level fault domain) or option B (Node level fault domain). Cisco’s Nexus Dashboard Orchestrator enables pre-change validation to show the impact of the change to the network operator before any change is committed.

In case of failure due to configuration changes, the Cisco Nexus Dashboard Orchestrator can roll back the changes and quickly restore the state of the data center to the previously known good state. Designing redundancy at every hardware and software layer enables NetOps to manage failures in a timely manner.

Source: cisco.com

Thursday, 4 March 2021

Enable Consistent Application Services for Containers


Kubernetes is all about abstracting away complexity. As Kubernetes continues to evolve, it becomes more intelligent and even more powerful at helping enterprises manage their data centers, not just their clouds. While enterprises deal with the challenges of running different types of modern applications (AI/ML, big data, and analytics) to process their data, they also face the challenge of maintaining top-level network and security policies and gaining better control of workloads to ensure operational and functional consistency. This is where Cisco ACI and F5 Container Ingress Services come into the picture.

F5 Container Ingress Services (CIS) and Cisco ACI

Cisco ACI offers customers an integrated network fabric for Kubernetes. Recently, F5 and Cisco joined forces by integrating F5 CIS with Cisco ACI to bring L4-L7 services into the Kubernetes environment, further simplifying the user experience of deploying, scaling, and managing containerized applications. This integration specifically enables:

◉ Unified networking: Containers, VMs, and bare metal

◉ Secure multi-tenancy and seamless integration of Kubernetes network policies and ACI policies

◉ A single point of automation with enhanced visibility for ACI and BIG-IP

◉ F5 application services natively integrated into container and Platform as a Service (PaaS) environments

One of the key benefits of this implementation is ACI encapsulation normalization. The ACI fabric, acting as the normalizer for the encapsulation, allows you to merge different network technologies or encapsulations, be it VLAN or VXLAN, into a single policy model. Through a simple VLAN connection to ACI, and with no need for an additional gateway, BIG-IP can communicate with any service anywhere.



Solution Deployment


To integrate F5 CIS with Cisco ACI for the Kubernetes environment, you perform a series of tasks. Some you perform in the network to set up the Cisco Application Policy Infrastructure Controller (APIC); others you perform on the Kubernetes server(s). Rather than getting down to the nitty-gritty, I will just highlight the steps to deploy the joint solution.

Pre-requisites

The BIG-IP CIS and Cisco ACI joint solution deployment assumes that you have the following in place:

◉ A working Cisco ACI installation

◉ ACI must be integrated with vCenter VDS

◉ Fabric tenant pre-provisioned with the required VRFs/EPGs/L3Outs

◉ BIG-IP already running for non-container workload

Deploying Kubernetes Clusters to ACI Fabrics

The following steps will walk you through a complete cluster configuration:

Step 1. Run the ACI provisioning tool to prepare Cisco ACI to work with Kubernetes

Cisco provides the acc-provision tool to provision the fabric for the Kubernetes VMM domain and to generate a .yaml file that Kubernetes uses to deploy the required Cisco Application Centric Infrastructure (ACI) container components. If needed, download the provisioning tool first.
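
If you do not already have the tool, it is typically distributed as a Python package; the package name and source below are assumptions to verify against your ACI CNI release notes:

pip3 install acc-provision   # package name assumed; confirm against your ACI CNI release
acc-provision --help         # quick sanity check that the tool is available on your PATH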

Next, use the provisioning tool to generate a sample configuration file that you can edit:

$ acc-provision --sample > aci-containers-config.yaml

You can now edit the sample configuration file to provide information about your network. With the configuration file completed, run the following command to provision the Cisco ACI fabric:

acc-provision -c aci-containers-config.yaml -o aci-containers.yaml -f kubernetes-<version> -a -u [apic username] -p [apic password]

Step 2. Prepare the ACI CNI plugin configuration file

The above command also generates the file aci-containers.yaml that you use after installing Kubernetes.

Step 3. Prepare the Kubernetes nodes – Set up networking for the nodes to support the Kubernetes installation.

With ACI provisioned, you can start preparing the networking on the Kubernetes nodes. This includes steps such as configuring the VM's interface toward the ACI fabric, configuring a static route for the multicast subnet, and configuring the DHCP client to work with ACI. A rough sketch of what this can look like on a Linux node follows.
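
As an illustration only (the interface name and VLAN ID are hypothetical placeholders; use the values from your aci-containers-config.yaml):

ip link add link ens192 name ens192.4093 type vlan id 4093   # sub-interface for the ACI infra VLAN (example VLAN ID)
ip link set ens192.4093 up
dhclient ens192.4093                                         # obtain the OpFlex/infra address via DHCP from the fabric
ip route add 224.0.0.0/4 dev ens192.4093                     # static route for the multicast subnet

Your distribution's network configuration files (netplan, NetworkManager, or ifcfg scripts) are the usual place to make these settings persistent.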

Step 4. Install the Kubernetes cluster

After you provision Cisco ACI and prepare the Kubernetes nodes, you can install Kubernetes and the ACI containers. You can use any installation method appropriate to your environment.
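
For instance, if you happen to use kubeadm (purely illustrative; the pod CIDR is a placeholder that should line up with the pod subnet in your aci-containers-config.yaml):

kubeadm init --pod-network-cidr=10.2.0.0/16   # example pod subnet; align it with your ACI CNI configuration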

Step 5. Deploy the Cisco ACI CNI plugin

When the Kubernetes cluster is up and running, you can copy the previously generated CNI configuration to the master node and install the CNI plugin using the following command:

kubectl apply -f aci-containers.yaml

The command installs the following pods and related objects (a quick verification check follows the list):

◉ ACI Containers Host Agent and OpFlex agent in a DaemonSet called aci-containers-host

◉ Open vSwitch in a DaemonSet called aci-containers-openvswitch

◉ ACI Containers Controller in a deployment called aci-containers-controller

◉ Other required configurations, including service accounts, roles, and security context
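
As a quick check (the namespace is an assumption; depending on the ACI CNI release the components may land in kube-system or a dedicated namespace such as aci-containers-system), confirm that the pods are running:

kubectl get pods -n kube-system -o wide | grep aci-containers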


For the authoritative word on this specific implementation, refer to Cisco's documented workflow for integrating Kubernetes into Cisco ACI for the latest and greatest.

After you have performed the previous steps, you can verify the integration in the Cisco APIC GUI. The integration creates a tenant, three EPGs, and a VMM domain. Each tenant will have visibility of all the Kubernetes pods.


Install the BIG-IP Controller


If you aren't familiar with it, the F5 BIG-IP Controller (k8s-bigip-ctlr), also known as Container Ingress Services, is a Kubernetes-native service that provides the glue between container services and BIG-IP. It watches for changes in the container environment and communicates them to the BIG-IP-delivered application services, which in turn keep up with those changes and enable the enforcement of security policies.

Once you have a running Kubernetes cluster deployed to the ACI fabric, you can follow the F5 instructions to install the BIG-IP Controller.
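
As a minimal sketch, assuming you create a credentials secret for the BIG-IP and apply a controller deployment manifest (the secret name, namespace, and manifest filename below are hypothetical):

kubectl create secret generic f5-bigip-ctlr-login -n kube-system --from-literal=username=admin --from-literal=password=<your-bigip-password>
kubectl apply -f f5-cis-deployment.yaml   # hypothetical manifest; it carries the controller arguments such as the BIG-IP URL and partition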

Use the kubectl get command to verify that the k8s-bigip-ctlr Pod launched successfully.
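
For example (assuming the controller was deployed into kube-system):

kubectl get pods -n kube-system | grep k8s-bigip-ctlr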


BIG-IP as a north-south load balancer for External Services


Kubernetes does not handle the provisioning of load balancing for services that are exposed externally and need to be load balanced; the load-balancing network function is expected to be implemented separately. For these services, Cisco ACI takes advantage of the symmetric policy-based redirect (PBR) feature available on Cisco Nexus 9300-EX and FX leaf switches in ACI mode.

This is where BIG-IP Container Ingress Services (CIS) comes into the picture as the north-south load balancer. On ingress, traffic to an externally exposed service is redirected by PBR to the BIG-IP for that particular service.


If a Kubernetes cluster contains more than one pod for a particular service, BIG-IP will load balance the traffic across all the pods for that service. In addition, each new pod is added to the BIG-IP pool dynamically, as the brief example below illustrates.
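
A generic way to observe this behavior (the deployment name myapp is hypothetical): scale the application and watch the endpoints that CIS reflects into the BIG-IP pool:

kubectl scale deployment myapp --replicas=4   # add pods behind the service
kubectl get endpoints myapp                   # these endpoints are what CIS publishes to BIG-IP as pool members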
