Friday, 8 June 2018

Business Outcomes are Driving the Journey from Finger Defined Networking (FDN) to Software Defined Networking (SDN)


The industry is going through an exponential surge in bandwidth consumption, with high volumes of new devices and subscribers coming online every day. It is fair to say that service providers' operations teams will struggle to keep adding many more devices every year in their current operating environments. The proliferation of 5G and the Internet of Things (IoT) will lead to new business opportunities, but delivering network performance with broader connectivity will depend on Software Defined Networking (SDN). Cisco's industry-leading Network Services Orchestrator (NSO), Wide Area Network (WAN) Automation Engine (WAE) and Segment Routing XR Traffic Controller (XTC) are the basic building blocks of our SDN solution within the Cisco Crosswork Automation framework.

Operational benefits of converting from FDN to SDN:



Automation of network functions, and the speed it brings, is needed to meet the diverse needs of customers with a high level of quality of service.

Network automation: Cisco’s Automation framework helps automate workflows, services and applications, increasing the efficiency of network resources and maximising path optimisation. Management and orchestration of the network and its services are centralised into an extensible orchestration platform by automating the provisioning and configuration of the entire infrastructure and network services.
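As a purely illustrative sketch of what centralised provisioning amounts to (the template, service model and device data below are hypothetical, not NSO's actual API), one service definition can be expanded into consistent per-device configuration:

```python
# Purely illustrative: a toy "service model" rendered into per-device
# configuration, in the spirit of model-driven orchestration.

L3VPN_TEMPLATE = """\
vrf definition {vrf}
 rd {rd}
interface {interface}
 vrf forwarding {vrf}
 ip address {ip} {mask}
"""

def render_service(service, endpoints):
    """Expand a single service intent into a config snippet per device."""
    return {
        ep["device"]: L3VPN_TEMPLATE.format(
            vrf=service["vrf"], rd=service["rd"],
            interface=ep["interface"], ip=ep["ip"], mask=ep["mask"],
        )
        for ep in endpoints
    }

configs = render_service(
    {"vrf": "CUST-A", "rd": "65000:100"},
    [
        {"device": "pe1", "interface": "GigabitEthernet0/0/1",
         "ip": "10.0.0.1", "mask": "255.255.255.252"},
        {"device": "pe2", "interface": "GigabitEthernet0/0/2",
         "ip": "10.0.0.5", "mask": "255.255.255.252"},
    ],
)
```

Because every device's configuration is derived from the same service definition, a change to the service is applied everywhere at once rather than device by device.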

Speed and agility: In a rapidly changing network environment, IT policies and resource allocations are evolving all the time. In addition, deployment of new applications and business services has to be fast. With Cisco’s Software-Defined WAN (SD-WAN), networks are managed centrally and changes are rolled out across the enterprise in real time, responding speedily to new business challenges while using less bandwidth. To complement this, Cisco’s NSO makes it easy to orchestrate application-based service chaining, accelerating delivery.

Orchestrated Assurance: The augmented intelligence in the Cisco Network Services Orchestrator solution automatically tests deployed services and proactively monitors service quality from the end user's point of view, providing quality assurance. Service providers can validate SLAs and resolve issues faster, bridging the gap between service fulfillment and assurance.

Business Outcomes from SDN



Our modular Network Automation framework enables network optimisation and helps deliver use cases that reduce both capex and opex. According to Cisco’s analysis on automation, typical results include up to 70 percent improvement in operational efficiency and up to 30 percent revenue uplift. Other business outcomes from SDN include:

Scalability: Deploying large numbers of network elements, integrating the network, activating services, on-boarding millions of subscribers or IoT devices, and managing network operations and service uptime can all be achieved with mass-scale automation. Cisco’s Automation Framework delivers complete lifecycle management for all the building blocks of the network, minimising manual effort and errors, and enabling optimal traffic flows through network path optimisation.

Reduced manual errors: Human-driven network changes are error-prone, time consuming, and lack comprehensive validation. Automating the provisioning and configuration of the entire infrastructure and its services greatly reduces these risks, cutting opex overhead and the time technicians spend on manual work, while avoiding embarrassing network or service outages and unpleasant customer experiences.

Agility: Augmented intelligence residing in SDN, with closed-loop automation, enhances network responsiveness. Application deployment can take minutes on any platform without compromising user experience, giving service providers the flexibility to offer network-on-demand through self-service portals. Network services are delivered faster, and the network can quickly and proactively resolve issues as they arise to ensure customer quality of service, the result of auto-remediation and self-healing with big data analytics and augmented intelligence.

Here are some of the use cases delivered by Cisco’s Transport SDN and Automation framework:

1. Orchestrated Network Optimisation
2. Seamless Network Optimisation (Bandwidth Optimisation)
3. Bandwidth on Demand
4. Operating System Upgrades
5. Device Port Turn-up
6. Zero Touch Provisioning
7. Device and Service Migration
8. Metro Ethernet Services


The result of network automation is an enhanced customer experience and faster service delivery, with increased business realisation and productivity.

Wednesday, 6 June 2018

Microservices Deployments with Cisco Container Platform

Technological developments in the age of Industry 4.0 are accelerating some business sectors at a head-spinning pace. Innovation is fueling the drive for greater profitability. One way that tech managers are handling these changes is through the use of microservices, enabled by containers. And as usual, Cisco is taking advantage of the latest technologies.

From Cost Center to Profit Center


In this new world, IT departments are being asked to evolve from cost centers to profit centers. However, virtualization and cloud computing are not enough. New services developed in the traditional way often take too long to adapt to existing infrastructures.

Because of such short life cycles, IT professionals need the tools to implement these technologies almost immediately. Sometimes one company may have many cloud providers in a multicloud environment. Containers give IT managers the control they were used to in the data center.

Microservices and Containers


But what if you could break up these entangled IT resources into smaller pieces, then make them work independently on any existing platform? Developers find this combination of microservices and containers offers much greater flexibility and scalability. Containers offer significant advantages over mere virtualization: they supercharge today’s state-of-the-art hyperconverged platforms, and they are cost-effective.

A remaining challenge is to get companies to use containers. The adoption of a new technology often depends on how easy it is to deploy. A key player in container orchestration is Kubernetes, but getting Kubernetes up and running can be a major task. You can do it the hard way using this tutorial from Kelsey Hightower, or you can take the easy route with Google Kubernetes Engine (GKE).

Cisco Container Platform


Another easy-to-use solution is the Cisco Container Platform (CCP), which takes advantage of the company’s robust hardware platforms and software orchestration capabilities. CCP uses reliable Cisco equipment that enables users to deploy Kubernetes, with options for adding cloud security, audit tools, and connectivity to hybrid clouds. Notice the growing popularity of the Kubernetes platform in the graph below:

[Graph: growing popularity of the Kubernetes platform]

Use Cases


Space does not permit the inclusion of all the potential use cases of Cisco Container Platform and its accompanying software solution. Here are just a few examples we would like to highlight:

#1: Kubernetes in your Data Center

For agility and scale, nothing beats native Kubernetes. Developers can easily deploy and run container applications without all the puzzle pieces required in traditional deployments. This means a new app can be up and running in minutes rather than days or weeks. Just create one or more Kubernetes clusters in Cisco Container Platform using the graphical user interface. If more capacity is needed for special purposes, simply add new nodes. CCP supports app lifecycle management with Kubernetes clusters and allows for continuous monitoring and logging.
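To make the declarative model behind "deploy in minutes" concrete, here is a minimal sketch of a Kubernetes Deployment manifest built as plain data. The app name and image are hypothetical; in practice this would be applied to a CCP cluster with kubectl or a Kubernetes client library rather than printed:

```python
import json

def deployment(name, image, replicas):
    """Build a minimal Kubernetes Deployment manifest as plain data."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

# Hypothetical app: three replicas of a stock nginx image.
manifest = deployment("demo-web", "nginx:1.15", replicas=3)
print(json.dumps(manifest, indent=2))
```

Scaling the app is then a matter of changing `replicas` and re-applying the manifest; the cluster converges to the declared state on its own.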

#2: Multi-tier App Deployment Using Jenkins on Kubernetes

Developers are often frustrated because of the time it takes to get their applications into production using traditional methods. But these days it’s critical to get releases out fast. Using open-source solutions, Cisco Container Platform is able to create the continuous integration/continuous delivery (CI/CD) pipeline that developers are looking for. CCP takes advantage of Jenkins, an open-source automation server that runs on a Kubernetes engine.

BayInfotech (BIT) works closely with customers to implement these CI/CD integrations on the Cisco Container Platform. While it may seem complicated, once the infrastructure is set up and running, developers find it easy to create and deploy new code into the system.

#3: Hybrid Cloud with Kubernetes

The Cisco Container Platform makes it easier for customers to deploy and manage container-based applications across hybrid cloud environments. Currently, a hybrid cloud environment can be achieved between HyperFlex as the on-premises data center and GKE as the public cloud.

#4: Persistent Data with Persistent Volumes

Containers are not meant to retain data indefinitely; on deletion, eviction or node failure, all container data may be lost. Retaining data across these events involves the use of persistent volumes and persistent volume claims. When a container crashes for any reason, application data is retained on the persistent volume, and customers can reuse the persistent volumes to relaunch the application deployment without ever losing application data.
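As an illustrative sketch of how the pieces fit together (names, sizes and paths below are hypothetical), a pod references a PersistentVolumeClaim so its data lives outside the container filesystem:

```python
# Illustrative manifests as plain data: a PersistentVolumeClaim, plus the
# volume and mount a pod would use to keep data outside the container.

pvc = {
    "apiVersion": "v1",
    "kind": "PersistentVolumeClaim",
    "metadata": {"name": "app-data"},
    "spec": {
        "accessModes": ["ReadWriteOnce"],
        "resources": {"requests": {"storage": "1Gi"}},
    },
}

# In the pod spec, the volume references the claim by name...
pod_volume = {"name": "data",
              "persistentVolumeClaim": {"claimName": "app-data"}}
# ...and the container mounts it; writes under /var/lib/app survive a
# container crash and relaunch because they live on the claimed volume.
container_mount = {"name": "data", "mountPath": "/var/lib/app"}
```

The claim outlives any individual container, which is what lets a relaunched deployment pick up exactly where the crashed one left off.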

Sunday, 3 June 2018

Managing a DAA Hub with Analog and Digital Nodes in a Single Context

The building blocks for a distributed access architecture (DAA) are shipping from Cisco. More than 60 customers in 25 countries spanning 4 continents have received key DAA components, such as Remote PHY nodes, Remote PHY shelves, cBR-8 digital cards and Smart PHY automation software. DAA holds much promise to simplify cable operations, improve overall network reliability, and make it easier to manage and configure the cable network and the services it delivers. As part of DAA, employing Remote PHY devices (RPDs) in nodes is a key element in enabling the 10G digital optics, Ethernet and IP used for delivering services to nodes.


Another network element that is key to DAA success is a rack mounted RPD shelf. Rack mounted RPDs are designed to connect analog nodes to digital Converged Cable Access Platform (CCAP) cores. Installed in the hub or headend, they are connected to CCAP cores via 10G digital optical connections routed through Layer2/3 Ethernet switch routers. The output of each rack mount RPD is traditional RF analog broadband, which is connected to analog fiber optics that transmit to and from legacy analog nodes in the access network. Rack mounted RPDs allow digital fiber optics and Ethernet to replace cumbersome RF hub-based coaxial distribution cables and amplifiers that were used to feed analog optical transmitters.

There are two use cases for RPD shelves. The first use case is to enable one CCAP core to serve multiple small and/or distant hubs via digital fiber (i.e. hub site consolidation). The benefits are appreciable savings in both CCAP equipment and operations costs, because RPD shelves enable CCAP processing in fewer locations, using longer distance digital optics between one CCAP core and multiple remote hubs, each with one or more RPD shelf.

However, there is a second, equally valuable benefit of RPD shelves. Consider a network in which a large portion, but not all, of the hub nodes will be upgraded to an N+0 (node + 0), DAA architecture.  For this portion of the network, it doesn’t make economic sense to rebuild and convert existing analog nodes to digital (RPD) nodes. The cable operator is faced with operating and managing a portion of the network with conventional edge QAMs, combining networks and analog optics, while the majority of the network employs digital optics, Ethernet and IP routing to do the same things. Instead of making operations simpler, operations is faced with supporting both the legacy network and the new digital network, having to support two very different operating procedures simultaneously in the same hub.

By using Remote PHY shelves to provide all connectivity to analog nodes, this problem is solved. A single, unified mode of operations is created for the hub, across both the analog and digital portions of the network. Specifically, RF combining networks and amplifiers in the hub can be completely eliminated, replaced by Ethernet switches and digital optics. Video services can be converged with data through the CCAP core if desired. Analog RF outputs from CCAP platforms can be eliminated, and CCAP platforms can be operated as CCAP cores, resulting in a higher service group density per platform. Future node splits can be done in digital, even if the node being split is analog. Simply put, Remote PHY shelves enable a hybrid analog/digital network to be managed as a single DAA network.


Software and hardware interoperability continue to be essential for enabling a DAA. The Open Remote PHY Device (OpenRPD) initiative was established to stimulate the adoption of a DAA by providing reference software for OpenRPD members, encouraging future OpenRPD devices to be based on interoperable software standards and enabling them to develop OpenRPD devices more quickly than by developing code from scratch. Cisco continues to be a key member of the initiative, openly developing and contributing significant portions of RPD software code to the initiative. To verify that hardware and software interoperability work as advertised, CableLabs® has established thorough CCAP core and RPD interoperability testing. Cable operators looking to migrate to a DAA can look for CableLabs’ stamp of interoperable approval and be confident that the devices they choose will work in a multivendor network. As an active participant in interoperability testing, Cisco is committed to interoperability.

The Distributed Access Architecture is a dramatic evolutionary change in the cable network. It is a step toward cloud-native CCAP and the evolution of cable networks to a Converged Interconnect Network (CIN). With our comprehensive hardware and software portfolio for DAA, including the cBR-8 platform, Remote PHY digital nodes and Smart Digital Nodes, Remote PHY shelves that can be configured for redundant operation, and SmartPHY software, Cisco can help cable operators radically simplify the configuration and management of DAA networks.

Friday, 1 June 2018

Cisco’s Fanless Catalyst 2960-L Switch for Unleashed SMB Performance


Making an investment in IT is more critical today than ever before for a small- to medium-size business. So many open-air business settings and work-from-anywhere workspaces bring technology up close and personal. Cisco’s insight into saving space and reducing noise makes everyone, from librarians to your coworkers, happier than ever.

We live in a connected world of phones, laptops and tablets in our hands, surrounded by technology: whiteboards, routers, wireless access points, and switches that connect multiple devices on the same network within a building or campus. A switch is necessary because it enables connected devices to share information and talk to each other.

Cisco’s Catalyst 2960-L fanless switch.


Why does a feature like fanless matter? Fanless means quiet and compact: compact because fans require airspace and airflow, so a fanless switch can be put in smaller spaces that wouldn’t normally work. A typical network switch is a bit noisy; some range from a hum to what is best described as helicopter-like whirring, which can be distracting in offices, retail, hospitality or clinics where noise is an issue. Being fanless opens up options for smaller organizations to create a robust network in smaller spaces than before.

The Cisco Catalyst 2960-L has been designed for just such an environment. The Cisco Catalyst 2960-L Series switch isn’t just any fanless switch: it’s the industry’s first 24-port and 48-port 1 Gbps, PoE, fanless switch.

Reliable, Secure and Intuitive


The Cisco Catalyst 2960-L includes a host of reliability and security features that come with Cisco IOS, and it comes preloaded with Cisco Configuration Professional for Catalyst (CCPC). CCPC provides users with an easy-to-use and intuitive graphical interface to configure, manage and monitor a standalone, stack or cluster of Cisco Catalyst switches.

Key features that solve problems for SMBs:

◉ Quiet and cool operations — You won’t even know it’s there

◉ Small form factor — Great for mounting in confined spaces to be inconspicuous for hospitality, cruise ships, healthcare or retail locations.

◉ Perpetual PoE — Power over Ethernet for all connected devices avoids unnecessary power cabling to connect to the switch.

◉ Automatic switch recovery — No touch recovery. You can also configure automatic recovery on the switch to recover from the error-disabled state after the specified period of time.

◉ Bluetooth connectivity — You can access the Command-Line Interface (CLI) through Bluetooth connectivity by pairing the switch to a computer.

◉ Cost-effective connectivity — Ideal for branch offices, wired workspaces and infrastructure networks; conventionally wired workspaces with PC, phones and printers; building infrastructure networks to connect physical security, sensors and control systems; and any application requiring fast Ethernet connectivity and a low total cost of ownership.

◉ Enhanced limited lifetime hardware warranty — Next-business-day delivery of replacement hardware where available and 90 days of 8×5 Cisco Technical Assistance Center.

◉ Built-in web-based GUI: Catalyst 2960-L supports a day-zero GUI called Cisco Configuration Professional for Catalyst (CCPC) to help with easy deployment of the switch without the need for a CLI.

— Simple provisioning
— Easy-to-use diagnostics
— Performance at-a-glance dashboard
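As an illustration of the automatic-recovery bullet above, ports can be set to recover from the error-disabled state on their own with a couple of IOS configuration lines (the causes and interval shown are example values, not a recommended setting):

```
! Re-enable ports from the error-disabled state automatically,
! here after 300 seconds, instead of waiting for a manual shut/no shut.
errdisable recovery cause all
errdisable recovery interval 300
```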

With these features, we believe our small business customers can affordably expand their IT reach.

Wednesday, 30 May 2018

Intent-Based Networking in the Cisco Data Center


We’ve continued to expand our solutions to deliver Intent-Based Networking to our customers. Our years of designing, building, and operating networks tell us that you just can’t add automation to existing processes. The scale, complexity and new security threat vectors have grown to a point where we need to rethink in some fundamental ways how networks work, and beyond that, how networks and applications interact. Let’s dive into what Cisco means by Intent-Based Networking, and how it can help you run your data center more efficiently and more intelligently for your business.

Networking is shifting from a box-by-box, configure-monitor-troubleshoot model to a model where the network globally understands the intent, or requirements, that need to be satisfied, automatically realizes them, and continuously verifies the requirements are met. This process has three key functions: translation, activation, and assurance.

Understanding the “intent” cycle in the data center



Translation: The Translation function begins with the capture of intent. Intent is a high-level, global expression of what behavior is expected from the infrastructure to support business rules and applications. Intent can be captured from multiple locations; for instance, users may directly provide requirements or built-in profiling tools, capable of analyzing behavior in the network and workloads, may automatically generate them. Once intent is captured, it must be translated to policy and validated against what the infrastructure can implement.

Activation: The Activation function installs these policies and configurations throughout the infrastructure in an automated manner. This covers not just the network elements – physical and virtual switches, routers, etc. – but also covers software-based agents installed directly in the workload. Additionally, as datacenter networks become multicloud, this must work across multiple datacenters, colocation environments, and even public clouds.

Assurance: The last function, Assurance, is an important part of what makes Intent-Based Networking unique. It’s a new function we’ve never been able to offer in the network before. Assurance is the continuous verification, derived from network context and telemetry, that the network is operating in alignment with the programmed intent. It offers a continuous ground truth about not just what’s happening but also what’s possible in your network. It helps you confidently make changes with the advanced knowledge of how they will impact your infrastructure.
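The three functions can be sketched as a closed loop. The toy code below is purely illustrative (tier names and the flat rule table are assumptions; real controllers operate on far richer policy models), but it shows the key property: assurance detects when the network drifts away from the declared intent.

```python
# Toy rendering of the translate -> activate -> assure loop.

def translate(intent):
    """Translation: turn high-level intent into concrete policy entries."""
    return [{"src": s, "dst": intent["dst"], "action": "permit"}
            for s in intent["allowed_sources"]]

def activate(policy):
    """Activation: 'install' the policy; the network here is just a dict."""
    return {(r["src"], r["dst"]): r["action"] for r in policy}

def assure(network, intent):
    """Assurance: continuously verify the network still matches the intent."""
    expected = {(s, intent["dst"]) for s in intent["allowed_sources"]}
    installed = {pair for pair, action in network.items() if action == "permit"}
    return installed == expected

intent = {"dst": "db-tier", "allowed_sources": ["app-tier"]}
network = activate(translate(intent))
assert assure(network, intent)                 # network matches intent

network[("web-tier", "db-tier")] = "permit"    # out-of-band drift
assert not assure(network, intent)             # assurance flags the deviation
```

The last two lines are the closed-loop behavior in miniature: a change made outside the controller is caught by continuous verification rather than discovered during an outage.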

What Intent-Based Networking means for the data center


Now let’s think about Intent-Based Networking and its translation, activation, and assurance functions in the context of some of our datacenter products, Cisco ACI, Nexus 9000, Network Assurance Engine (NAE), and Tetration.

Cisco ACI offers a policy-based SDN fabric capable of providing translation and activation functions for the network. The Application Policy Infrastructure Controller (APIC) exposes a policy model abstraction that can be used to capture higher level requirements and automatically convert them into low level or concrete configuration. This configuration is automatically and transparently communicated to the network infrastructure, including Nexus 9000 switches, as part of the activation process.

Cisco Network Assurance Engine fulfills the assurance function in the network. NAE was designed to integrate with both the network devices as well as a network controller such as the APIC. NAE reads policy and configuration state from APIC as well as configuration, dynamic and hardware state on each device. Using this information to build a mathematical model of the network, NAE is able to proactively and continuously verify that the network is behaving in accordance with the operator intent and policy captured in the APIC. By codifying knowledge of thousands of built-in failure scenarios that run continuously against the model, NAE can identify problems in the network before they lead to outages and provide a path to remediation. It is precisely this closed-loop behavior that characterizes an Intent-Based Networking design.

Cisco Tetration contributes to multiple functions in an Intent-Based Network at the application and workload level. Its application dependency mapping capabilities play a critical role in profiling applications and ultimately capturing intent. Its cloud workload security and segmentation capabilities provide a means of delivering (or activating) a highly automated, zero-trust security environment, including advanced capabilities such as detecting software vulnerabilities, identifying deviations in process behavior, and building whitelist segmentation policies based on real-time telemetry. And Tetration’s network performance, insight, and forensic capabilities provide visibility and assurance of what is occurring in your environment. It can be described as a time machine or “DVR” due to its ability to play back past network behavior and model future trends.

Friday, 25 May 2018

7 Cisco Strategies for Overcoming Common Cloud Adoption Challenges

The recently released Cisco Global Cloud Index study predicts that by 2021, 94 percent of all workloads and compute instances will be processed in the cloud. Public cloud is expected to grow faster than private cloud and by 2021 the majority share of workloads and compute instances will live in the cloud. Many organizations are expected to adopt a hybrid approach to cloud as they transition some workloads and compute instances from internally managed private clouds to externally managed public clouds.


While the cloud represents an incredible opportunity for organizations, the cloud services provider (CSP) market continues to be very competitive. CSPs are increasingly focused on specialization, differentiating themselves through their core services portfolios as well as their vertical-specific offerings.

CIOs and CTOs are therefore faced with having to determine the right mix of cloud services and integrating the selected services into their existing IT portfolio. Multicloud adoption is a journey and it is one that can be met with numerous challenges.

Below are the 7 common cloud adoption challenges we have observed and strategies to overcome each.


◈ Adopt a common architectural framework that provides a common language between business and IT
◈ Think in terms of the city analogy – establish a governance model that will drive appropriate consideration of multiple perspectives
◈ Align investment decision making so that architectural impact is considered


◈ Plan for changes in your operating model
◈ Consider changes based on the Cisco Operating Model Transformation Map
◈ Execute changes across five key streams
     ◈ Image of Success
     ◈ Change Leadership
     ◈ Metrics
     ◈ Roles & Responsibilities
     ◈ Costing


◈ Shift from traditional waterfall funding methods to more agile funding processes
◈ Understand the TCO for existing and future services
◈ Develop an understanding of potential cloud providers’ cost structure
◈ Understand what hardware internal services are currently running on AND where that equipment is in its lifecycle
◈ Develop a single pane of glass view that showcases current cloud consumption


Your cloud strategy must deliver the right operational and financial outcomes:

◈ Understand and align business and IT priorities
◈ Develop appropriate prioritization / sequencing
◈ Build the value case for your proposed approach
◈ Create an implementation plan that delivers incremental value rapidly
◈ Validate value achievement


◈ Maintain an architectural perspective
◈ Align Technology to the Business Needs
◈ Technical Agility Creates Business Agility
◈ Optimize Tactical Technical Decisions into Strategic Technical Architecture
◈ Over-engineering vs. no engineering, choose carefully
◈ Fail Fast to Win Quick and be ready to adjust
◈ Include a Continuous Improvement Model through a project based Feedback Loop


◈ Make sure you are aligned to your “why” and can assess options based on value
◈ Invest the time to create a migration strategy that contemplates options and tradeoffs rather than just lifting and shifting
◈ Invest some effort to understand or validate your current environment
◈ Understand the elements of a services approach and consider what you can adopt


◈ Ensure your change management plan includes a description of the new value delivery model
◈ Paint a picture of the future state that is broadly understood throughout the organization
◈ Define and share new roles and responsibilities
◈ Anticipate the impact of automation on previous processes and plan for the migration of resources to higher value efforts
◈ Publicize the successful shifting of people to new (and more valuable) roles

Organizations may encounter the need for one, some or all of these strategies based on their adoption roadmap.


Cisco Cloud Advisory Services can help organizations navigate through these challenges and establish an actionable multicloud strategy.


Wednesday, 23 May 2018

Multicloud Workload Protection – Cisco Tetration Welcomes Container Workloads

The modern data center has evolved in a brief period of time into the complex environments seen today, with extremely fast, high-density switching pushing large volumes of traffic, and multiple layers of virtualization and overlays.  The result – a highly abstract network that can be difficult to monitor, secure and troubleshoot.  At the same time, networking, security, operations and applications teams are being asked to increase their operational efficiency and secure an ever-expanding attack surface inside the Data Center.  Cisco Tetration™ is a modern approach to solving these challenges without compromising agility.

It’s been almost two years since Cisco publicly announced Cisco Tetration™.  And, after eight releases of code, there are many new innovations, deployment options, and new capabilities to be excited about.

Container use is one of the fastest growing technology trends inside data centers.  With the recently released Cisco Tetration code (version 2.3.x), containers join an already comprehensive list of streaming telemetry sources for data collection.  Cisco Tetration now supports visibility and enforcement for container workloads, and that is the focus of this blog.

Protecting data center workloads 


Most cybersecurity experts agree that data centers are especially susceptible to lateral attacks from bad actors who attempt to take advantage of security gaps or lack of controls for east-west traffic flows.  Segmentation, whitelisting, zero-trust, micro-segmentation, and application segmentation are all terms used to describe a security model that, by default, has a “deny all,” catch-all policy – an effective defense against lateral attacks.
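The essence of that "deny all" model can be shown in a few lines. This is a deliberately minimal sketch (the tiers and ports are illustrative, not Tetration policy syntax): only flows matching an explicit whitelist entry are permitted, and everything else, including unexpected east-west traffic, is dropped.

```python
# Default-deny whitelist: the absence of a rule means the flow is blocked.
WHITELIST = {
    ("web", "app", 8080),
    ("app", "db", 3306),
}

def allowed(src_tier, dst_tier, port):
    """Default deny: permit only flows explicitly whitelisted."""
    return (src_tier, dst_tier, port) in WHITELIST

assert allowed("app", "db", 3306)
assert not allowed("web", "db", 3306)   # lateral hop outside the whitelist
```

Note that the defense against lateral movement comes for free: a compromised web tier cannot reach the database directly, because no rule says it can.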


However, segmentation is the final act, so to speak.  The opening act? Discovery of policies and inventory through empirical data (traffic flows on the network and host/workload contextual data) to accurately build, validate, and enforce intent.

To better appreciate the importance of segmentation, Tim Garner, a technical marketing engineer from the Cisco Tetration Business Unit has put together an excellent blog that explains how to achieve good data center hygiene.

Important takeaway #1:  To reduce the overall attack surface inside the data center, the blast radius of any compromised endpoint must be limited by eliminating unnecessary lateral communication. Discovering and implementing extremely fine-grained security policies is an effective, but not easily achieved, approach.

Important takeaway #2:  A holistic approach to hybrid cloud workload security must be agnostic to infrastructure and inclusive of current and future-facing workloads.

Containers are one of the fastest growing technology trends inside the data center.  To learn more about how Cisco Tetration can provide lateral security for hybrid cloud workloads, inclusive of containers, read on!

On to container support within Cisco Tetration...

The objective?  To demonstrate visibility and enforcement inclusive of current and future workloads – that is, workloads that are both virtual and containerized. To simulate a real-world application, the following deployment of a WordPress application called BorgPress is used.

A typical approach to tracking an application’s evolution over its lifespan, though one that is often difficult to keep up to date, is a logical application flow diagram. The diagram documents the logical flow between the application tiers of BorgPress.  Network and security engineers responsible for implementing the security rules that allow required network communications through a firewall or security engine rely on such diagrams.

A quickly growing trend among developers is the adoption of Kubernetes, an open-source platform (from Google) for managing containerized applications and services.  Bare metal servers still play a significant role 15 years after virtualization technology arrived.  It’s expected that, as container adoption grows, applications will be deployed as hybrids: a combination of bare metal, virtual, and containerized workloads.  Therefore, BorgPress is deployed as a hybrid.

The wordpress web tier of BorgPress is deployed as containers inside a Kubernetes cluster.  The proxy and database tiers are deployed as virtual machines.

The Kubernetes environment is made up of one master node and two worker nodes.

Discovery of application policies is a more manageable task for containerized applications than for traditional workload types (bare metal or virtual machines), because container orchestrators use declarative object configuration files to deploy applications. These files contain embedded information about which ports are to be used.  For example, BorgPress uses a YAML file, specifically a replica set object, to describe the number of wordpress containers to deploy and on which port (port 80) to expose each container.
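The replica set described above can be sketched roughly as follows. This is a hypothetical reconstruction: the object names, labels, and image tag are illustrative assumptions, not the actual BorgPress deployment files; only the replica count (three) and container port (80) come from the text.

```yaml
# Hypothetical sketch of the BorgPress wordpress replica set.
# Names, labels, and image are assumptions for illustration.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: wordpress
  labels:
    app: borgpress
    tier: web
spec:
  replicas: 3              # three wordpress containers, per the text
  selector:
    matchLabels:
      app: borgpress
      tier: web
  template:
    metadata:
      labels:
        app: borgpress
        tier: web
    spec:
      containers:
      - name: wordpress
        image: wordpress   # illustrative image reference
        ports:
        - containerPort: 80   # port on which each container is exposed
```

Because the ports an application needs are declared right in files like this one, a policy-discovery system can read them instead of inferring them purely from observed traffic.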

To allow external users access to the BorgPress application, Kubernetes uses an external service object of type NodePort to expose a dynamic port within a default range of 30000‒32767.  Traffic received by the Kubernetes worker nodes and destined for port 30000 (the port on which the service listens for incoming BorgPress requests) is load-balanced to one of the three BorgPress endpoints.
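A NodePort service of this shape might look roughly like the sketch below. The names and selector labels are hypothetical; the node port is pinned to 30000 here to match the text, whereas omitting the `nodePort` field would let Kubernetes assign one dynamically from the 30000‒32767 range.

```yaml
# Hypothetical NodePort service exposing the wordpress tier.
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  type: NodePort
  selector:
    app: borgpress       # illustrative labels matching the web-tier pods
    tier: web
  ports:
  - port: 80             # service port inside the cluster
    targetPort: 80       # container port on each wordpress pod
    nodePort: 30000      # port every worker node listens on externally
```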

Orchestrator integration

In a container ecosystem, workloads are mutable and often short-lived.  IP addresses come and go; the same IP that is assigned to workload A might, in the blink of an eye, be assigned to workload B. As such, the policies in a container environment must be flexible and capable of being applied dynamically.  An abstract, declarative policy hides the underlying complexity: lower-level constructs, such as IP addresses, are given context, for example through the use of labels, tags, or annotations.  This allows humans to describe a simplified policy and systems to translate that policy.
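As a concrete illustration of giving a low-level construct context, a pod’s metadata can carry labels that a policy references instead of the pod’s ephemeral IP address. The pod name and label values below are hypothetical:

```yaml
# Hypothetical pod metadata. A policy written against the labels
# (app=borgpress, tier=web) keeps working even as the pod's IP changes.
metadata:
  name: wordpress-7d4f8
  labels:
    app: borgpress
    tier: web
```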

Cisco Tetration supports an automated method of adding meaningful context through user annotations.  These annotations can be manually uploaded or dynamically learned in real time from external orchestration systems.  The following orchestrators are supported by Cisco Tetration (others can also be integrated through an open RESTful API):

◈ VMware vCenter
◈ Amazon Web Services

In addition, Kubernetes and OpenShift are now also supported as external orchestrators.  When an external orchestrator is added (through Cisco Tetration’s user interface) for a Kubernetes or OpenShift cluster, Cisco Tetration connects to the cluster’s API server and ingests metadata, which is automatically converted to annotations prefixed with an “orchestrator_” tag.

In this example, filters are created and used within the BorgPress application workspace to build abstract security rules that, when enforced, implement a zero-trust policy.

Data collection and flow search
To support container workloads, the same Cisco Tetration agent used on the host OS to collect flow and process information is now also container-aware and capable of doing the same for containers.  Flows are stored inside a data lake that can be queried using out-of-the-box filters or directly from annotations learned from the Kubernetes cluster.

Policy definition and enforcement

Application workspaces are objects for defining, analyzing, and enforcing policies for a particular application.  BorgPress contains a total of 6 virtual machines, 3 containers, and 15 IP addresses.

Scopes are used to determine the set of endpoints that are pulled into the application workspace and thus are affected by the created policies that are later enforced.

In this example, a scope named BorgPress is created that identifies any endpoint matching the four defined queries.  The queries for the BorgPress scope are based on custom annotations that have been both manually uploaded and learned dynamically.

Once a scope is created, the application workspace is built and associated with the scope.  In this example, a BorgPress application workspace is created and tied to the BorgPress scope.

Policies using prebuilt filters inside the application workspace are defined to build segmentation rules.  In this example, five default policies have been built that define the set of rules required for BorgPress to function, based on the logical application diagram discussed earlier. The orange boxes with a red border are filters that describe the BorgPress wordpress tier, which abstracts (contains) the container endpoints.  The highlighted yellow box shows a single rule that allows any BorgPress database server (there are three virtual machine endpoints in this tier) to provide a service on port 3306 to the consumer, which is a BorgPress database HAProxy server.

To validate these policies, live policy analysis is used to cross-examine every packet of a flow against the five policies, or intents, and then classify each as permitted, rejected, escaped, or misdropped by the network.  This is performed in near-real time and for all endpoints of the BorgPress application.

It’s important to point out that up to this point there has been no actual enforcement of policies.  Traffic classification is just a record of what occurred on the network as it relates to the intent of the policy you would like to enforce.  This allows you to be certain that the rules you ultimately enforce will work as intended.  With a single click of a button, Cisco Tetration can then provide holistic enforcement for BorgPress across both virtual and containerized workloads.

Not every rule needs to be implemented on every endpoint.  Once “Enforce Policies” is enabled, each endpoint, through a secure channel to the agent, receives only its required set of rules.  The agent leverages the native firewall on the host OS (iptables on Linux or the Windows firewall) to translate and implement policies.

The set of rules can be viewed from within the Cisco Tetration user interface or directly on the endpoint.  For example, the rules received and enforced for the BorgPress database endpoint db-mysql01, a virtual machine, exactly match the policy built inside the application workspace and are translated into the correct IPs on the endpoint using iptables.
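As a rough sketch of what such translated rules might look like on db-mysql01, the listing below shows iptables rules of the general shape the agent would install. The source address and chain layout are illustrative assumptions, not taken from the actual deployment:

```
# Hypothetical iptables rules on db-mysql01 (addresses are illustrative).
# Allow MySQL (TCP 3306) only from the database HAProxy endpoint...
-A INPUT -s 10.0.20.10/32 -p tcp --dport 3306 -j ACCEPT
# ...and drop everything else, implementing the zero-trust "deny all" default.
-A INPUT -j DROP
```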

Now that we’ve seen the rules enforced in a virtual machine for BorgPress, let’s look at how enforcement is done for containers.  Enforcement for containers happens at the container namespace level; because BorgPress is a Kubernetes deployment, enforcement happens at the pod level.  BorgPress has three wordpress pods running in the default namespace.

Just as with virtual machines, we can view the enforcement rules either in the Cisco Tetration user interface or on the endpoint.  In this example, the user interface shows the host profile of one of the Kubernetes worker nodes, k8s-node02.  With container support, a new “Container Enforcement” tab next to the Enforcement tab shows the list of rules enforced for each pod.

At this point all endpoints, both virtual and container, have the necessary enforcement rules, and BorgPress is almost deployed with a zero-trust security model.  Earlier I discussed the use of a type of Kubernetes service object called a NodePort.  Its purpose is to expose the BorgPress wordpress service to external (outside the cluster) users.  As the logical application flow diagram illustrates, the Web-HAProxy receives incoming client requests and load-balances them to the NodePort that every Kubernetes worker node listens on.

Since the NodePort is a dynamically generated high-end port number, it can change over time, which presents a problem.  To make sure the Web-HAProxy always has the correct rule to allow outgoing traffic to the NodePort, Cisco Tetration learns about the NodePort through the external orchestrator.  When policy is pushed to the Web-HAProxy, Cisco Tetration also pushes the correct rule to allow traffic to the NodePort.  You may have noticed that the application workspace shown earlier contains no policy definition or rule for NodePort 30000 to allow communication from Web-HAProxy to BP-Web-Tier.  Nevertheless, looking at the iptables rules on the Web-HAProxy, you can see that Cisco Tetration correctly added a rule to allow outgoing traffic to port 30000.
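The automatically added egress rule on the Web-HAProxy host might look roughly like the sketch below. The worker-node subnet and exact match criteria are illustrative assumptions; only the destination port (30000) comes from the text:

```
# Hypothetical iptables rule on Web-HAProxy allowing outgoing traffic
# to the NodePort (30000) on the Kubernetes worker nodes.
-A OUTPUT -d 10.0.30.0/24 -p tcp --dport 30000 -j ACCEPT
```

Because Cisco Tetration learns the NodePort value from the orchestrator, a rule of this shape can be regenerated automatically if Kubernetes ever assigns the service a different port.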
