Sunday, 3 June 2018

Managing a DAA Hub with Analog and Digital Nodes in a Single Context

The building blocks for a distributed access architecture (DAA) are shipping from Cisco. More than 60 customers in 25 countries spanning 4 continents have received key DAA components, such as Remote PHY nodes, Remote PHY shelves, cBR-8 digital cards and Smart PHY automation software. DAA holds much promise to simplify cable operations, improve overall network reliability, and make it easier to manage and configure the cable network and the services it delivers. Within DAA, Remote PHY devices (RPDs) deployed in nodes are a key element, enabling 10G digital optics, Ethernet and IP to be used for delivering services to nodes.


Another network element that is key to DAA success is a rack-mounted RPD shelf. Rack-mounted RPDs are designed to connect analog nodes to digital Converged Cable Access Platform (CCAP) cores. Installed in the hub or headend, they are connected to CCAP cores via 10G digital optical connections routed through Layer 2/3 Ethernet switch-routers. The output of each rack-mounted RPD is traditional RF analog broadband, which is connected to analog fiber optics that transmit to and from legacy analog nodes in the access network. Rack-mounted RPDs allow digital fiber optics and Ethernet to replace the cumbersome RF hub-based coaxial distribution cables and amplifiers that were used to feed analog optical transmitters.

There are two use cases for RPD shelves. The first use case is to enable one CCAP core to serve multiple small and/or distant hubs via digital fiber (i.e. hub site consolidation). The benefits are appreciable savings in both CCAP equipment and operations costs, because RPD shelves enable CCAP processing in fewer locations, using longer distance digital optics between one CCAP core and multiple remote hubs, each with one or more RPD shelf.

However, there is a second, equally valuable benefit of RPD shelves. Consider a network in which a large portion, but not all, of the nodes served by a hub will be upgraded to an N+0 (node + 0) DAA architecture. For the remaining portion of the network, it doesn’t make economic sense to rebuild and convert existing analog nodes to digital (RPD) nodes. The cable operator is then faced with operating and managing part of the network with conventional edge QAMs, combining networks and analog optics, while the majority of the network employs digital optics, Ethernet and IP routing to do the same things. Instead of becoming simpler, operations must support both the legacy network and the new digital network, with two very different sets of operating procedures running simultaneously in the same hub.

By using Remote PHY shelves to provide all connectivity to analog nodes, this problem is solved. A single, unified mode of operations is created for the hub, across both the analog and digital portions of the network. Specifically, RF combining networks and amplifiers in the hub can be completely eliminated, replaced by Ethernet switches and digital optics. Video services can be converged with data through the CCAP core if desired. Analog RF outputs from CCAP platforms can be eliminated, and CCAP platforms can be operated as CCAP cores, resulting in a higher service group density per platform. Future node splits can be done in digital, even if the node being split is analog. Simply put, Remote PHY shelves enable a hybrid analog/digital network to be managed as a single DAA network.


Software and hardware interoperability continue to be essential for enabling a DAA. The Open Remote PHY Device (OpenRPD) initiative was established to stimulate the adoption of a DAA by providing reference software for OpenRPD members, encouraging future OpenRPD devices to be based on interoperable software standards and enabling them to develop OpenRPD devices more quickly than by developing code from scratch. Cisco continues to be a key member of the initiative, openly developing and contributing significant portions of RPD software code to the initiative. To verify that hardware and software interoperability work as advertised, CableLabs® has established thorough CCAP core and RPD interoperability testing. Cable operators looking to migrate to a DAA can look for CableLabs’ stamp of interoperable approval and be confident that the devices they choose will work in a multivendor network. As an active participant in interoperability testing, Cisco is committed to interoperability.

The Distributed Access Architecture is a dramatic evolutionary change in the cable network. It is a step toward cloud-native CCAP and the evolution of cable networks to a Converged Interconnect Network (CIN). With our comprehensive hardware and software portfolio for DAA, including the cBR-8 platform, Remote PHY digital nodes and Smart Digital Nodes, Remote PHY shelves that can be configured for redundant operation, and SmartPHY software, Cisco can help cable operators radically simplify the configuration and management of DAA networks.

Friday, 1 June 2018

Cisco’s Fanless Catalyst 2960-L Switch for Unleashed SMB Performance


Making an investment in IT is more critical today than ever before for a small- to medium-sized business. Open-air business settings and work-from-anywhere workspaces bring technology up close and personal. Cisco’s insight into saving space and reducing noise makes everyone, from librarians to your coworkers, happier than ever.

We live in a connected world of phones, laptops and tablets in our hands, and we’re surrounded by the technology that connects them: whiteboards, routers, wireless access points, and switches that tie multiple devices into the same network within a building or campus. A switch is necessary because it enables connected devices to share information and talk to each other.

Cisco’s Catalyst 2960-L fanless switch.


Why does a feature like fanless matter? Fanless means quiet and compact. Compact, because fans require airspace and airflow; a fanless switch can be put in smaller spaces that wouldn’t normally work. Quiet, because a typical network switch is a bit noisy, ranging from a hum to what is best described as helicopter-like whirring, and that can be distracting in offices, retail, hospitality or clinics where noise is an issue. Being fanless opens up options for smaller organizations to create a robust network in smaller spaces than before.

The Cisco Catalyst 2960-L has been designed for just such an environment. The Cisco Catalyst 2960-L Series switch isn’t just any fanless switch: it’s the industry’s first 24- and 48-port 1 Gbps PoE fanless switch.

Reliable, Secure and Intuitive


The Cisco Catalyst 2960-L includes a host of reliability and security features that come with Cisco IOS. It also ships with Cisco Configuration Professional for Catalyst (CCPC) built in. CCPC provides users with an easy-to-use, intuitive graphical interface to configure, manage and monitor a standalone switch, stack or cluster of Cisco Catalyst switches.

Key features that solve problems for SMBs:

◉ Quiet and cool operations — You won’t even know it’s there

◉ Small form factor — Great for mounting inconspicuously in confined spaces in hospitality, cruise-ship, healthcare or retail locations.

◉ Perpetual PoE — Power over Ethernet for all connected devices eliminates separate power cabling, and power delivery is maintained even while the switch reloads.

◉ Automatic switch recovery — No-touch recovery. You can also configure the switch to recover automatically from the error-disabled state after a specified period of time.

◉ Bluetooth connectivity — You can access the Command-Line Interface (CLI) through Bluetooth connectivity by pairing the switch to a computer.

◉ Cost-effective connectivity — Ideal for branch offices, wired workspaces and infrastructure networks; conventionally wired workspaces with PC, phones and printers; building infrastructure networks to connect physical security, sensors and control systems; and any application requiring fast Ethernet connectivity and a low total cost of ownership.

◉ Enhanced limited lifetime hardware warranty — Next-business-day delivery of replacement hardware where available and 90 days of 8×5 Cisco Technical Assistance Center support.

◉ Built-in web-based GUI — Catalyst 2960-L supports a day-zero GUI called Cisco Configuration Professional for Catalyst (CCPC) to help with easy deployment of the switch without the need for a CLI.

— Simple provisioning
— Easy-to-use diagnostics
— Performance at-a-glance dashboard

With these features, we believe our small business customers can affordably expand their IT reach.

Wednesday, 30 May 2018

Intent-Based Networking in the Cisco Data Center


We’ve continued to expand our solutions to deliver Intent-Based Networking to our customers. Our years of designing, building, and operating networks tell us that you can’t simply add automation to existing processes. Scale, complexity and new security threat vectors have grown to a point where we need to rethink, in some fundamental ways, how networks work and, beyond that, how networks and applications interact. Let’s dive into what Cisco means by Intent-Based Networking, and how it can help you run your data center more efficiently and more intelligently for your business.

Networking is shifting from a box-by-box, configure-monitor-troubleshoot model to one where the network globally understands the intent, or requirements, to be satisfied, automatically realizes them, and continuously verifies that they are met. This process has three key functions: translation, activation, and assurance.

Understanding the “intent” cycle in the data center



Translation: The Translation function begins with the capture of intent. Intent is a high-level, global expression of what behavior is expected from the infrastructure to support business rules and applications. Intent can be captured from multiple locations; for instance, users may directly provide requirements or built-in profiling tools, capable of analyzing behavior in the network and workloads, may automatically generate them. Once intent is captured, it must be translated to policy and validated against what the infrastructure can implement.

Activation: The Activation function installs these policies and configurations throughout the infrastructure in an automated manner. This covers not just the network elements – physical and virtual switches, routers, etc. – but also covers software-based agents installed directly in the workload. Additionally, as datacenter networks become multicloud, this must work across multiple datacenters, colocation environments, and even public clouds.

Assurance: The last function, Assurance, is an important part of what makes Intent-Based Networking unique. It’s a new function we’ve never been able to offer in the network before. Assurance is the continuous verification, derived from network context and telemetry, that the network is operating in alignment with the programmed intent. It offers continuous ground truth about not just what’s happening but also what’s possible in your network. It helps you confidently make changes with advance knowledge of how they will impact your infrastructure.
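The translate, activate, assure cycle described above can be sketched in a few lines of Python. This is a toy model, not a Cisco API: all of the function and device names are illustrative, and real intent is far richer than a single allow-list.

```python
# Minimal sketch of the translate -> activate -> assure loop.
# Every name here is illustrative, not a real product interface.

def translate(intent):
    """Convert a high-level intent into per-device concrete policies."""
    return [{"device": d, "allow": intent["allow"]} for d in intent["devices"]]

def activate(policies, network):
    """Push each concrete policy into the (simulated) network state."""
    for p in policies:
        network[p["device"]] = {"allow": p["allow"]}

def assure(intent, network):
    """Verify the network still matches the intent; return drifted devices."""
    return [d for d in intent["devices"]
            if network.get(d, {}).get("allow") != intent["allow"]]

intent = {"devices": ["leaf1", "leaf2"], "allow": ["web->db:3306"]}
network = {}
activate(translate(intent), network)
assert assure(intent, network) == []          # network matches intent
network["leaf2"]["allow"] = []                # simulate configuration drift
assert assure(intent, network) == ["leaf2"]   # assurance flags the drift
```

The closing assertion is the point: assurance is not a one-time check but a loop that keeps comparing observed state against declared intent.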

What Intent-Based Networking means for the data center


Now let’s think about Intent-Based Networking and its translation, activation, and assurance functions in the context of some of our datacenter products, Cisco ACI, Nexus 9000, Network Assurance Engine (NAE), and Tetration.

Cisco ACI offers a policy-based SDN fabric capable of providing translation and activation functions for the network. The Application Policy Infrastructure Controller (APIC) exposes a policy model abstraction that can be used to capture higher level requirements and automatically convert them into low level or concrete configuration. This configuration is automatically and transparently communicated to the network infrastructure, including Nexus 9000 switches, as part of the activation process.

Cisco Network Assurance Engine fulfills the assurance function in the network. NAE was designed to integrate with both the network devices as well as a network controller such as the APIC. NAE reads policy and configuration state from APIC as well as configuration, dynamic and hardware state on each device. Using this information to build a mathematical model of the network, NAE is able to proactively and continuously verify that the network is behaving in accordance with the operator intent and policy captured in the APIC. By codifying knowledge of thousands of built-in failure scenarios that run continuously against the model, NAE can identify problems in the network before they lead to outages and provide a path to remediation. It is precisely this closed-loop behavior that characterizes an Intent-Based Networking design.
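The "codified failure scenarios" idea can be illustrated with two toy checks run against a simplified model of intended versus rendered state. This is a hedged sketch of the concept only; NAE's actual model and checks are far more sophisticated.

```python
# Illustrative failure-scenario checks (not NAE's actual engine):
# compare intended contracts with the rules actually rendered on devices.

def check_missing_contract(model):
    """Intent present but no rendered rule on the switch: traffic will break."""
    return [c for c in model["intent"] if c not in model["rendered"]]

def check_stale_rule(model):
    """Rendered rule with no backing intent: an unintended opening."""
    return [r for r in model["rendered"] if r not in model["intent"]]

model = {"intent":   {"web->db:3306", "app->web:80"},
         "rendered": {"web->db:3306", "any->db:22"}}

issues = {"missing": sorted(check_missing_contract(model)),
          "stale":   sorted(check_stale_rule(model))}
assert issues == {"missing": ["app->web:80"], "stale": ["any->db:22"]}
```

Running a library of such checks continuously against a network model is what lets problems be surfaced before they become outages.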

Cisco Tetration contributes to multiple functions in an Intent-Based Network at the application and workload level. Its application dependency mapping capabilities play a critical role in profiling applications and ultimately capturing intent. Its cloud workload security and segmentation capabilities provide a means of delivering (or activating) a highly automated, zero-trust security environment, including advanced capabilities such as detecting software vulnerabilities and identifying deviations in process behavior, in addition to building whitelist segmentation policies based on real-time telemetry. And Tetration’s network performance, insight, and forensic capabilities provide visibility and assurance of what is occurring in your environment. It can be described as a time machine or “DVR” for the network, thanks to its ability to play back past network behavior and model future trends.

Friday, 25 May 2018

7 Cisco Strategies for Overcoming Common Cloud Adoption Challenges

The recently released Cisco Global Cloud Index study predicts that by 2021, 94 percent of all workloads and compute instances will be processed in the cloud. Public cloud is expected to grow faster than private cloud and by 2021 the majority share of workloads and compute instances will live in the cloud. Many organizations are expected to adopt a hybrid approach to cloud as they transition some workloads and compute instances from internally managed private clouds to externally managed public clouds.


While cloud represents an incredible opportunity for organizations, the cloud services provider (CSP) market continues to be very competitive. CSPs are increasingly focused on specialization, differentiating themselves through their core services portfolios as well as their vertical-specific offerings.

CIOs and CTOs are therefore faced with having to determine the right mix of cloud services and integrating the selected services into their existing IT portfolio. Multicloud adoption is a journey and it is one that can be met with numerous challenges.

Below are the 7 common cloud adoption challenges we have observed, along with strategies to overcome each.

Challenge 1

◈ Adopt a common architectural framework that provides a common language between business and IT
◈ Think in terms of the city analogy – establish a governance model that will drive appropriate consideration of multiple perspectives
◈ Align investment decision making so that architectural impact is considered

Challenge 2

◈ Plan for changes in your operating model
◈ Consider changes based on the Cisco Operating Model Transformation Map
◈ Execute changes across five key streams
     ◈ Image of Success
     ◈ Change Leadership
     ◈ Metrics
     ◈ Roles & Responsibilities
     ◈ Costing

Challenge 3

◈ Shift from traditional waterfall funding methods to more agile funding processes
◈ Understand the TCO for existing and future services
◈ Develop an understanding of potential cloud providers’ cost structure
◈ Understand what hardware internal services are currently running on, and where that equipment is in its lifecycle
◈ Develop a single pane of glass view that showcases current cloud consumption

Challenge 4

Your cloud strategy must deliver the right operational and financial outcomes:

◈ Understand and align business and IT priorities
◈ Develop appropriate prioritization / sequencing
◈ Build the value case for your proposed approach
◈ Create an implementation plan that delivers incremental value rapidly
◈ Validate value achievement

Challenge 5

◈ Maintain an architectural perspective
◈ Align technology to business needs
◈ Recognize that technical agility creates business agility
◈ Optimize tactical technical decisions into a strategic technical architecture
◈ Over-engineering vs. no engineering: choose carefully
◈ Fail fast to win quickly, and be ready to adjust
◈ Include a continuous improvement model through a project-based feedback loop

Challenge 6

◈ Make sure you are aligned to your “why” and can assess options based on value
◈ Invest the time to create a migration strategy that contemplates options and tradeoffs rather than just lifting and shifting
◈ Invest some effort to understand or validate your current environment
◈ Understand the elements of a services approach and consider what you can adopt

Challenge 7

◈ Ensure your change management plan includes a description of the new value delivery model
◈ Paint a picture of the future state that is broadly understood throughout the organization
◈ Define and share new roles and responsibilities
◈ Anticipate the impact of automation on previous processes and plan for the migration of resources to higher value efforts
◈ Publicize the successful shifting of people to new (and more valuable) roles

Organizations may encounter the need for one, some or all of these strategies based on their adoption roadmap.


Cisco Cloud Advisory Services can help organizations navigate through these challenges and establish an actionable multicloud strategy.


Wednesday, 23 May 2018

Multicloud Workload Protection – Cisco Tetration Welcomes Container Workloads

The modern data center has evolved in a brief period of time into the complex environments seen today, with extremely fast, high-density switching pushing large volumes of traffic, and multiple layers of virtualization and overlays.  The result – a highly abstract network that can be difficult to monitor, secure and troubleshoot.  At the same time, networking, security, operations and applications teams are being asked to increase their operational efficiency and secure an ever-expanding attack surface inside the Data Center.  Cisco Tetration™ is a modern approach to solving these challenges without compromising agility.

It’s been almost two years since Cisco publicly announced Cisco Tetration™.  And, after eight releases of code, there are many new innovations, deployment options, and new capabilities to be excited about.

Container use is one of the fastest growing technology trends inside data centers.  With the recently released Cisco Tetration code (version 2.3.x), containers join an already comprehensive list of streaming telemetry sources for data collection: Cisco Tetration now supports visibility and enforcement for container workloads, and that is the focus of this blog.

Protecting data center workloads 


Most cybersecurity experts agree that data centers are especially susceptible to lateral attacks from bad actors who attempt to take advantage of security gaps or lack of controls for east-west traffic flows.  Segmentation, whitelisting, zero-trust, micro-segmentation, and application segmentation are all terms used to describe a security model that, by default, has a “deny all,” catch-all policy – an effective defense against lateral attacks.


However, segmentation is the final act, so to speak.  The opening act? Discovery of policies and inventory through empirical data (traffic flows on the network and host/workload contextual data) to accurately build, validate, and enforce intent.

To better appreciate the importance of segmentation, Tim Garner, a technical marketing engineer from the Cisco Tetration Business Unit has put together an excellent blog that explains how to achieve good data center hygiene.

Important takeaway #1:  To reduce the overall attack surface inside the data center, the blast radius of any compromised endpoint must be limited by eliminating any unnecessary lateral communication. The discovery and implementation of extremely fine-grained security policies is an effective but not easily achieved approach.

Important takeaway #2:  A holistic approach to hybrid cloud workload security must be agnostic to infrastructure and inclusive of current and future-facing workloads.

To learn more about how Cisco Tetration can provide lateral security for hybrid cloud workloads, inclusive of containers, read on.

On to container support within Cisco Tetration . . .

The objective?  To demonstrate visibility and enforcement inclusive of current and future workloads – that is, workloads that are both virtual and containerized. To simulate a real-world application, the following deployment of a WordPress application called BorgPress is leveraged.


A typical, but often difficult to keep up with, approach to tracking the evolution of an application’s lifespan is by using a logical application flow diagram. The diagram below documents the logical flow between the application tiers of BorgPress.  Network or security engineers responsible for implementing the security rules that allow required network communications through a firewall or security engine rely on such diagrams.


A fast-growing trend among developers is the adoption of Kubernetes, an open-source platform (originally from Google) for managing containerized applications and services.  Meanwhile, bare metal servers still play a significant role 15 years after virtualization technology arrived.  It’s expected that, as container adoption occurs, applications will be deployed as hybrids: a combination of bare metal, virtual, and containerized workloads.  Therefore, BorgPress is deployed as a hybrid.

The wordpress web tier of BorgPress is deployed as containers inside a Kubernetes cluster.  The proxy and database tiers are deployed as virtual machines.

The Kubernetes environment is made up of one master node and two worker nodes.


Discovery of application policies is a more manageable task for containerized applications as compared to traditional workload types (bare metal or virtual machines).  This is because container orchestrators leverage declarative object configuration files to deploy applications. These files contain embedded information regarding which ports are to be used.  For example, BorgPress uses a YAML file— specifically, a replica set object, as shown in the figure below—to describe the number of wordpress containers to deploy and on which port (port 80) to expose the container.
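To make this concrete, the replica-set object can be modeled as a plain Python dict, with the exposed ports read straight out of the declarative spec. This is a hypothetical sketch, not the actual BorgPress YAML; the image tag and structure details are illustrative.

```python
# Illustrative model of a Kubernetes ReplicaSet object: the exposed
# ports are declared in the spec, which is why policy discovery is
# easier for containers than for traditional workloads.
replica_set = {
    "kind": "ReplicaSet",
    "spec": {
        "replicas": 3,  # three wordpress pods
        "template": {"spec": {"containers": [
            {"name": "wordpress",
             "image": "wordpress:latest",       # illustrative tag
             "ports": [{"containerPort": 80}]},  # exposed on port 80
        ]}},
    },
}

def exposed_ports(obj):
    """Collect every declared containerPort from the object's pod template."""
    containers = obj["spec"]["template"]["spec"]["containers"]
    return [p["containerPort"] for c in containers for p in c.get("ports", [])]

assert exposed_ports(replica_set) == [80]
```

No traffic has to be observed to learn that the wordpress tier listens on port 80; the declaration itself carries the policy-relevant information.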


To allow external users access to the BorgPress application, Kubernetes leverages an external service object type of NodePort to expose a dynamic port within a default range of 30000‒32767.  Traffic received by the Kubernetes worker nodes destined to port 30000 (the service defined to listen to incoming requests for BorgPress) will be load-balanced to one of the three BorgPress endpoints.
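The NodePort behavior can be sketched the same way. The service definition below is illustrative (it mirrors the description above rather than the exact BorgPress manifest), but the range check reflects Kubernetes’ documented default of 30000–32767.

```python
# Sketch of a NodePort service: every worker node listens on the
# allocated node port and load-balances to the backing pods.
NODEPORT_RANGE = range(30000, 32768)  # Kubernetes default range

service = {"kind": "Service",
           "spec": {"type": "NodePort",
                    "ports": [{"port": 80, "nodePort": 30000}]}}

def node_port(svc):
    """Return the externally exposed node port, validating the default range."""
    assert svc["spec"]["type"] == "NodePort"
    np = svc["spec"]["ports"][0]["nodePort"]
    assert np in NODEPORT_RANGE, "outside the default NodePort range"
    return np

assert node_port(service) == 30000
```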


Orchestrator integration

In a container ecosystem, workloads are mutable and often short-lived.  IP addresses come and go: the IP assigned to workload A might, in the blink of an eye, be reassigned to workload B. As such, the policies in a container environment must be flexible and capable of being applied dynamically.  A declarative, abstract policy hides the underlying complexity.  Lower-level constructs, such as IP addresses, are given context, for example through the use of labels, tags, or annotations.  This allows humans to describe a simplified policy and systems to translate that policy.
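The label-to-IP indirection described above can be sketched in a few lines. The resolver and inventory below are hypothetical; the point is only that the human-readable policy never changes while the IPs it resolves to do.

```python
# Policy is written against labels; a resolver maps labels to whatever
# IPs currently carry them. All names and addresses are illustrative.
inventory = {"10.0.0.5": {"app": "wordpress"}, "10.0.0.9": {"app": "db"}}

def resolve(label_query, inv):
    """Return the sorted IPs whose labels match every key/value in the query."""
    return sorted(ip for ip, labels in inv.items()
                  if all(labels.get(k) == v for k, v in label_query.items()))

policy = {"consumer": {"app": "wordpress"},
          "provider": {"app": "db"}, "port": 3306}
assert resolve(policy["provider"], inventory) == ["10.0.0.9"]

# The db pod restarts with a new IP; the unchanged policy now resolves
# to the new address automatically.
inventory = {"10.0.0.5": {"app": "wordpress"}, "10.0.0.12": {"app": "db"}}
assert resolve(policy["provider"], inventory) == ["10.0.0.12"]
```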

Cisco Tetration supports an automated method of adding meaningful context through user annotations.  These annotations can be manually uploaded or dynamically learned in real time from external orchestration systems.  The following orchestrators are supported by Cisco Tetration (others can also be integrated through an open RESTful API):

◈ VMWare vCenter
◈ Amazon Web Services

In addition, Kubernetes and OpenShift are now also supported as external orchestrators.  When an external orchestrator is added (through Cisco Tetration’s user interface) for a Kubernetes or OpenShift cluster, Cisco Tetration connects to the cluster’s API server and ingests metadata, which is automatically converted to annotations prefixed with an “orchestrator_” tag.
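The ingest step can be pictured as a simple transformation. The metadata fields and exact annotation keys below are assumptions for illustration; only the “orchestrator_” prefix comes from the description above.

```python
# Hedged sketch: metadata pulled from the cluster API is flattened into
# annotations carrying an "orchestrator_" prefix. Field names are illustrative.
pod_metadata = {"namespace": "default",
                "pod_name": "wordpress-7d4f",
                "labels": {"app": "wordpress"}}

def to_annotations(meta):
    """Flatten pod metadata and labels into prefixed annotation key/values."""
    flat = {"namespace": meta["namespace"], "pod_name": meta["pod_name"]}
    flat.update(meta["labels"])
    return {f"orchestrator_{k}": v for k, v in flat.items()}

annotations = to_annotations(pod_metadata)
assert annotations["orchestrator_namespace"] == "default"
assert annotations["orchestrator_app"] == "wordpress"
```

The prefixed annotations are what make the cluster’s own labels usable later in scopes, filters, and flow searches.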


In the example below, filters are created and used within the BorgPress application workspace to build abstract security rules that, when enforced, implement a zero-trust policy.


Data collection and flow search
To support container workloads, the same Cisco Tetration agent used on the host OS to collect flow and process information is now also aware and capable of doing the same for containers.  Flows are stored inside a data lake that can be queried using out-of-the-box filters or directly from annotations learned from the Kubernetes cluster.


Policy definition and enforcement

Application workspaces are objects for defining, analyzing, and enforcing policies for a particular application.  BorgPress contains a total of 6 virtual machines, 3 containers, and 15 IP addresses.

Scopes are used to determine the set of endpoints that are pulled into the application workspace and thus are affected by the created policies that are later enforced.

In the example below, a scope, BorgPress, is created that identifies any endpoint that matches the four defined queries.  The queries for the BorgPress scope are based on custom annotations that have been both manually uploaded and learned dynamically.
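A scope’s matching logic amounts to "pull in any endpoint that satisfies at least one query, where a query is a set of annotation key/values." The endpoints and query values below are illustrative stand-ins, not the actual BorgPress inventory.

```python
# Illustrative scope evaluation: endpoints whose annotations match any
# of the scope's queries are pulled into the application workspace.
endpoints = [
    {"ip": "10.0.0.5", "ann": {"orchestrator_app": "wordpress"}},  # learned
    {"ip": "10.0.0.9", "ann": {"app_name": "borgpress"}},          # uploaded
    {"ip": "10.0.1.7", "ann": {"app_name": "other"}},              # unrelated
]
queries = [{"orchestrator_app": "wordpress"}, {"app_name": "borgpress"}]

def in_scope(ep, qs):
    """True if the endpoint's annotations satisfy at least one query."""
    return any(all(ep["ann"].get(k) == v for k, v in q.items()) for q in qs)

scope = [ep["ip"] for ep in endpoints if in_scope(ep, queries)]
assert scope == ["10.0.0.5", "10.0.0.9"]
```

Note that one query matches a dynamically learned annotation and the other a manually uploaded one, mirroring the mixed sources described above.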


Once a scope is created, the application workspace is built and associated to the scope.  In the example below, a BorgPress application workspace is created and tied to the BorgPress scope.


Policies using prebuilt filters inside the application workspace are defined to build segmentation rules.  In the example below, five default policies have been built that define the set of rules required for BorgPress to function, based on the logical application diagram discussed earlier. The orange boxes with a red border are filters that describe the BorgPress wordpress tier, which abstracts (contains) the container endpoints.  The highlighted yellow box shows a single rule that allows any BorgPress database server (there are three virtual machine endpoints in this tier) to provide a service on port 3306 to the consumer, a BorgPress database HAProxy server.


To validate these policies, live policy analysis is used to cross-examine every packet of a flow against the five policies or intents and then classify each as permitted, rejected, escaped, or misdropped by the network.  This is performed in near-real time and for all endpoints of the BorgPress application.
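One plausible reading of the four verdicts is sketched below: a flow is judged both against what the policy intends and against what the network actually did with it. The exact semantics Tetration assigns to these terms may differ; this is an illustration of the idea, with made-up tier names.

```python
# Toy flow classifier: compare a flow with the intended policy and with
# the observed outcome on the network.
policy_allows = {("web", "db", 3306)}  # the only intended permit

def classify(flow):
    intended = (flow["src"], flow["dst"], flow["port"]) in policy_allows
    delivered = flow["delivered"]
    if intended and delivered:
        return "permitted"    # matches intent, network delivered it
    if not intended and not delivered:
        return "rejected"     # intent says deny, network dropped it
    if delivered:
        return "escaped"      # got through, but policy says it shouldn't
    return "misdropped"       # should have been delivered, but was dropped

assert classify({"src": "web", "dst": "db", "port": 3306, "delivered": True}) == "permitted"
assert classify({"src": "web", "dst": "db", "port": 22,   "delivered": True}) == "escaped"
assert classify({"src": "web", "dst": "db", "port": 3306, "delivered": False}) == "misdropped"
```

It is the "escaped" and "misdropped" cases that make pre-enforcement analysis valuable: they reveal mismatches between intent and reality before any rule is actually pushed.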


It’s important to point out that up to this point there is no actual enforcement of policies.  Traffic classification is just a record of what occurred on the network as it relates to the intentions of the policy you would like to enforce.  This allows you to be certain that the rules you ultimately enforce will work as intended.  Through a single click of a button, Cisco Tetration can provide holistic enforcement for BorgPress across both virtual and containerized workloads.


Not every rule needs to be implemented on every endpoint.  Once “Enforce Policies” is enabled, each endpoint, through a secure channel to the agent, receives only its required set of rules.  The agent leverages the native firewall on the host OS (iptables on Linux or Windows Firewall) to translate and implement policies.
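The per-endpoint translation step can be sketched as rendering abstract allow policies into iptables-style rule strings. This is a simplified illustration of the concept; the agent’s real output format, chain layout, and rule ordering will differ, and all addresses are made up.

```python
# Hedged sketch: render an endpoint's slice of the abstract policy into
# iptables-style strings, ending with the zero-trust catch-all drop.
def render_rules(endpoint_ip, policies):
    rules = []
    for p in policies:
        if p["provider_ip"] == endpoint_ip:  # only rules this host serves
            rules.append(
                f"-A INPUT -s {p['consumer_ip']} -p tcp "
                f"--dport {p['port']} -j ACCEPT")
    rules.append("-A INPUT -j DROP")  # deny-all default
    return rules

# One abstract policy: the HAProxy consumer may reach the db provider on 3306.
policies = [{"consumer_ip": "10.0.0.7", "provider_ip": "10.0.0.9", "port": 3306}]

rules = render_rules("10.0.0.9", policies)
assert rules == [
    "-A INPUT -s 10.0.0.7 -p tcp --dport 3306 -j ACCEPT",
    "-A INPUT -j DROP",
]
```

An endpoint that provides nothing still receives the final drop rule, which is exactly what a default-deny, zero-trust posture requires.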

The set of rules can be viewed from within the Cisco Tetration user interface or directly from the endpoint.  In the example below, the rules received and enforced for the BorgPress database endpoint db-mysql01, a virtual machine, are shown.  The rules match exactly the policy built inside the application workspace and are translated into the correct IPs on the endpoint using iptables.


Now that we’ve seen the rules enforced in a virtual machine for BorgPress, let’s look at how enforcement is done on containers.  Enforcement for containers happens at the container namespace level. Since BorgPress is a Kubernetes deployment, enforcement happens at the pod level.  BorgPress has three wordpress pods running in the default namespace.


Just as with virtual machines, we can view the enforcement rules either in the Cisco Tetration user interface or on the endpoint itself.  In the example below, the user interface is showing the host profile of one of the Kubernetes worker nodes, k8s-node02.  With container support, a new tab next to the Enforcement tab (“Container Enforcement”) shows the list of rules enforced on each pod.


At this point all endpoints, both virtual and container, have the necessary enforcement rules, and BorgPress is almost deployed with a zero-trust security model. Earlier I discussed the use of a type of Kubernetes service object called a NodePort. Its purpose is to expose the BorgPress WordPress service to external (outside the cluster) users. As the logical application flow diagram illustrates, the Web-HAProxy receives incoming client requests and load-balances them to the NodePort that every Kubernetes worker node listens on. Since the NodePort is a dynamically assigned high-numbered port, it can change over time. This presents a problem. To make sure the Web-HAProxy always has the correct rule to allow outgoing traffic to the NodePort, Cisco Tetration learns about the NodePort through the external orchestrator. When policy is pushed to the Web-HAProxy, Cisco Tetration also pushes the correct rule to allow traffic to the NodePort. As you may have noticed in the application workspace image earlier, there is no policy definition or rule for the NodePort 30000 to allow communication from Web-HAProxy to BP-Web-Tier. However, looking at the iptables of Web-HAProxy (see figure below), you can see that Cisco Tetration correctly added a rule to allow outgoing traffic to port 30000.
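A rough sketch of that NodePort handling: the orchestrator feed reports the service’s current NodePort, and the egress rule pushed to Web-HAProxy is regenerated from it. The function name, data shape, and subnet here are illustrative assumptions, not the actual Tetration feed format.

```python
# Hypothetical sketch: regenerating Web-HAProxy's outbound allow rule
# from the NodePort learned via the orchestrator. Names and the data
# shape are invented for illustration.

def egress_rule_for_nodeport(service, node_cidr):
    """Build the outbound allow rule for a service's current NodePort."""
    port = service["node_port"]  # dynamically assigned (30000-32767 range)
    return ("iptables -A OUTPUT -p tcp -d {cidr} --dport {port} -j ACCEPT"
            .format(cidr=node_cidr, port=port))

# The wordpress service as it might be learned from the Kubernetes API.
wordpress_svc = {"name": "wordpress", "node_port": 30000}
print(egress_rule_for_nodeport(wordpress_svc, "10.0.2.0/24"))

# If Kubernetes later reassigns the NodePort, rerunning with the new
# value yields the updated rule -- no manual policy change is needed.
```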


The Importance of an Information Security Strategy in Mergers and Acquisitions


Organizations have many options when it comes to growing. Some expand by hiring additional staff. Others grow through mergers and acquisitions with related companies, or with companies that represent an entryway into a desired new vertical or territory. Organizations that engage in M&A should include an information security strategy as part of the process.

Headlines in 2018 include several data breaches where the acquired company led to an incident for the acquirer. A large travel site reported a data breach of information on 880,000 payment cards in March of 2018. The attack was believed to have compromised systems months earlier. The investigation determined that the incident was potentially linked to legacy IT systems from an acquired company. Failure to update or integrate these systems left the parent company potentially vulnerable.

A Baltimore-based apparel manufacturer reported a data breach affecting customers who use the company’s sports tracking app: 150 million customer records associated with the app were compromised, including usernames, passwords, and email addresses. The app’s creator had been acquired by the parent company in 2015.

Companies with an acquisition strategy need to include information security in the M&A process. Many security tools can be leveraged to provide visibility into an organization’s network, users and information. These visibility tools should be used to determine the accessibility of information to both appropriate personnel and unauthorized parties. Understanding the vulnerabilities, network segmentation, access to assets and information, and asset lifecycle management are important negotiation metrics.

The acquiring company should be able to run visibility or vulnerability assessments of the target company as part of the negotiation. Vulnerability scanners help gather risk data. NetFlow and network traffic metadata tools provide visibility into the scope and nature of an organization’s traffic. This can help an organization identify and inventory assets. Visibility into web traffic, DNS queries, and applications in use all contribute to a view of an organization.

Vulnerable software report from AMP for Endpoints

These tools can help to establish where the target company stands in terms of risk mitigation and security posture. They can tell the acquiring company how many man-hours will be needed to bring the target company to an appropriate level of risk. An intelligent organization’s leadership understands that security is essential to all parts of the network. Proactive planning for growth and development must also be part of that security strategy.

Incident Response teams often use security tools to provide visibility into an organization following a data breach. These same tools can provide visibility into a target company’s information systems and networks. Use of these tools in advance of an acquisition can provide insight into the projects, security awareness training and even culture change necessary to understand the role of security in modern IT. Implementation of non-disclosure agreements can protect both the acquiring company and the target from leaks due to any gaps in the organization’s security posture.

Legacy systems have led to organizations appearing in the headlines. The brand damage, class action lawsuit payouts, data breach notifications and payment for services such as identity theft are all avoidable. Introducing and executing on a strong information security strategy as part of the M&A process is one way for organizations to minimize risk exposure and to understand the challenges and steps to achieving their desired security posture.

Leaders in organizations are accountable for the risk and exposure of users, information, and networks. Visibility into these facets of an organization is key to ongoing security and to informed expansion, including mergers and acquisitions. The call to action for these organizational leaders focuses on that visibility. Research visibility, traffic profiling, application discovery, and vulnerability tools. Speak with the organization’s trusted advisors, both internal and external, about the tools available and their recommendations. Regularly speak with the organization’s business leaders about emerging markets and potential mergers. Create and maintain an open dialogue about the potential risks and exposures that come with M&A. Many business leaders understand the importance of security in day-to-day operations. Including potential future business expansion in that conversation will help to craft a strategic information security policy.

Sunday, 20 May 2018

How Cloud Native and Container Platforms Change the Way We Think about Networking

Networking has been a foundational component of our economy since the early days of the Internet. In those days, defining protocols and standards for how to connect, route, and interoperate local, metro, and wide area networks was critical to business strategy. The computer networking experts, with their TCP/IP, computer-centric view of the stack, went head to head with the telecommunications giants and their more traditional telephony-driven model of switching, FCAPS (Fault, Configuration, Accounting, Performance, and Security management), and the OSI model. As is often the case in these debates, both sides had good architecture and design principles, and in the end, while TCP/IP won the war, very important concepts were adopted into the network model to account for Quality of Service (QoS), traffic engineering, segmentation of network traffic into control and data planes, and hierarchy of the network. Simply stated, a flat network with no segmentation or hierarchy, from the network stack on the computer through the Internet, would cause fault, configuration, performance, and security issues. This was largely understood by all in the industry and IT.


TCP/IP Model Versus OSI Model


In the past several years, as the need for agility and for driving down time to market has grown, there has been a major mind shift that I have noticed in the largest of companies. Networking, with all its complexity in configuration, routing, and segmentation, was causing major delays in delivering the true speed and agility required by the business. There have been several attempts to address these delays using technologies like VPN, NAT, and predefined network blocks per deployment region. These solutions are static in nature, and given the speed at which technologies mature and innovation happens, they proved very limiting. In the industry, we have looked at automating and orchestrating network parameters with an API and called this Software Defined Networking (SDN). SDN is definitely a step in the right direction; however, it is geared toward network administrators, not business owners or developers. Wikipedia has a good explanation of SDN and this simple diagram:


SDN Overview


While SDN is a great step forward in providing programmable APIs to the network, it is far too complex for the business owner or developer to program against (it is neither in a developer’s language nor written in a developer’s model), and most importantly, it does not solve the issues of complexity and ease of configuration in a rapidly changing, software-centric world.
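To illustrate the gap, consider what even a single “allow web traffic” intent looks like when expressed through a device-centric SDN API. The controller schema and field names below are hypothetical, but they are representative of the match/action flow-rule payloads that controllers typically consume.

```python
# A hedged illustration of why raw SDN APIs feel low-level to a
# developer: one simple intent becomes a verbose, device-centric
# flow-rule payload. The schema here is hypothetical.
import json

def build_flow_rule(switch_id, src, dst, port):
    """Construct a flow-rule payload a controller REST API might expect."""
    return {
        "switch": switch_id,
        "match": {"ipv4_src": src, "ipv4_dst": dst,
                  "tcp_dst": port, "eth_type": "0x0800"},
        "actions": [{"type": "OUTPUT", "port": "NORMAL"}],
        "priority": 100,
    }

rule = build_flow_rule("00:00:00:00:00:01", "10.0.1.5", "10.0.2.8", 443)
print(json.dumps(rule, indent=2))

# The caller had to know switch IDs, EtherTypes, and match-field names,
# none of which express the business intent "the web can reach the app".
```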

As cloud computing use increased in the public cloud, the abstraction of the underlying network became an important driver of adoption. No longer is defining or understanding the network important. As the industry moves to containers, the desire to simplify and flatten the network is rapidly becoming the new standard for cloud native and container networking, orchestration, and microservices architecture.

While this may appear to be the right direction, taking the path of least resistance, the network matters more today than it ever has before. Why, you ask? Let’s look at the traditional application architecture.


N-Tiered Application Model


In this model, you have the following built-in networking parameters:

◈ Presentation – on an isolated network with NAT/PAT, a firewall, and logical separation of traffic onto a VLAN dedicated to web traffic. The control (routing) traffic is on a separate network from the web (data) traffic.
◈ Logic – on a separate network, isolated behind a firewall on a different VLAN than the web traffic. The control (routing) traffic is on a separate network from the application logic (data) traffic.
◈ Database – on a separate isolated network behind a firewall on a different VLAN than the web and application logic traffic. The control (routing) traffic is on a separate network from the database (data) traffic.
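The segmentation those three bullets describe can be sketched as an allow-matrix: each tier sits behind a firewall, and only adjacent tiers may talk. The tier names and allow-list below are assumptions chosen to mirror the bullets, not a specific product configuration.

```python
# A minimal sketch of tiered segmentation: only adjacent tiers may
# communicate, which is what the per-tier firewalls and VLANs enforce.

ALLOWED_FLOWS = {
    ("internet", "presentation"),   # clients reach the web tier
    ("presentation", "logic"),      # web tier calls application logic
    ("logic", "database"),          # logic tier queries the database
}

def flow_permitted(src_tier, dst_tier):
    """Return True only for flows the tiered firewalls would pass."""
    return (src_tier, dst_tier) in ALLOWED_FLOWS

assert flow_permitted("presentation", "logic")
assert not flow_permitted("internet", "database")  # no tier skipping
assert not flow_permitted("database", "presentation")
```

Every hop that is not explicitly on the list is dropped, which is exactly the property a flat, unsegmented network gives up.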

Now let’s compare this to the cloud native architecture:


Cloud Native Application Model


In this new architecture, all traffic is on a common network with no isolation and no segmentation of control and data traffic, and it leverages the Linux kernel networking stack (why that is a very bad idea if you care about performance and scale is another blog). With the move to APIs and REST interfaces, there is an added level of very chatty API traffic running over that very same network. All the complexity of the application architecture is handled by the network, which makes the network more critical today than ever before.

Now, I’m not saying it’s all about the network. As with all things in life, balance and focusing on what matters most is the best path to take (although it’s the path less traveled). What I propose is that the intelligence can be built into the network. I like the pets-versus-cattle analogy that everyone uses for cloud native. My network analogy is that cattle need isolation, direction, and fences! How can we as an industry move to a more agile cloud native architecture and still corral the cattle? The answer comes from looking at this from two separate but equally important perspectives.

From a top-down (application developer) perspective, the network requirements need to be represented as business intent, with constraints that the business understands like latency, priority, security, and performance. If we enable a simple definition that is focused on the application’s business objectives, it will be easy for the business to define what it cares about.
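As a thought experiment, a developer-facing intent definition might look something like the following. The schema, field names, and values are entirely invented for illustration; the point is only that the vocabulary is latency, priority, and security rather than VLANs and flow rules.

```yaml
# Hypothetical intent definition -- the schema is invented to show what
# "business intent" could look like to a developer.
application: BorgPress
intents:
  - name: checkout-latency
    between: [web-tier, db-tier]
    latency: "<5ms"        # constraint the business understands
    priority: high
  - name: data-isolation
    between: [db-tier, "*"]
    security: deny-by-default
```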

From a bottom-up (network administrator) perspective, network administrators understand how to address business objectives and can easily, programmatically define network and network-security policies to meet the requirements. This will require extending SDN capabilities to understand application policy, and network specifications will need to be created to support cloud native architectures, but these patterns are well known in the networking world today. The next step will be to use data generated by the application, services, and components to enable analytics that address performance, security, reliability, and latency issues in real or near-real time.