Thursday 7 May 2020

What’s new and exciting on Cisco ACI with Red Hat Ansible Collections

Introduction


As customers embrace the DevOps model to accelerate application deployment and achieve higher efficiency in operating their data centers, the infrastructure needs to change and respond faster than ever to business needs. DevOps can help you achieve an agile operational model by improving automation, innovation, and consistency. In this blog, let's take a quick journey through how Red Hat Ansible and Cisco ACI help you address these challenges quickly and proficiently.

Ansible and Cisco ACI – The perfect pair that enables a true DevOps model


In many customer IT environments, network operations remain entrenched in error-prone manual processes. Many of the earlier generation of engineers attracted to network operations didn’t want to be programmers; they were more interested in implementing and maintaining network policies using the CLI and monolithic tools on proprietary platforms. More recently, server-side and DevOps best practices have started influencing the networking world, with cloud administrators expected to support both compute and network resources. However, in many cases, entirely moving away from traditional network operations may not be possible, just as a 100% DevOps strategy may not be a good fit. The best strategy achieves the most with the least amount of change or energy, and automation is the natural solution: the most unproductive and repetitive tasks are the ideal candidates for it.

Red Hat Ansible has fast emerged as one of the most popular platforms to automate these day-to-day manual tasks and bring unprecedented cost savings and operational efficiency. Cisco ACI’s Application Policy Infrastructure Controller (APIC) supports a robust and open API that Ansible can seamlessly leverage. Ansible is open source, works with many different operating systems that run on Cisco Networking platforms (ACI, IOS, NX-OS, IOS-XR), and supports the range of ACI offerings.

Together, Cisco ACI and Ansible provide a perfect combination enabling customers to embrace the DevOps model and accelerate ACI Deployment, Monitoring, day-to-day management, and more.

Cisco ACI – Red Hat Ansible solution


Ansible is unique in the market today in addressing network automation challenges: it unifies configuration, provisioning, and application deployment, creating favorable business outcomes like accelerated DevOps and a simplified IT environment.

Ansible brings lots of synergies to an ACI environment with its simple automation language; powerful features such as app deployment, configuration management, and workflow orchestration; and above all an agentless architecture that makes the execution environment predictable and secure.

In the latest Ansible release (2.9), there are over 100 ACI and Multi-Site modules in Ansible core. These include modules for specific objects, like Tenants and Application Profiles, as well as a module for interacting directly with the ACI REST API. This means that a broad set of ACI functionality is available as soon as you install Ansible. After installing Ansible, only two things are required to start automating an ACI network fabric: first, an Ansible playbook, which is a set of automation instructions; and second, an inventory file, which lists the devices to be automated – in this case an APIC. Playbooks are written in YAML and define the tasks to execute against an ACI fabric. Here is a sample ACI playbook that configures a Tenant on an APIC.

---
- name: ACI Tenant Management
  hosts: aci
  connection: local
  gather_facts: no
  tasks:
  - name: CONFIGURE TENANT
    aci_tenant:
      hostname: "{{ hostname }}"
      username: admin
      password: adminpass
      validate_certs: false
      tenant: "{{ tenant_name }}"
      description: "{{ tenant_name }} created using Ansible"
      state: present
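
The playbook above targets a host group named aci. For completeness, here is a minimal sketch of what a matching YAML inventory might look like; the APIC address and tenant name are placeholders rather than values from the original example:

aci:
  hosts:
    apic1:
      # Placeholder APIC management address and tenant name
      hostname: "10.0.0.1"
      tenant_name: "demo_tenant"

With both files saved, the playbook would be run with something like ansible-playbook -i inventory.yaml tenant.yaml.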

How Does Ansible-ACI Integration Work?


The picture below shows users creating inventory files (for the APICs we want Ansible to manage), creating playbooks (the tasks we want to run or automate on the target systems – the APICs), and leveraging the available ACI modules for the tasks they want to configure or automate. Ansible then pushes those configuration tasks to the target system, the APIC, via the APIC REST API over HTTPS.

[Figure: Ansible and ACI integration workflow]
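
For anything not covered by an object-specific module, the aci_rest module mentioned earlier can post to the API directly. Here is a hedged sketch of a task that creates a Tenant through a raw REST call; the path and payload follow the ACI object model, but the tenant name is a placeholder:

- name: CONFIGURE TENANT VIA RAW REST CALL
  aci_rest:
    hostname: "{{ hostname }}"
    username: admin
    password: adminpass
    validate_certs: false
    path: /api/mo/uni.json
    method: post
    content:
      fvTenant:
        attributes:
          name: "demo_tenant"  # placeholder tenant name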

The ACI Ansible modules help cover a broad set of data center use cases. These include:

◉ Day 0 – Initial installation and deployment – Configuration of universal entities and policies, for example switch registration, naming, user configuration and firmware update.

◉ Day 1 – Configuration and Operation – Initial Tenant creation, along with all the Tenant child configurations, for example VRF, AP, BDs, EPGs, etc.

◉ Day 2 – Additional Configuration and Optimization – Add/Update/Remove Policies, Tenants, Applications, for example add a contract to support a new protocol in an existing EPG.

Key Benefits of ACI-Ansible solution


◉ Enables Admins to align on a unified approach to managing ACI the same way they manage other Data Center and Cloud infrastructure.

◉ ACI Ansible modules provide broad coverage for many ACI objects

◉ ACI Ansible modules are idempotent, ensuring that re-running a playbook always produces the same result.

◉ ACI Ansible modules extend the trusted secure interaction of the ACI CLI and GUI.

◉ No programming skills are required to use the Ansible modules.

Wednesday 6 May 2020

Expanding the Internet for the Future Supporting First Responders and Society at Large

As social distancing measures continue, daily necessities such as maintaining a livelihood, accessing education, or obtaining critical services are being forced online. My wife and I are seeing this unfold personally as we work from home and attempt to help our 7- and 13-year-old navigate distance learning.

In our “new normal,” our consumption of online services is growing. Internet access is becoming increasingly vital to our health, safety, economic, and societal survival. And it’s not just us. Heroes and first responders, hospitals, schools, governments, workers, businesses, and our society-at-large are relying on the internet more than ever.

The more our society remains apart, the more we all need to be connected.

Service Providers Play an Important Role


With more people working from home, more children distance learning, and more parents seeking to keep their families entertained, global internet traffic has reached a new threshold. At Cisco, we’re seeing this firsthand.

Following stay-at-home mandates, traffic at major public peering exchanges increased 24% in Asia-Pacific, 20% in Europe, and 18.5% in the Americas. Here is a more specific breakdown by country:

[Figure: increase in traffic at public peering exchanges, by country]

Our service provider customers and partners have been doing a great job managing the spikes in network traffic and load-balancing the shift in ‘peak’ online hours accordingly. They are vital to helping people stay safe and healthy, keeping them connected to their families, providing them access to important services, and supporting their jobs and education.

Service Provider Roundtable


Earlier this week, I hosted a virtual press and industry analyst roundtable with some leading providers of connectivity, social networking, and telehealth services.  The panel included:

◉ Jason Porter, SVP, AT&T FirstNet

◉ Kevin Hart, EVP/ Chief Product and Technology Officer, Cox Communications

◉ Dan Rabinovitsj, VP Connectivity, Facebook

◉ Andrés Irlando, SVP/President, Public Sector and Verizon Connect at Verizon

◉ Todd Leach, VP/CIO, University of Texas Medical Branch at Galveston

◉ Mike King, MS, CHCIO, Director, University of Texas Medical Branch at Galveston

During the one-hour event, we explored how these big companies are supporting healthcare providers and first responders during this global pandemic. We also talked about critical infrastructure and how it’s driving changes in telehealth developed by the University of Texas Medical Branch at Galveston. Here are a few highlights from our panelists as they shared what’s happening on their networks:

Todd Leach, University of Texas Medical Branch at Galveston: “We were dealing with critical patients while caring for the rest of the population. We had to scramble pretty quickly to transition over to telehealth. I can’t imagine what we would have done without having this technology.”

Kevin Hart, Cox: “Over the last two months, we’ve had a 15%-20% increase in traffic to our downstream network, and a 35%-40% increase in our upstream traffic… The peak usage window has moved from 9:00 p.m. on weekends to 2:00 – 3:00 p.m. during the weekday.”

Dan Rabinovitsj, Facebook: “People use our platform to stay connected. Messaging on all of our platforms is up 50%. In some of our markets, we’ve seen 1000% increases in video calling, video messaging—unprecedented usage.”

Jason Porter, AT&T FirstNet: “COVID was the perfect test case for our response, and we proved a nation-wide public/private network was there for first-responders the whole way.”

Andres Irlando, Verizon Connect at Verizon: “It’s the first time we activated our Verizon emergency response team across the country, everything from mobile testing sites, to pop-up hospitals, emergency operations centers, quarantine sites… you name it. By and large, the macro network has performed very well during this crisis.”

Digital Divide


As the importance of the internet shifts from huge to massive, the pandemic is shining a spotlight on the realities of the digital divide—we’re seeing large gaps between developed and developing countries, as well as urban and rural areas, for example.

Despite the growing transition to digital and remote services, 3.8 billion people around the world remain unconnected and underserved, lacking critical access to information, healthcare, and education.

At Cisco, we believe connectivity is critical to create a society and economy in which all citizens can participate and thrive.

◉ Only 35% of the population in developing countries has internet access, versus 80% in advanced economies.

◉ Bringing the internet to those currently without it would lift 500 million people out of poverty and add $6.7 trillion to the global economy.

◉ Approximately 23% of adults internationally do not know how to use the internet.

In these challenging times, the internet is more critical than ever. Businesses, governments, and institutions realize the need to invest in the networks connecting them to their customers, constituents, patients, and students. For some, that may require increased funding, government incentives, and cooperation across industries.

As we discussed on the panel, we all believe it will take the work of new and ongoing partnerships with strong commitment to make the internet more ubiquitous. As Dan at Facebook said, “No one company can do this alone.” And as Todd at UTMB put it best, “Just because it is hard, doesn’t mean we shouldn’t do it.” We are all in.

Source: cisco.com

Tuesday 5 May 2020

Cisco’s AI/ML can make your Wi-Fi 6 upgrade a success

Upgrading to Wi-Fi 6 is not just about replacing your oldest access points. The true value proposition is in locating areas where specific Wi-Fi 6 features will improve the network performance and user experience. The AI/ML capabilities in Cisco DNA Center can help you find these upgrade opportunities.

As you sit at home reading this, you could be analyzing your campus wireless network for the areas where Wi-Fi 6 can add the most bang for your buck. Wi-Fi 6 has new features that resolve what used to be insurmountable problem areas in a wireless network, and your Cisco DNA Center Assurance dashboard has AI/ML features that can help you find those areas!

The first step is to understand these new Wi-Fi 6 features and the wireless challenges that they resolve:

Poor performance in highly congested areas: OFDMA in Wi-Fi 6 allows multiple clients to transmit simultaneously in order to increase capacity in highly congested areas.

Poor uplink performance on mobile devices: Uplink sub-channelization in Wi-Fi 6 gives mobile devices greater radio transmit power without consuming more battery power. The result is better Wi-Fi performance in challenging conditions.

High radio interference: The Wi-Fi 6 OFDMA uplink map creates a synchronization that leads to less interference between clients and between access points. Additionally, OFDMA allows clients to transmit on small channels at greater power, making them much less susceptible to interference from other wireless devices.

The IoT small packet problem: IT teams with large concentrations of IoT devices (manufacturing, process control, video surveillance, etc.) are very familiar with the packet processing bottleneck that access points can become. Modern Wi-Fi 6 chipsets solve this with powerful quad-core 2.2GHz processors that can process three times more packets than most 802.11ac access points and twelve times more than most 802.11n access points. This processing power, combined with a well-designed access point data-forwarding mechanism, has the potential to eliminate most of the issues you used to have supporting IoT devices.

Now let’s look at how you can use the AI/ML in Cisco DNA Center to quickly locate areas in your campus network that fit these challenging conditions.

[Figure: AI/ML Trends and Insights comparing channel utilization across access points]

Congested areas


Any simple network management system with wireless heat maps can show you areas of high congestion. But even older 802.11ac/Wi-Fi 5 access points (with multi-user MIMO) can handle most congested areas quite well. To get the best bang for our Wi-Fi buck, we only want to upgrade those areas where congestion is affecting performance and user experience. The Assurance section in Cisco DNA Center has an area called “Trends and Insights” where you can use AI/ML to compare just about anything on your campus network. You can compare the wireless performance in your buildings, between floors, or even compare every single access point on campus. The graphic above shows channel utilization of 2,216 access points from greatest to lowest. The access points in dark red are using very high percentages of the wireless medium to keep up with demand. You can then view the packet failure rate on those highly utilized access points. This will quickly tell you which access points have (1) high utilization AND (2) high retransmission rates. Upgrading these access points to Wi-Fi 6 is a good investment. Note that, depending on when you are reading this, you may want to go back in time a few months to when your campus wireless network traffic was normal. February is a good month because it is after the winter holiday and before spring break.

Areas where mobile devices struggle  


In order to minimize battery consumption, mobile device Wi-Fi radios transmit at much lower power (15mW typical) than the transmit power for access points (100mW or more). Because of this, mobile devices often struggle to send data (uplink) even though the mobile device Wi-Fi signal strength indicator shows full power. This happens because the mobile device measures how it is receiving signal from the access point (downlink). This problem is often worse in certain areas of the campus because building materials vary and things like concrete and metal exacerbate this uplink weakness.  OFDMA in Wi-Fi 6 allows a mobile device to concentrate its transmission (the uplink) on a smaller radio channel for higher power. If that didn’t make sense, imagine how the nozzle on your garden hose concentrates the flow of water to give it more power. The result for Wi-Fi 6 is the ability of a low power device to transmit with much greater uplink signal quality, which can help penetrate (or bounce around) heavy walls and other obstacles. So how can you detect areas on campus where Wi-Fi clients are experiencing low-quality uplink?

Go back to the AI/ML Trends and Insights and compare average client RSSI (Received Signal Strength Indicator) across all access points on your campus. This will tell you how each access point is receiving signal from the wireless clients. Access points with low averages should be selected for a Wi-Fi 6 upgrade.

Areas of high interference


Interference is a difficult problem to diagnose in wireless networks because the symptoms of interference can vary. Users can experience long onboarding times, slow app performance, and difficulty connecting to the cloud. The good news is that the AI Network Analytics feature in Cisco DNA Center will automatically identify interference and alert you on the “Top 10 Issues” window, right on the front page of the dashboard.

So, if you have seen these alerts on your home screen, it would be a good idea to see if Wi-Fi 6 can help mitigate this interference. If you go to the AI/ML “Trends and Insights” menu you can sort access points based on levels of interference. This can give you a list of your worst offenders. Click on one of the access points and look for the “Intelligent Capture” tool at the top of the window. This tool uses your network access points to perform complex packet, frame, and spectrum analyses.

Inside the Intelligent Capture window, click on spectrum analysis and watch as the software begins to monitor the wireless traffic for interference severity and duty cycle. The waves show you the channels where the interference is located and how it is affecting the duty cycle of that particular access point. This is a very comprehensive test that will scan all of the available wireless channels with traffic from your actual network at that location.

Intelligent Capture lets you drill down on this and identify the percentage of channel utilization for this access point, other access points, and even non-Wi-Fi interference. The image to the right is a screen capture from the output of a spectrum analysis at 2.4 GHz (I cut the screen to be able to enlarge the image). Channels 1 and 2 have high levels of interference but channels 3 and 4 do not. If you find that interference is limited to one or two of the Wi-Fi channels, you can configure your access point to operate outside of those channels. However, if the interference runs across all channels, you have a great candidate for a Wi-Fi 6 upgrade. The OFDMA synchronization in Wi-Fi 6 will greatly minimize any self-interference (interference between your own network devices and access points), and your Wi-Fi 6 clients will be able to transmit on a narrower, more powerful radio channel, giving them added robustness against internal or external interference.

The IoT small packet problem


IT teams that operate networks for manufacturing, process control, mining, and digital cities are quite familiar with the IoT small packet problem. It has long been a thorn in the side of Wi-Fi networks used for machine-to-machine (M2M) connectivity and video surveillance. The issue is that these types of communication use small payloads of data at high frequency. Most forms of M2M encapsulate their data in 64-byte UDP packets, while most normal IP file transfers use larger 1,500-byte packets. A Wi-Fi access point is limited in the number of packets per second (PPS) that the embedded chipset can process. Imagine a Wi-Fi chipset capable of processing 30,000 PPS. For normal 1,500-byte data packets, this device is capable of transferring 360 Mbps (30,000*1500*8). But for 64-byte packets, the maximum throughput drops to only 45 Mbps. More importantly, 20 Mbps of M2M data can take almost half of your access point’s capacity!
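
To see why, here is a quick back-of-the-envelope sketch in Python. The 190-byte figure is my own assumption to account for per-packet header and framing overhead on a 64-byte payload; it is not a number from the original analysis, but it shows how a ceiling in the ~45 Mbps range arises:

def throughput_mbps(pps: int, bytes_per_packet: int) -> float:
    """Throughput = packet rate x packet size x 8 bits, in Mbps."""
    return pps * bytes_per_packet * 8 / 1_000_000

PPS_BUDGET = 30_000  # chipset packet-processing budget from the example above

print(throughput_mbps(PPS_BUDGET, 1500))  # 360.0  -> large file transfers
print(throughput_mbps(PPS_BUDGET, 64))    # 15.36  -> bare 64-byte payloads
print(throughput_mbps(PPS_BUDGET, 190))   # 45.6   -> payload plus assumed overhead

Either way, the PPS budget, not the nominal data rate, becomes the bottleneck once packets shrink.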

[Figures: 2.4 GHz spectrum analysis capture; Intelligent Capture frame counts and frame errors window]

To find small packet problem areas in your campus network, begin by looking at the AI/ML “Trends and Insights” menu and sort access points based on “Traffic.” This will single out the busiest access points based on packet transfers. Like before, use the Intelligent Capture feature, but this time look at the frame counts and frame errors window. Any access points with lots of traffic, high frame counts, and high frame errors are great candidates for a Wi-Fi 6 upgrade. In the past, Cisco has made many enhancements to overcome the limitations of typical Wi-Fi chipsets, like HDX and “Turbo Performance” in the Cisco Aironet 2700 and 3700 series access points for 802.11ac. This HDX technology, along with the quad-core processors now available in new Wi-Fi 6 chipsets, takes packet capacity to a whole new level, and you can see this in the Cisco Catalyst 9100 access points and Cisco Meraki Wi-Fi 6 access points.

My goal with this blog was to show you the power of AI/ML in Cisco DNA Center and how it can locate some of the less obvious, but more critical opportunities for upgrading to Wi-Fi 6. The material may be a bit more technical than most of our blogs here at Cisco, so please feel free to comment below with any questions you may have.

Cisco DNA Assurance and AI Network Analytics are included in the Cisco DNA Advantage software.

Monday 4 May 2020

Extending Effective Security without Adding Complexity

Security solutions often need to walk a very fine line. On one side, they must provide visibility and the capability of enforcing policy. On the other side, they cannot be so complex to administer, maintain and configure that they are not adopted or are set up in ways that are confusing or low value. At Cisco, we’ve intentionally designed, developed and acquired security solutions to be high value without being overly complex.

Security administrators are already overwhelmed with the sheer number of tools that they have. Organizations are moving to vendor consolidation, but still deploy many tools. The Cisco 2019 CISO Benchmark Study reports that 79% of those surveyed claimed it was “somewhat or very challenging to orchestrate alerts from multiple vendor products.”

Figure 1. The Security Effectiveness Gap

The adoption of multiple security solutions with incremental new capabilities but high degrees of complexity results in the security effectiveness gap. Organizations invest great amounts of time, money and effort for only marginal benefits. If a tool is difficult to install and only provides a small number of benefits, these investments can be costly.

Figure 2. Incremental Complexity with Exponential Benefits

Alternatively, solutions should be simple to deploy but offer expanded capabilities. The goal is for incremental complexity and exponential benefits. To this end, Cisco has made substantial investments in security solutions that organizations can deploy easily. Two examples of this are Umbrella and Duo.

Cisco Umbrella offers flexible, cloud-delivered security. It combines multiple security functions into one solution, so security teams can extend protection to devices, remote users, and distributed locations anywhere.

Duo is designed to verify the identity of all users with effective, strong authentication (two-factor authentication) before granting access to corporate applications and resources. It provides visibility into every device used to gain access to corporate applications, whether that device is corporate managed or not.

Both Umbrella and Duo can be deployed in minutes and provide visibility and protection for remote users, whether they are leveraging a VPN or not. The goal is to keep security simple while organizations and administrators handle a previously never seen set of challenges.

To help overcome these challenges, Cisco has enabled our trusted partners to manage trials of these solutions for customers around the world. Partners have tools in place to help customers initiate these trials, to extend them when necessary, and to help seamlessly move from trials to production. These skills and tools provide smooth management and transition, allowing customer administrators to focus on keeping the business running and productive.

The increase in work-from-home initiatives introduces new issues for administrators. Ensuring that employees can be productive is its own challenge. Organizations may have to open up corporate networks and assets in ways they never predicted. Cisco’s offerings provide the confidence to grant this access without sacrificing the visibility and control needed to secure those devices.

Sunday 3 May 2020

Cisco Secure Cloud Architecture for AWS

More and more customers are deploying workloads and applications in Amazon Web Service (AWS). AWS provides a flexible, reliable, secure, easy to use, scalable and high-performance environment for workloads and applications.

AWS recommends a three-tier architecture for web applications. These tiers are separated to perform various functions independently: a multilayer architecture for web applications has a presentation layer (web tier), an application layer (app tier), and a database layer (database tier). There is flexibility to make changes to each tier independent of another tier, and because the application requires scalability and availability, the three-tier architecture makes scalability and availability for each tier independent.

Figure 1: AWS three-tier architecture
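
To make the tier separation concrete, here is a minimal boto3 sketch of security group rules that only permit traffic to flow web tier to app tier to database tier. The group IDs and ports are placeholders for illustration, not values from the CVD:

import boto3

ec2 = boto3.client("ec2")

WEB_SG = "sg-0123456789abcdef0"  # web tier security group (placeholder)
APP_SG = "sg-0123456789abcdef1"  # app tier security group (placeholder)
DB_SG = "sg-0123456789abcdef2"   # db tier security group (placeholder)

def allow_tcp_from_group(group_id, port, source_group):
    """Permit TCP on `port` into `group_id` only from `source_group`."""
    ec2.authorize_security_group_ingress(
        GroupId=group_id,
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            "UserIdGroupPairs": [{"GroupId": source_group}],
        }],
    )

# The internet reaches only the web tier (HTTPS); each inner tier accepts
# traffic only from the tier directly in front of it.
ec2.authorize_security_group_ingress(
    GroupId=WEB_SG,
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
allow_tcp_from_group(APP_SG, 8080, WEB_SG)
allow_tcp_from_group(DB_SG, 3306, APP_SG)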

AWS has a shared responsibility model, i.e., customers are still responsible for protecting workloads, applications, and data. The above three-tiered architecture offers a scalable and highly available design. Each tier can scale in or scale out independently, but Cisco recommends using proper security controls for visibility, segmentation, and threat protection.

Figure 2: Key pillars of a successful security architecture

Cisco recommends protecting workload and application in AWS using a Cisco Validated Design (CVD) shown in Figure 3. All the components mentioned in this design have been verified and tested in the AWS cloud. This design brings together Cisco and AWS security controls to provide visibility, segmentation, and threat protection.

Visibility: Cisco Tetration, Cisco Stealthwatch Cloud, Cisco AMP for Endpoint, Cisco Threat Response, and AWS VPC flow logs.

Segmentation: Cisco Next-Generation Firewall, Cisco Adaptive Security Appliance, Cisco Tetration, Cisco Defense Orchestrator, AWS security group, AWS gateway, AWS VPC, and AWS subnets.

Threat Protection: Cisco Next-Generation Firewall (NGFWv), Cisco Tetration, Cisco AMP for Endpoints, Cisco Umbrella, Cisco Threat Response, AWS WAF, AWS Shield (DDoS – Basic or Advance), and Radware WAF/DDoS.

Another key pillar is Identity and Access Management (IAM): Cisco Duo and AWS IAM.

Figure 3: Cisco Validated Design for AWS three-tier architecture

Cisco security controls used in the validated design (Figure 3):

◉ Cisco Defense Orchestrator (CDO) – CDO can now manage AWS security groups, providing micro-segmentation capability by managing firewall policy directly on the workloads.

◉ Cisco Tetration (SaaS) – The Cisco Tetration agent on AWS instances forwards network flow and process information; this information is essential for gaining visibility and enforcing policy.

◉ Cisco Stealthwatch Cloud (SWC) – SWC consumes VPC flow logs, AWS CloudTrail, Amazon Inspector, AWS IAM, and more. SWC includes compliance-related observations and provides visibility into your AWS cloud infrastructure.

◉ Cisco Duo – Cisco Duo provides MFA service for the AWS console and applications running on the workloads.

◉ Cisco Umbrella – The Cisco Umbrella virtual appliance is available for AWS; using DHCP options, administrators can configure Cisco Umbrella as the primary DNS. The Cisco Umbrella cloud provides a way to configure and enforce DNS-layer security for workloads in the cloud.

◉ Cisco Adaptive Security Appliance Virtual (ASAv): Cisco ASAv provides a stateful firewall, network segmentation, and VPN capabilities in an AWS VPC.

◉ Cisco Next-Generation Firewall Virtual (NGFWv): Cisco NGFWv provides capabilities like stateful firewall, “application visibility and control”, next-generation IPS, URL-filtering, and network AMP in AWS.

◉ Cisco Threat Response (CTR): Cisco Threat Response has API-driven integration with Umbrella, AMP for Endpoints, and SWC (coming soon). Using this integration, the security ops team can gain visibility and perform threat hunting.
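
As an illustration of the Umbrella point above, assigning a custom DNS resolver to a VPC is done through a DHCP options set. Here is a hedged boto3 sketch; the resolver address and VPC ID are placeholders, not values from the validated design:

import boto3

ec2 = boto3.client("ec2")

# Create a DHCP options set whose DNS server is the Umbrella virtual
# appliance's address (10.0.1.53 is a placeholder).
opts = ec2.create_dhcp_options(
    DhcpConfigurations=[
        {"Key": "domain-name-servers", "Values": ["10.0.1.53"]},
    ]
)

# Associate the options set with the workload VPC so instances receive
# the Umbrella resolver as their primary DNS via DHCP.
ec2.associate_dhcp_options(
    DhcpOptionsId=opts["DhcpOptions"]["DhcpOptionsId"],
    VpcId="vpc-0123456789abcdef0",  # placeholder VPC ID
)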

AWS controls used in the Cisco Validated Design (Figure 3):

◉ AWS Security Groups (SG) – AWS security groups provide micro-segmentation capability by adding firewall rules directly on the instance virtual interface (elastic network interface – ENI).

◉ AWS Web Application Firewall (WAF) – AWS WAF protects against web exploits.

◉ AWS Shield (DDoS) – AWS Shield protects against DDoS.

◉ AWS Application Load Balancer (ALB) and Network Load Balancer (NLB) – AWS ALB and NLB provide load balancing for incoming traffic.

◉ AWS Route 53 – AWS Route 53 provides DNS-based load balancing and is used to load balance RA-VPN (SSL) across multiple firewalls in a VPC.

Radware controls used in the Cisco Validated Design (Figure 3):

◉ Radware (WAF and DDoS): Radware provides WAF and DDoS capabilities as a service.

Cisco recommends enabling the following key capabilities on Cisco security controls. These controls not only provide unmatched visibility, segmentation and threat protection, but they also help in adhering to security compliance.

[Table: key capabilities Cisco recommends enabling on each Cisco security control]

In addition to the above Cisco security controls, Cisco recommends using the following native AWS security components to protect workloads and applications.

[Table: recommended native AWS security components]

Friday 1 May 2020

Creating More Agile and Upgradable Networks with a Controller-Based Architecture

All enterprises are in a constant state of digital flux, striving to keep existing business processes running efficiently while pushing to build new applications to satisfy customer and B2B requirements. Underlying these initiatives is the enterprise network, connecting employees and applications to the world of customers and business partners. From the data center to wired and wireless campus offices to distributed branch sites and cloud applications, the network unifies communications, collaboration, and commerce.

But the network too is in the throes of digital flux. The network infrastructure devices—switches, routers, wireless access points (APs)—are frequently in need of upgrades to add new capabilities and software fixes and apply security patches to protect against new threats. In other cases, where a business requires extremely high availability—such as a securities trading nexus—upgrades to the network infrastructure may be few and far between and only high-priority security patches are applied over the span of years. In addition, different divisions of the enterprise may require variants of the core network operating system (NOS) in order to fine-tune performance for different business operations, mandated uptime, and traffic types.

Keeping the constellations of routers, switches, and access points up to date with the latest versions and variants of the NOS and security patches is a monumental task for budget-constrained IT organizations. The traditional upgrade path is an extremely manual process: finding the correct golden version of the NOS image, downloading it, testing it on each platform, installing it on each component, and manually comparing the previous network status to the upgraded state—and potentially rolling back the upgrade in case of unexpected issues. One. Box. At. A. Time. The process is often so complex that IT dedicates months to evaluate and test an upgrade before deploying it. Meanwhile, business needs may go unmet as changes to the network are frozen, awaiting new capabilities from an upgrade.

As organizations seek to rapidly adopt new technologies and launch digital transformation projects—IoT, edge computing, mobile applications—and prepare for Wi-Fi 6 and 5G traffic increases, the network must be able to change frequently and on-demand to keep up with business needs. The old ways of manually managing software for thousands of network components simply will not suffice to keep the enterprise competitive.

In this post, we will take a much deeper dive into the benefits of controller-based networking.

What is a Controller-Based Network?


Cisco introduced the idea of Intent-Based Networking as a software-defined architecture that interprets business requirements—intents—and translates them into network actions such as segmentation, security policies, and device onboarding. Controllers are key to bringing purposeful intelligence into an Intent-Based Network. Controllers act as intermediaries between human operators specifying intents, and all the switches, routers, and access points that provide the required connectivity.

Controllers, such as Cisco DNA Center and Cisco vManage, translate intents into configurations and policies that are downloaded to network infrastructure devices—switches, routers, access points—that provide the connectivity to computers, mobile devices, and applications. Controllers provide visibility into the network by actively monitoring network nodes, analyzing telemetry, latency, Quality of Service levels, and error data in real time, and reporting statistics, alerts, and anomalies to IT managers. In turn, this provides insights into how the network has been functioning, along with its current state to ensure that the intents are being accomplished. Insights into network history and current states play a critical role in managing and automating the maintenance and upgrade processes.

Network Process Automation at Scale


With thousands of switches, routers, and APs requiring management, automating as many network processes as possible reduces the workload of IT as well as the chances of human error. In particular, controllers are key to automating enterprise-wide upgrade and patching processes at scale. Instead of upgrading individual switches and routers one at a time, an upgrade intent is set at the controller level that automates the entire process in stages (see the sketch after the following list):

◉ Controllers can run network-wide checks—available storage space, uptime criticality, version control—to ensure readiness before upgrading the image for each type and location of device.

◉ Controllers automatically search and download images from Cisco Cloud repositories based on feature sets and network device types currently in use.

◉ The correct golden images are automatically staged to each switch and router, eliminating the need for an operator to manually copy and monitor the progress one at a time.

◉ Based on the uptime criteria of each section of the network, the actual upgrades are scheduled for the most appropriate time and automatically started.

◉ Controllers perform a pre-check of network devices to catalogue current network operating statistics such as number of clients, number of ports in use, and traffic levels.

◉ During the post-check phase, controllers observe the impact of the upgrades on the network by comparing pre-check statistics with post-upgrade statistics to ensure that the network is operating as expected.

◉ IT can add customized lists of pre- and post-check items—such as ensuring applications (cloud and on-premise) are reachable and responding appropriately—to run before and after the upgrade.

◉ Should network operating parameters be negatively impacted, the controllers can automatically initiate a rollback of the update to the previous state.
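
Pulling those stages together, the control loop resembles the following Python sketch. Every helper function here is an illustrative stand-in for controller internals, not a real Cisco DNA Center or vManage API:

from typing import Dict, List

def run_readiness_checks(device: str) -> bool:
    # Stand-in: verify storage space, uptime criticality, version control.
    return True

def collect_stats(device: str) -> Dict[str, int]:
    # Stand-in: snapshot client counts, ports in use, traffic levels.
    return {"clients": 120, "ports_up": 44}

def degraded(before: Dict[str, int], after: Dict[str, int]) -> bool:
    # Flag the upgrade if any tracked metric dropped after the install.
    return any(after[key] < value for key, value in before.items())

def upgrade_site(devices: List[str], golden_image: str) -> None:
    for device in devices:
        if not run_readiness_checks(device):
            continue  # skip devices that are not ready for the new image
        print(f"staging {golden_image} on {device}")
        baseline = collect_stats(device)  # pre-check snapshot
        print(f"installing {golden_image} on {device} at its upgrade window")
        current = collect_stats(device)  # post-check snapshot
        if degraded(baseline, current):
            print(f"rolling back {device} to the previous image")

upgrade_site(["edge-sw-01", "edge-sw-02"], "golden-nos-17.3.bin")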

These general steps outline a series of programmable events that are ultimately driven by the intents of the organization filtered through IT and the controllers. Controllers provide the ability to change an organization’s structure and operations much faster. For example, an enterprise can react more quickly when preparing for a new application being rolled out to hundreds of branch sites and the network needs an upgrade to local branch routers to ensure application quality of experience. Automating the updates from a central management controller saves time, travel, and quickly prepares the branches for new business processes.

Automation at scale becomes even more important when dealing with the scope of IoT network infrastructure. With a geographically distributed network of nodes that connect thousands of IoT devices in the field and factory, being able to reach out from a central cloud management controller and apply segmentation rules and security policies to protect device connections is a very practical method of securing them. In cases like these, centralized controller automation eliminates thousands of hours of technician truck rolls.

Open Controller APIs Extend Programmability for Third-Party Automations


Automated updates to routers, switches, and access points are only one of the ways a controller-based network increases enterprise speed and agility. With an open set of APIs, controllers can communicate with higher-level applications and services such as Application-Aware Firewalls and Internet Protocol Address Management (IPAM) tools. To help automate managing the network’s health and support, IT can also employ APIs to program controllers to send network device analytics and events to an IT Service Management (ITSM) system. In turn, the ITSM can send commands and approvals to controllers to kick-off specific actions, such as times to schedule upgrades on the network. The two-way communication through APIs provides IT with a flexible method to minimize hands-on technical operations and free up valuable talent for other projects.
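
As a concrete taste of those APIs, the sketch below uses Python's requests library against Cisco DNA Center's documented token and device-inventory endpoints; the controller address and credentials are placeholders, and an ITSM integration would wrap calls like these:

import requests

DNAC = "https://dnac.example.com"  # placeholder controller address

# Request a short-lived authentication token.
token = requests.post(
    f"{DNAC}/dna/system/api/v1/auth/token",
    auth=("admin", "password"),  # placeholder credentials
    verify=False,  # lab convenience only; validate certificates in production
).json()["Token"]

# Retrieve the device inventory an ITSM might reconcile against.
devices = requests.get(
    f"{DNAC}/dna/intent/api/v1/network-device",
    headers={"X-Auth-Token": token},
    verify=False,
).json()["response"]

for device in devices:
    print(device["hostname"], device["softwareVersion"])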

Controllers Are Foundational for Building Intent-Based Networks


As the examples in this post show, controllers are the basis for implementing Intent-Based Networks that intelligently monitor, pinpoint abnormal operations, and proactively apply remedies to keep the network optimized for performance and security. They significantly reduce the hands-on manual labor traditionally required for upgrading and patching complex networks. Controllers automate pre-checks, post-checks, and rollbacks to ensure network continuity and protect against human error. Automation of these processes frees up IT talent to work on business transformation initiatives while making the network easier to change to meet new business needs.

Thursday 30 April 2020

Writing Production-ready Code; Nornir Edition

Let’s start with the least interesting topic first – documentation. You have to consider two aspects here:

1. Peripheral files: Files not part of the code itself, such as architecture diagrams or README files, are good places for newcomers to learn about your work. Be complete yet concise; describe how everything fits together as simply as you can. Invest 10 minutes to learn Markdown or reStructuredText if you don’t know them.

2. The code itself: Perhaps you’ve heard the term “self-documenting code”. Python makes this easy as many of its idioms and semantics read like plain English. Resist the urge to use overly clever or complex techniques where they aren’t necessary. Comment liberally, not just for others, but as a favor to your future self. Developers tend to forget how their code works a few weeks after it has been written (at least, I know I do)!

I think it is beyond dispute that static code analysis tools such as linters, security scanners, and code formatters are great additions to any code project. I don’t have strong opinions on precisely which tools are the best, but I’ve grown comfortable with the following options. All of them can be installed using pip:

1. pylint: Python linter that checks for syntax errors, styling issues, and minor security issues
2. bandit: Python security analyzer that reports vulnerabilities based on severity and confidence
3. black: Python formatter to keep source code consistent (spacing, quotes, continuations, etc.)
4. yamllint: YAML syntax formatter; similar to pylint but for configuration files

Sometimes you won’t find a public linter for the code you care about. Time permitting, write your own. Because the narc project consumes JSON files as input, I wrote a simple jsonlint.py script that just finds all JSON files, attempts to parse Python objects from them, and fails if any exceptions are raised. That’s it. I’m only trying to answer the question “Is the file formatted correctly?” I’d rather know right away instead of waiting for Nornir to crash later.

import json
import os
import sys

# Directory containing the JSON variable files (simplified here; the
# real script derives this from the project layout)
path = sys.argv[1] if len(sys.argv) > 1 else "."

failed = False
for varfile in os.listdir(path):
    if varfile.endswith(".json"):
        filepath = os.path.join(path, varfile)
        with open(filepath, "r") as handle:
            try:
                # Attempt to load the JSON data into Python objects
                json.load(handle)
            except json.decoder.JSONDecodeError as exc:
                # Print specific file and error condition, mark failure
                print(f"{filepath}: {exc}")
                failed = True

# If failure occurred, use rc=1 to signal an error
if failed:
    sys.exit(1)

These tools take little effort to deploy and have a very high “return on effort”. However, they are superficial in their test coverage and wholly insufficient by themselves. Most developers begin testing their code by first constructing unit tests. These test the smallest, atomic (indivisible) parts of a program, such as functions, methods, or classes. Like in electronics manufacturing, a component on a circuit board may be tested by measuring the voltage across two pins. This particular measurement is useless in the context of the board’s overall purpose, but is a critical component in a larger, complex system. The same concept is true for software projects.

It is conventional to contain all tests, unit or otherwise, in a tests/ directory parallel to the project’s source code. This keeps things organized and allows your code project and test structure to be designed differently. My jsonlint.py script lives here, along with several other files beginning with test_. This naming convention is common in Python projects to identify files containing tests. Popular Python testing tools/frameworks like pytest will automatically discover and execute them.

$ tree tests/
tests/
|-- data
| |-- cmd_checks.yaml
| `-- dummy_checks.yaml
|-- jsonlint.py
|-- test_get_cmd.py
`-- test_validation.py

Consider the test_get_cmd.py file first, which tests the get_cmd() function. This function takes in a dictionary representing an ASA rule to check and expands it into a packet-tracer command that the ASA will understand. Some people call this “unparsing” as it transforms structured data into plain text. This process is deterministic and easy to test; given any dictionary, we can predict what the command should be. In the data/ directory, I’ve defined a few YAML files which contain these test cases. I usually recommend keeping static data out of your test code and developing general test processes instead. The narc project supports TCP, UDP, ICMP, and raw IP protocol flows; therefore, my test file should have at least 4 cases. Using nested dictionaries, we can define individual cases that represent the chk input values, and the expected_cmd field contains the expected packet-tracer command. I think the file is self-explanatory, and you can check test_get_cmd.py to see how this file is consumed.

$ cat tests/data/cmd_checks.yaml
---
cmd_checks:
  tcp_full:
    in_intf: "inside"
    proto: "tcp"
    src_ip: "192.0.2.1"
    src_port: 5001
    dst_ip: "192.0.2.2"
    dst_port: 5002
    expected_cmd: >-
      packet-tracer input inside tcp
      192.0.2.1 5001 192.0.2.2 5002 xml
  udp_full:
    in_intf: "inside"
    proto: "udp"
    src_ip: "192.0.2.1"
    src_port: 5001
    dst_ip: "192.0.2.2"
    dst_port: 5002
    expected_cmd: >-
      packet-tracer input inside udp
      192.0.2.1 5001 192.0.2.2 5002 xml
  icmp_full:
    in_intf: "inside"
    proto: "icmp"
    src_ip: "192.0.2.1"
    dst_ip: "192.0.2.2"
    icmp_type: 8
    icmp_code: 0
    expected_cmd: >-
      packet-tracer input inside icmp
      192.0.2.1 8 0 192.0.2.2 xml
  rawip_full:
    in_intf: "inside"
    proto: 123
    src_ip: "192.0.2.1"
    dst_ip: "192.0.2.2"
    expected_cmd: >-
      packet-tracer input inside rawip
      192.0.2.1 123 192.0.2.2 xml
...
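
The consuming test itself is not shown in this post, but a rough sketch of how such a file drives pytest might look like this; it assumes get_cmd() can be imported from the project code, which is an assumption about the module layout:

import yaml

from narc.helpers import get_cmd  # hypothetical import path

def test_get_cmd():
    """Each check dict should unparse into its expected packet-tracer command."""
    with open("tests/data/cmd_checks.yaml", "r") as handle:
        checks = yaml.safe_load(handle)["cmd_checks"]

    for name, chk in checks.items():
        expected = chk.pop("expected_cmd")
        assert get_cmd(chk) == expected, f"{name} built the wrong command"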

All good code projects perform some degree of input data validation. Suppose a user enters an IPv4 address of 192.0.2.1.7 or a TCP port of -1. Surely the ASA would throw an error message, but why let it get to that point? Problems don’t get better over time, and we should test for these conditions early. In general, we want to “fail fast.” That’s what the test_validation.py script does, and it works in conjunction with the dummy_checks.yaml file. Invalid “check” dictionaries should be logged and not sent to the network device.

As a brief aside, data validation is inherent when using modeling languages like YANG. This is one of the reasons why model-driven programmability and telemetry are growing in popularity. In addition to removing the arbitrariness of data structures, it enforces data compliance without explicit coding logic.

We’ve tested quite a bit so far, but we haven’t tied anything together yet. Always consider building in some kind of integration/system level testing to your project. For narc, I introduced a feature named “dryrun” and it is easily toggled using a CLI argument at runtime. This code bypasses the Netmiko logic and instead generates simulated (sometimes called “mocked”) XML output for each packet-tracer command. This runs instantly and doesn’t require access to any network devices. We don’t really care if the rules pass or fail (hint: they’ll always pass), just that the solution is plumbed together correctly.

The diagram below illustrates how mocking works at a high level, and the goal is to keep the detour as short and as transparent as possible. You want to maximize testing before and after the mocking activity. Given Nornir’s flexible architecture with easy-to-define custom tasks, I’ve created a custom _mock_packet_trace task. It looks and feels like netmiko_send_command as it returns an identical result, but is designed for local testing.

[Figure: the mocking detour at a high level]
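
A custom Nornir task that fakes the device output is only a few lines. The sketch below is a simplified stand-in for the project's _mock_packet_trace, using the Nornir 2.x Task/Result API; the XML payload is a placeholder:

from nornir.core.task import Result, Task

MOCK_XML = "<result><action>allow</action></result>"  # placeholder payload

def _mock_packet_trace(task: Task) -> Result:
    """Return canned packet-tracer XML so tests never touch a real ASA."""
    # Mirror the shape of a netmiko_send_command result so downstream
    # parsing logic cannot tell the difference.
    return Result(host=task.host, result=MOCK_XML)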

How do we tie this seemingly complex string of events together? Opinions on this topic, as with everything in programming, run far and wide. I’m old school and prefer to use a Makefile, but more modern tools exist like Task which are YAML-based and less finicky. Some people just prefer shell scripts. Makefiles were traditionally used to compile code and link the resulting objects in languages like C. For Python projects, you can create “targets” or “goals” to run various tasks. Think of each target as a small shell script. For example, make lint will run the static code analysis tools pylint, bandit, black, yamllint, and the jsonlint.py script. Then, make unit will run pytest on all test_*.py files. Finally, make dry will execute the Nornir runbook in dryrun mode, testing the system as a whole (minus Netmiko) with mock data. You can also create operational targets unrelated to the project code. I often define make clean to remove any application artifacts, Python byte code .pyc files, and logs.

Rather than having to type out all of these targets, a single target can reference other targets. For example, consider the make test target which runs all 4 targets in the correct sequence. You can simplify it further by defining a “default goal” so that when only make is typed, it invokes make test. We developers are lazy and cherish saving 5 keystrokes per test run!

.DEFAULT_GOAL := test
.PHONY: test
test: clean lint unit dry
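
For reference, the remaining targets can be reconstructed from the commands visible in the logs below; as a sketch (recipe lines in a Makefile must be indented with tabs), they look roughly like this:

.PHONY: clean
clean:
	find . -name "*.pyc" | xargs -r rm
	rm -f nornir.log
	rm -rf outputs/

.PHONY: lint
lint:
	find . -name "*.yaml" | xargs yamllint -s
	python tests/jsonlint.py
	find . -name "*.py" | xargs pylint
	find . -name "*.py" | xargs bandit --skip B101
	find . -name "*.py" | xargs black -l 85 --check

.PHONY: unit
unit:
	python -m pytest tests/ --verbose

.PHONY: dry
dry:
	python runbook.py --dryrun --failonly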

Ideally, typing make should test your entire project, from the simplest syntax checking to the most involved integration/system testing. Here are the full, unedited logs from my dev environment relating to the narc project. I recommend NOT obscuring your command outputs; it is useful to see which commands have generated which outputs.

$ make
Starting clean
find . -name "*.pyc" | xargs -r rm
rm -f nornir.log
rm -rf outputs/
Completed clean
Starting lint
find . -name "*.yaml" | xargs yamllint -s
python tests/jsonlint.py
find . -name "*.py" | xargs pylint

--------------------------------------------------------------------
Your code has been rated at 10.00/10 (previous run: 10.00/10, +0.00)

find . -name "*.py" | xargs bandit --skip B101
[main] INFO profile include tests: None
[main] INFO profile exclude tests: None
[main] INFO cli include tests: None
[main] INFO cli exclude tests: B101
[main] INFO running on Python 3.7.3
Run started:2020-04-07 15:47:27.239623

Test results:
        No issues identified.

Code scanned:
        Total lines of code: 670
        Total lines skipped (#nosec): 0

Run metrics:
        Total issues (by severity):
                Undefined: 0.0
                Low: 0.0
                Medium: 0.0
                High: 0.0
        Total issues (by confidence):
                Undefined: 0.0
                Low: 0.0
                Medium: 0.0
                High: 0.0
Files skipped (0):
find . -name "*.py" | xargs black -l 85 --check
All done!
11 files would be left unchanged.
Completed lint
Starting unit tests
python -m pytest tests/ --verbose
================= test session starts ==================
platform linux -- Python 3.7.3, pytest-5.3.2, py-1.8.0, pluggy-0.13.1 -- /home/centos/environments/asapt/bin/python
cachedir: .pytest_cache
rootdir: /home/centos/code/narc
collected 11 items

tests/test_get_cmd.py::test_get_cmd_tcp PASSED [ 9%]
tests/test_get_cmd.py::test_get_cmd_udp PASSED [ 18%]
tests/test_get_cmd.py::test_get_cmd_icmp PASSED [ 27%]
tests/test_get_cmd.py::test_get_cmd_rawip PASSED [ 36%]
tests/test_validation.py::test_validate_id PASSED [ 45%]
tests/test_validation.py::test_validate_in_intf PASSED [ 54%]
tests/test_validation.py::test_validate_should PASSED [ 63%]
tests/test_validation.py::test_validate_ip PASSED [ 72%]
tests/test_validation.py::test_validate_proto PASSED [ 81%]
tests/test_validation.py::test_validate_port PASSED [ 90%]
tests/test_validation.py::test_validate_icmp PASSED [100%]

======================================= 11 passed in 0.09s ========================================
Completed unit tests
Starting dryruns
python runbook.py --dryrun --failonly
head -n 5 outputs/*
==> outputs/result.csv <==
host,id,proto,icmp type,icmp code,src_ip,src_port,dst_ip,dst_port,in_intf,out_intf,action,drop_reason,success

==> outputs/result.json <==
{}
==> outputs/result.txt <==
python runbook.py -d -s
ASAV1@2020-04-07T15:47:28.873590: loading YAML vars
ASAV1@2020-04-07T15:47:28.875094: loading vars succeeded
ASAV1@2020-04-07T15:47:28.875245: starting check DNS OUTBOUND (1/5)
ASAV1@2020-04-07T15:47:28.875291: completed check DNS OUTBOUND (1/5)
ASAV1@2020-04-07T15:47:28.875304: starting check HTTPS OUTBOUND (2/5)
ASAV1@2020-04-07T15:47:28.875333: completed check HTTPS OUTBOUND (2/5)
ASAV1@2020-04-07T15:47:28.875344: starting check SSH INBOUND (3/5)
ASAV1@2020-04-07T15:47:28.875371: completed check SSH INBOUND (3/5)
ASAV1@2020-04-07T15:47:28.875381: starting check PING OUTBOUND (4/5)
ASAV1@2020-04-07T15:47:28.875406: completed check PING OUTBOUND (4/5)
ASAV1@2020-04-07T15:47:28.875415: starting check L2TP OUTBOUND (5/5)
ASAV1@2020-04-07T15:47:28.875457: completed check L2TP OUTBOUND (5/5)
ASAV2@2020-04-07T15:47:28.878727: loading JSON vars
ASAV2@2020-04-07T15:47:28.878880: loading vars succeeded
ASAV2@2020-04-07T15:47:28.879018: starting check DNS OUTBOUND (1/5)
ASAV2@2020-04-07T15:47:28.879060: completed check DNS OUTBOUND (1/5)
ASAV2@2020-04-07T15:47:28.879073: starting check HTTPS OUTBOUND (2/5)
ASAV2@2020-04-07T15:47:28.879100: completed check HTTPS OUTBOUND (2/5)
ASAV2@2020-04-07T15:47:28.879110: starting check SSH INBOUND (3/5)
ASAV2@2020-04-07T15:47:28.879136: completed check SSH INBOUND (3/5)
ASAV2@2020-04-07T15:47:28.879146: starting check PING OUTBOUND (4/5)
ASAV2@2020-04-07T15:47:28.879169: completed check PING OUTBOUND (4/5)
ASAV2@2020-04-07T15:47:28.879179: starting check L2TP OUTBOUND (5/5)
ASAV2@2020-04-07T15:47:28.879202: completed check L2TP OUTBOUND (5/5)
head -n 5 outputs/*
==> outputs/result.csv <==
host,id,proto,icmp type,icmp code,src_ip,src_port,dst_ip,dst_port,in_intf,out_intf,action,drop_reason,success
ASAV1,DNS OUTBOUND,udp,,,192.0.2.2,5000,8.8.8.8,53,UNKNOWN,UNKNOWN,ALLOW,,True
ASAV1,HTTPS OUTBOUND,tcp,,,192.0.2.2,5000,20.0.0.1,443,UNKNOWN,UNKNOWN,ALLOW,,True
ASAV1,SSH INBOUND,tcp,,,fc00:172:31:1::a,5000,fc00:192:0:2::2,22,UNKNOWN,UNKNOWN,DROP,dummy,True
ASAV1,PING OUTBOUND,icmp,8,0,192.0.2.2,,8.8.8.8,,UNKNOWN,UNKNOWN,ALLOW,,True

==> outputs/result.json <==
{
  "ASAV1": {
    "DNS OUTBOUND": {
      "Phase": [
        {

==> outputs/result.txt <==
ASAV1 DNS OUTBOUND -> PASS
ASAV1 HTTPS OUTBOUND -> PASS
ASAV1 SSH INBOUND -> PASS
ASAV1 PING OUTBOUND -> PASS
ASAV1 L2TP OUTBOUND -> PASS
Completed dryruns

OK, so now we have a way to regression test an entire project, but it still requires manual human effort as part of a synchronous process: typing make, waiting for completion, observing results, and taking follow-on actions as needed. If your testing takes more than a few seconds, waiting will get old fast. A better solution would be automatically starting these tests whenever your code changes, then recording the results for review later. Put another way, when I type git push, I want to walk away with certainty that my updates will be tested. This is called “Continuous Integration” or CI, and it is very easy to set up. There are plenty of solutions available: GitLab CI, GitHub Actions (new), CircleCI, Jenkins, and many more. I’m a fan of Travis CI, and that’s what I’ve used for narc. Almost all of these solutions use a YAML file that defines the sequence in which test phases are executed. Below is the .travis.yml file from the project in question. The install phase installs all packages in the requirements.txt file using pip, and subsequent phases run various make targets.

$ cat .travis.yml
---
language: "python"
python:
  - "3.7"

# Install python packages for ansible and linters.
install:
  - "pip install -r requirements.txt"

# Perform pre-checks
before_script:
  - "make lint"
  - "make unit"

# Perform runbook testing with mock ASA inputs.
script:
  - "make dry"
...

Assuming you’ve set up Travis correctly (outside the scope of this blog), you’ll see your results in the web interface, which clearly shows each testing phase and the final results.

[Figure: Travis CI build results]

And that, my friends, is how you build a professional code project!