Monday, 7 June 2021

Education, Education, Education: RSA 2021 and the State of Education Security


There is an old maxim in the real estate profession used when evaluating the value of a home. Realtors speak of "location, location, location," as if the customer in the transaction were so unaware of that factor that it required incessant repetition. In cybersecurity, one area in dire need of a similar recurring reminder is education: both the education of cybersecurity professionals, and the application of that specialized knowledge to the education sector itself.

Resilience, and Investing in People

This year's RSA Conference opened with an inspirational keynote message from Cisco CEO Chuck Robbins. The theme of this year's conference was resilience, which is also the key to effective cybersecurity. The vision for a post-pandemic world is one in which Cisco will invest more to make the world a safer place, while carrying out that vision in less time than ever.


Part of Cisco's investment in the future is not only about technology; it is about people. There are around 2.8 million cybersecurity professionals globally, but there are currently more than 4 million unfilled cybersecurity jobs. There is no other industry where open positions exceed the number of available professionals at such a scale. The gap is equivalent to the entire population of many small countries. Cisco is seeking not only to enable the workforce by looking at the existing talent pool, but also to tap into unconventional places to find new talent. Unlikely security professionals exist in places like the local coffee shop, the mechanic's garage, and even prisons.

This extreme reach for diversity is rooted firmly in history. When the Allies needed to solve the enemy's encryption puzzle in World War Two, they sought people from all walks of life to decipher what seemed like an unbreakable code. They were not all mathematicians; they included librarians, psychologists, and even hobbyists who collected porcelain figurines.

Diversity is a force multiplier towards solving outwardly unsolvable problems.

An Unnoticed Target


Education toward creating a stronger workforce is of little use if it is not applied to the business sectors that need it most. One sector in need of cybersecurity professionals is education itself. According to the 2018 "End-of-Year Data Breach Report" issued by the Identity Theft Resource Center (ITRC), over 1.4 million records were breached at educational institutions, closely matching the sector's 2017 breach numbers. Over the course of 2019, however, breached records increased to over 2.4 million.

While the education sector falls last among the five industries monitored in the ITRC reports, there appears to be a pattern emerging.

Wendy’s Keen Insights


Cisco's Head of Advisory CISOs, Wendy Nather, and Dr. Wade Baker of the Cyentia Institute opened the final day of the 2021 RSA Conference by asking the question, "What (Actually, Measurably) Makes a Security Program More Successful?"

Wendy stated that she dislikes benchmarks, mostly because some people are not good at them, offering more opinion than measurable results. In order to measure success, we must be more interested in what works. Wendy and Wade drew upon the findings of the Cisco 2020 Security Outcomes Study to discuss a methodology that is measurable and actionable.

Follow the Patterns


The Security Outcomes Study findings are based on patterns rather than raw numbers, which is important when considering the rise in educational breaches. Valuable insights are derived by finding patterns in the data that show clear correlations between security practices and outcomes. For a cybersecurity professional, the idea of finding patterns that show clear correlations should resonate deeply, as it is a foundational tenet of the entire discipline of threat intelligence.


Ignoring a pattern just because it seems insignificant at the time can mean failing to see the shape of things to come. Are we on the precipice of witnessing a new target? The people at Cisco do not agree with the logic of ignoring it and hoping it will go away.

Why a School is a Good Target


It may seem that a school or university is not a very lucrative target for a cyberattack, but when one stops to think about it, an educational institution contains a rich variety of valuable information, far more than the books in the student libraries and the fraternity and sorority houses.

Schools are fertile ground for ideas and inspirational knowledge; these are the roots of intellectual property. In fact, some schools are branded as research universities, which means that information about the students working on research, as well as the research itself, is a viable target for a cybercriminal.

How Cisco is Positioned to Protect These Valuable Assets


Cisco is uniquely positioned to protect learning institutions, offering a wide range of security solutions and products that safeguard every level of education, from the earliest grades all the way up to institutions of higher learning.

Whether it is managing in-person and remote students and their mobile devices, fostering a productive learning environment, or protecting sensitive student and research data, Cisco offers a wide range of solutions to meet your goals and ensure an effective approach to your security vision.

There is more to a security solution than the platform. Depth of information, flexibility, and pragmatism are key to a full security approach. As the CISO of Brunel University put it, "Cisco backs its products with engineers who are at the top of their game."

Source: cisco.com

Sunday, 6 June 2021

Stretching Cisco Designed Oracle Infrastructures with Low Latency Protocols


Before the pandemic, industries were being turned upside down as a wave of digital transformation forced IT departments to think of new ways to implement services and address new business challenges. When business travel starts up again, each of us will see examples: taxis replaced by Uber and Lyft; newspapers replaced by a smartphone; radio replaced by Spotify. Every industry struggles to remain relevant. The impact on IT? Huge growth in applications that draw data from more sources, and the implementation speed required today. Oracle databases and the server infrastructures beneath them have to support larger workloads without sacrificing performance. The challenge is how to architect these systems to meet uncertain growth requirements while keeping the finance department happy.

Read More: 500-173: Designing the FlexPod Solution (FPDESIGN)

Cisco foresaw this requirement a couple of years ago and invested in a set of Cisco Validated Designs (CVDs) demonstrating the benefits of NVMe (Non-Volatile Memory Express) over Fabrics, partnering initially with Pure Storage and more recently with NetApp.

Customers generally fall into two categories:

◉ Those running I/O over Ethernet, who would naturally move to RDMA

◉ SAN-based customers who want low latency within a SAN infrastructure

Cisco has developed a proven solution for each of these two scenarios, see details below.

In 2019, Cisco and Pure Storage tested and validated a FlashStack solution highlighting the benefits of RoCE v2: Oracle RAC 19c databases running on Cisco UCS with Pure Storage FlashArray //X90R2 using NVMe-oF RoCE v2 (RDMA over Converged Ethernet version 2). Here the standard FlashStack Converged Infrastructure was set up with NVMe located in the servers, using RoCE to move data traffic between the servers and the all-flash storage subsystem. SLOB (Silly Little Oracle Benchmark) was used to simulate users, and the system was scaled to 512 users, demonstrating the following benefits:

◉ Lower latency compared to other traditional protocols

◉ Higher IOPS (I/O operations per second), scaling linearly

◉ Higher bandwidth to address higher data traffic requirements

◉ Improved protocol efficiency by reducing the “I/O stack”

◉ Lower host CPU utilization, documented at 30% less

◉ Indirectly, because CPU utilization is lower, more processor cycles are available to process work, so fewer Intel processor cores need to be licensed to achieve the same performance.
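The licensing point above is worth making concrete. The sketch below is illustrative only: the 30% figure comes from the validated design, but the workload numbers are invented for the example.

```python
import math

def cores_required(cpu_units_needed: float, cpu_units_per_core: float) -> int:
    """Round up to whole cores, since Oracle licenses are sold per core."""
    return math.ceil(cpu_units_needed / cpu_units_per_core)

# Invented workload for illustration: 1000 CPU units of database work,
# 50 units of capacity per core.
workload = 1000.0
per_core = 50.0

baseline = cores_required(workload, per_core)
# With the documented ~30% lower host CPU utilization, the same work
# consumes roughly 0.7x the CPU cycles.
with_nvme_of = cores_required(workload * 0.7, per_core)

assert baseline == 20
assert with_nvme_of == 14  # six fewer cores to license
```

The exact savings depend on the real workload profile, but the direction holds: lower CPU cost per unit of work shrinks the licensed core count.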


This was a welcome design, adopted by many companies from commercial to large enterprise, as it addressed a pressing need: how to stretch the IT budget to complete more work on the current system. The NVMe interface enables host software to communicate with nonvolatile memory over PCI Express (PCIe). It was designed from the ground up for low-latency solid-state media, eliminating many of the bottlenecks of the legacy protocols used to run enterprise applications. NVMe devices connect to the PCIe bus inside a server. NVMe-oF extends the high-performance, low-latency benefits of NVMe across the network fabrics that connect servers and storage. NVMe-oF takes the lightweight, streamlined NVMe command set and its more efficient queueing model, and replaces the PCIe transport with alternate transports such as Fibre Channel, RDMA over Converged Ethernet (RoCE v2), or TCP.

In 2020, the pandemic hit.

COVID-19 caused many IT organizations to shift focus from database to remote worker implementations initially conceived as short-term solutions, now moving to longer term designs. Businesses are returning to a focus on stretching their database infrastructure solutions, and Cisco has partnered with NetApp on a new solution to meet this goal.

In April 2021, Cisco and NetApp published a new Cisco Validated Design called FlexPod Datacenter with Oracle 19c RAC Databases on Cisco UCS and NetApp AFF with NVMe/FC. The proven design using NVMe is now proven to work with a Fibre Channel twist.

NVMe over Fibre Channel (NVMe/FC) is implemented through the Fibre Channel NVMe (FC-NVMe) standard which is designed to enable NVMe based message commands to transfer data and status information between a host computer and a target storage subsystem over a Fibre Channel network fabric. FC-NVMe simplifies the NVMe command sets into basic FCP instructions. Because Fibre Channel is designed for storage traffic, functionality such as discovery, management and end-to-end qualification of equipment is built into the system.

Almost all high-performance latency sensitive applications and workloads are running on the same underlying transport protocol (FCP) today. Because NVMe/FC and Fibre Channel networks use the same FCP, they can use common hardware components. It’s even possible to use the same switches, cables, and NetApp ONTAP target port to communicate with both protocols at the same time. The ability to use either protocol by itself or both at the same time on the same hardware makes transitioning from FCP to NVMe/FC both simple and seamless.

Large-scale block flash-based storage environments that use Fibre Channel are the most likely to adopt NVMe over FC. FC-NVMe offers the same structure, predictability and reliability characteristics for NVMe-oF that Fibre Channel does for SCSI. Plus, NVMe-oF traffic and traditional SCSI-based traffic can run simultaneously on the same FC fabric.

The design for the new FlexPod is depicted below and follows the proven design that has made FlexPod one of the most popular Converged Infrastructures on the market for several years.

The same low-latency, high-performance benefits of the previous CVD are proven once again in this NVMe/FC design. As such, customers now have a choice in how to implement a modern SAN to run the heart of their IT shop: the Oracle database.


Businesses will continue to challenge their IT departments; some challenges are planned, while others are completely unforecasted. Picking a design that can grow to meet future requirements, where each element can be upgraded independently as circumstances warrant, while meeting performance requirements with an eye toward Oracle licensing costs, is the challenge Cisco's low-latency solutions have met. These are the solutions your organization should take a closer look at for your future Oracle deployments.

Source: cisco.com

Saturday, 5 June 2021

“Hello IKS”… from Terraform Cloud!

Organizations are seeking uniformity in tools and procedures

Tracking industry trends, many legacy enterprise applications will be modernized into a microservices architecture and containerized. Some of these microservices and heritage apps will remain on-prem, while others will make their way to public clouds. DevOps teams have been very successful in leveraging open source tools, such as Terraform, for public cloud infrastructure provisioning, and enterprises are now seeking to bring the cloud experience on-prem by providing their DevOps teams and application developers with IT services like CaaS (Containers as a Service).

More Info: 350-801: Implementing Cisco Collaboration Core Technologies (CLCOR)

Organizations are seeking uniformity in tools and procedures that they use to orchestrate their cloud stacks across public and private clouds to host these containerized workloads.

Intersight Kubernetes Service (IKS) container management platform

The debate on container orchestration frameworks has pretty much concluded (at least for now!), and Kubernetes is the clear winner. Organizations have successfully leveraged Kubernetes services (AKS, EKS, GKE, etc.) from public clouds, and Terraform has played a prominent role in their CI/CD toolchains. To support containerized workload deployments and operations, Cisco Intersight includes IKS (Intersight Kubernetes Service), a SaaS-delivered, turn-key container management platform for multicloud, production-grade Kubernetes.

The following use case highlights the recently announced integration between Cisco Intersight and HashiCorp Terraform Cloud for Business.

Cisco Intersight and Terraform Cloud for Business use case

In this blog, we will walk through a simple use case where:

◉ A cloud admin offers CaaS (Containers as a Service) in their service catalog, leveraging IKS (Intersight Kubernetes Service) to set up the IP pools and Kubernetes policies for an app team in her enterprise

◉ An app DevOps engineer then leverages those policies to provision an IKS cluster based on the app developers' specification for the cluster, and finally

◉ An app developer deploys a sample app.

The above leverages TFCB (Terraform Cloud for Business), IST (Intersight Service for Terraform), IKS (Intersight Kubernetes Service), the Intersight Terraform provider, and the Helm Terraform provider.

The following assumes that configuration and provisioning are all done through the Terraform Cloud UI (traditional ClickOps). Watch for subsequent blogs that will address the same flow using Intersight APIs for end-to-end programmability.

Role of a Cloud Admin

You will provision the following Targets in Intersight and verify for a Connected operational status:


You will set up the Terraform config files and workspaces for provisioning IP pools and policies for the app team, and execute the Terraform plan in TFCB. An example can be found here.

Role of an App DevOps

Based on the infrastructure requirements provided by your app team, you will set up the Terraform config files and workspaces to provision an IKS cluster leveraging the policies configured by your Cloud Admin. You will plan and execute the Terraform plan in TFCB. An example of the config file to provision a single node IKS cluster can be found here:

Role of an App Developer

You will set up the Terraform config files and workspaces for deploying a sample app on the IKS cluster provisioned by your DevOps. An example of the config file to deploy a sample app using the Terraform Helm Provider can be found here:
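The linked example is not reproduced in this post, but a minimal sketch of a Helm-provider deployment might look like the following. The kubeconfig path, chart, and repository are placeholders, not values from the referenced config:

```hcl
# Minimal sketch only: deploy a sample app onto the IKS cluster with the
# Terraform Helm provider. All names and paths below are illustrative.
provider "helm" {
  kubernetes {
    config_path = "~/.kube/iks-cluster.kubeconfig" # placeholder path
  }
}

resource "helm_release" "sample_app" {
  name       = "hello-iks"
  repository = "https://charts.bitnami.com/bitnami" # illustrative repo
  chart      = "nginx"                              # illustrative chart
  namespace  = "default"
}
```

Running `terraform apply` against a workspace containing a config like this installs the chart onto whichever cluster the kubeconfig points at.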

SandBox and learning lab

A sandbox and a learning lab are available here. They let the user wear the hat of each of the personas above and walk through a sample deployment exercise:

The following captures a very high-level view of the sequence across the various tools in the sandbox and is largely self-explanatory. The sandbox simulates your on-prem infrastructure:


Check out this DevNet CodeExchange entry if you would like to experiment with a single-node cluster in your own vSphere infrastructure.

Behind the scenes…

The following highlights the value the Cisco Intersight and TFCB integration adds in simplifying and securely provisioning private cloud resources, such as Kubernetes clusters and applications, on-prem.


Source: cisco.com

Thursday, 3 June 2021

Too Fast Too Furious with Catalyst Wi-Fi 6 MU-MIMO

Servicing many clients sending small packets with pre-Wi-Fi 6 technology is inefficient because the overhead incurred by the preamble and other mechanisms tends to dominate. OFDMA is ideally suited to this scenario: it divides up the channel and services up to 37 users (for 80 MHz bandwidth) simultaneously, amortizing the overhead. OFDMA improves system efficiency, but it does not necessarily improve throughput.

MU-MIMO (Multi-User, Multiple Input, Multiple Output) creates spatially distinct channels between the transmitter and each of a small number of receivers, such that each receiver hears only the information intended for it, and not the information intended for other receivers. This means the transmitter can, by superposition, transmit to several receivers simultaneously, increasing aggregate throughput by a factor equal to the number of receivers being serviced.

Cisco's Catalyst 9800 series WLC with IOS XE 17.6.1 (currently in beta) introduces a new Access Point scheduler design that efficiently serves multiple clients at the same time while generating minimal sounding overhead, which in turn yields data rates close to the PHY rate even in dense environments. These advancements are currently supported on the Catalyst 9130 and Catalyst 9124 series Access Points. Let's first review MU-MIMO concepts and then evaluate performance.

Beamforming and MU-MIMO

Beamforming radio waves with an array of phased antennas has been understood for decades. More recently, the same principles have been used to produce MU-MIMO, where multiple simultaneous beams provide an independent channel for each user.

Similar principles apply in the audio domain where speakers can be phased to direct sound to a particular location. The idea is to adjust the phases of each speaker such that the sound adds constructively at the point where the listener is, and destructively at all other locations.

Consider a sound, Sr, played through an array of four speakers, with the sound for each speaker adjusted by a phasor (Q1r through Q4r) so that the signal strength at the red listener, Lr, is maximized and the signal strength at the blue listener, Lb, is minimized.


Using superposition, we can take each message, impose the appropriate phase adjustment, and add the signals just before they go into the speakers. This way we can send two different messages at the same time, but each listener will hear only the message intended for them.


Note the importance of spatial separation: Lb and Lr each hear their respective messages because the phasors were optimized to deliver each sound to their specific locations. If one of the listeners moves from his position, he will no longer hear his message.

If a third person enters the picture and stands close to the speakers, he will hear the garbled sound of both messages simultaneously.

Consider this in the context of Wi-Fi, where the speakers are replaced by antennas and the signal processing that controls the phasors and generates digital messages at a certain data rate is done in the AP. Since both messages can be transmitted simultaneously, one could theoretically double the aggregate data rate. The same approach can be used to service more clients simultaneously, so where is the limit? Practically, there are limits to the accuracy with which the phasors can be set, and there are reflections that cause "crosstalk" and other imperfections that limit the achievable throughput gains.
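The phasor idea can be sketched numerically. The following is an illustrative zero-forcing example, not Cisco's actual scheduler: with four transmit antennas and two receivers, choosing the precoder as the pseudo-inverse of the channel matrix lets two symbols be superposed while nulling the cross-talk at each receiver.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-receiver, 4-antenna downlink channel (rows = receivers).
H = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))

# Zero-forcing precoder: right pseudo-inverse of H, so that H @ W = I.
W = np.linalg.pinv(H)

# Two independent symbols, one per receiver, sent at the same time.
s = np.array([1 + 1j, -1 + 0.5j])

x = W @ s  # superposed signal across the 4 antennas
y = H @ x  # what each receiver hears

# Each receiver recovers only its own symbol; cross-talk is nulled.
assert np.allclose(y, s)
```

Real systems must estimate H through channel sounding and cope with noise and mobility, which is exactly where the accuracy limits mentioned above come from.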

Sniffing in the context of MU-MIMO is more complicated because of the spatial significance. Placing a sniffer close to the AP yields the same garbled-message effect discussed earlier. The sniffer probe must be placed physically close to the device being sniffed, and generally one probe is required for each device.

System Overview and Test infrastructure


In this MU-MIMO test, we are using the octoScope (now part of Spirent) STACK-MAX testbed. On the infrastructure side, Cisco's Catalyst 9800 WLC running IOS XE 17.6.1 (beta code) and a Catalyst 9130 Access Point are used. The C9130 AP supports up to 8×8 uplink and downlink MU-MIMO with eight spatial streams. The Pal-6E is Wi-Fi 6 capable and can simulate up to 256 stations or act as a sniffer probe.


The STApal is a fully contained STA based upon the Intel AX210 chipset, running on its own hardware platform. All the test chambers are completely isolated from the outside world, and signal paths between them are controlled using fully shielded attenuators, so that reliable and repeatable measurements can be made. The chambers are lined with an RF absorptive foam to significantly reduce internal reflections and prevent standing waves.

For this MU-MIMO test we are using up to four STAs. The RF path connects signals from the C9130 AP through to individual STAs. We are using the multipath emulator (MPE) in LOS, or IEEE Channel Model A, mode. Each pair of antennas is fed into a group of four clients as shown in the diagram below. As we have seen, spatial separation is a requirement for successful MU-MIMO operation; this is achieved by placing the antennas in the corners of the anechoic test chamber for the best spatial separation, allowing four independent MU-MIMO streams to the STAs in the four groups of four.


Practical testing


To demonstrate the MU-MIMO gains, we placed the C9130 AP in the center of the chamber and ran downlink UDP traffic to the STAs attached to the antennas in the box corners.

First, we ran the test with MU-MIMO switched off, starting with one STA. We noted that throughput was just a little over 1000 Mbps, slightly less than the 1200 Mbps PHY rate. After 20 seconds we introduced another STA and saw that the aggregate throughput stayed at 1000 Mbps, with the two STAs sharing the channel at 500 Mbps each. Twenty seconds later we introduced a third STA. Again the aggregate throughput stayed at 1000 Mbps, and the three STAs shared the channel at a little over 300 Mbps each. Introducing a fourth STA followed the same pattern: the aggregate remained unchanged, with each STA receiving 250 Mbps.


We repeated the experiment, this time with MU-MIMO switched on.

Starting with one STA we achieved the familiar 1000 Mbps. After 20 seconds we introduced the second STA and observed that the aggregate had increased to 2000 Mbps, significantly higher than the PHY rate. We also noted that each STA was still receiving nearly the 1000 Mbps it had before. Unlike the previous experiment, where the STAs shared the channel, here each was able to fully utilize its own channel independently of the other.


Adding a third STA increased the aggregate to 2200 Mbps, with each of the three STAs still receiving about 730 Mbps. Adding a fourth STA resulted in an aggregate throughput of 2100 Mbps, with each STA receiving 525 Mbps, a two-fold increase over single-user operation.

The graph below summarizes the results.


Verdict


MU-MIMO exploits the spatial separation of receivers to direct independent messages to each of the receivers simultaneously. This allows much more efficient use of the medium and increases the aggregate data the network can deliver. The Catalyst 9130 AP's pioneering scheduler design offers superior throughput gains in multi-user transmission scenarios, an outcome of higher MCS rates, low sounding overhead, and efficient dynamic packet scheduling.

DL and UL MU-MIMO, along with OFDMA, are enabled by default on a WLAN. These features are available on 9800 series wireless controllers in existing releases, but the enhancements discussed above will be available from the 17.6.1 (currently beta) release onward.

Source: cisco.com

Tuesday, 1 June 2021

Scalable Security with Cisco Secure Firewall Cloud Native

Today, companies are investing in making their security controls scalable and dynamic to meet the ever-increasing demands on their networks. In many cases, the response is a massive shift to Kubernetes® (K8s®)-orchestrated infrastructure that provides a cloud-native, scalable, and resilient foundation.

This is where Cisco Secure Firewall Cloud Native (SFCN) comes in. It gives you the flexibility to provision, run, and scale containerized security services. Cisco Secure Firewall Cloud Native brings together the benefits of Kubernetes and Cisco’s industry-leading security technologies, providing a resilient architecture for infrastructure security at scale.

Figure 1 – Cisco Secure Firewall Cloud Native platform overview

The architecture depicted above shows a modular platform that is scalable, resilient, DevOps friendly, and Kubernetes-orchestrated. In the initial release of Cisco Secure Firewall Cloud Native, we have added support for CNFW (L3/L4 + VPN) in AWS. Future releases will add support for CNTD (L7) security and other cloud providers.


Key capabilities of Cisco Secure Firewall Cloud Native include:

◉ Modular and scalable architecture
◉ Kubernetes orchestrated deployment
◉ DevOps friendly with Infrastructure-as-Code support (IaC)
◉ Data externalization for stateless services via a high-performance Redis™ database
◉ Multi-AZ, multi-region, and multi-tenant support

Figure 2 – Cisco Secure Firewall Cloud Native platform components

The architecture depicted above shows the Cisco Secure Firewall Cloud Native platform, which uses Amazon EKS, Amazon ElastiCache™, and Amazon EFS with industry-leading Cisco VPN and L3/L4 security control for the edge firewall use case. The administrator can manage Cisco Secure Firewall Cloud Native infrastructure using kubectl + YAML or Cisco Defense Orchestrator (CDO). Cisco provides APIs, CRDs, and Helm™ charts for this deployment. It uses custom metrics and the Kubernetes horizontal pod autoscaler (HPA) to scale pods horizontally.
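As a sketch of what custom-metric horizontal scaling looks like in Kubernetes terms, a generic HPA manifest is shown below. The deployment name and metric name are hypothetical placeholders, not actual CNFW resource names:

```yaml
# Generic custom-metric HPA sketch; "cnfw-enforcement-point" and
# "vpn_sessions_per_pod" are placeholders, not real CNFW object names.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: cnfw-ep-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: cnfw-enforcement-point   # placeholder deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: vpn_sessions_per_pod # hypothetical custom metric
      target:
        type: AverageValue
        averageValue: "500"
```

With a manifest of this shape, the HPA adds pods whenever the average value of the custom metric exceeds the target and removes them as load drops.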

Key components include:

◉ Control Point (CP): The Control Point is responsible for config validation, compilation and distribution, licensing, and route management. CP pods accept configuration from REST APIs, kubectl + YAML, or Cisco Defense Orchestrator.

◉ Enforcement Point (EP): CNFW EP pods are responsible for L3/L4 and VPN traffic handling and VPN termination.

◉ Redirector: The Redirector pod is responsible for intelligently load balancing remote access VPN traffic. When the redirector receives a request, it consults the Redis DB and provides the Fully Qualified Domain Name (FQDN) of the enforcement pod handling the least number of VPN sessions.

◉ Redis DB: The Redis database holds information on VPN sessions. The redirector uses this information to enable smart load balancing and recovery.
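The redirector's selection logic amounts to a least-sessions lookup. A minimal sketch follows, in which a plain dict stands in for the Redis database; the node names and session counts are invented for illustration.

```python
# Smart load balancing sketch: pick the enforcement point FQDN with the
# fewest active VPN sessions. A dict stands in for the Redis DB here.
def least_loaded_node(session_counts: dict) -> str:
    return min(session_counts, key=session_counts.get)

# Invented per-node VPN session counts, as Redis might store them.
sessions = {
    "ep-1.vpn.mydomain.com": 420,
    "ep-2.vpn.mydomain.com": 97,
    "ep-3.vpn.mydomain.com": 310,
}

assert least_loaded_node(sessions) == "ep-2.vpn.mydomain.com"
```

Contrast this with weighted DNS routing: here the decision tracks live session counts instead of static record weights.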

The following instance types are supported for each component.


Initial use-cases:

◉ Scalable Remote Access VPN architecture
◉ Scalable Remote Access VPN architecture with smart load balancing and session resiliency
◉ Scalable DC backhauls
◉ Multi-tenancy
◉ Scalable cloud hub
◉ Scalable edge firewall

Scalable Remote Access VPN architecture

Cisco Secure Firewall Cloud Native provides an easy way to deploy scalable remote access VPN architecture. It uses custom metrics and horizontal pod autoscaler to increase or decrease the number of CNFW Enforcement Points as needed. The Control Point controls configuration, routing, and Amazon Route 53™ configuration for the auto-scaled Enforcement Point.

Figure 3 – Scalable Remote Access VPN architecture

Traffic flow:

1. The remote VPN user sends a DNS query for vpn.mydomain.com. Amazon Route 53 keeps track of all CNFW nodes and has an "A record" for each node, with weighted load balancing enabled for incoming DNS requests.
2. The remote VPN user receives the Elastic IP (EIP) of the outside interface of a CNFW node.
3. The remote VPN user connects to that CNFW node. Each node provides a separate VPN pool for proper routing.
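The weighted DNS step above can be sketched as follows; the EIPs and weights are invented for illustration, and Python's random selection stands in for Route 53's weighted record routing.

```python
import random

# Invented "A record" set: EIP -> routing weight.
records = {"3.3.3.1": 10, "3.3.3.2": 10, "3.3.3.3": 20}

def resolve(rng: random.Random) -> str:
    """Pick one record, with probability proportional to its weight."""
    eips = list(records)
    return rng.choices(eips, weights=[records[e] for e in eips], k=1)[0]

rng = random.Random(42)
picks = [resolve(rng) for _ in range(4000)]

# The node with double weight should receive roughly half the queries.
share = picks.count("3.3.3.3") / len(picks)
assert 0.45 < share < 0.55
```

Route 53's actual weighted routing works on the server side, but the distribution of answers it produces follows the same proportional logic.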

Scalable Remote Access VPN architecture, with smart load balancing and session resiliency

Cisco Secure Firewall Cloud Native architecture with smart load balancing uses Amazon ElastiCache (Redis DB) to store VPN session information. The redirector node consults the Redis database to perform load balancing based on VPN session count, instead of weighted average load balancing.

The Control Point controls configuration, routing, redirector configuration, and Route 53 configuration for the auto-scaled enforcement point.

Figure 4 – Scalable Remote Access VPN architecture with smart load balancing and session resiliency

Traffic flow:

1. The remote VPN user sends a DNS query for vpn.mydomain.com, and vpn.mydomain.com points to the CNFW redirector.

2. The remote VPN user then sends the request to the redirector.

3. The CNFW redirector periodically polls the Redis database (Amazon ElastiCache) to find the FQDN of the Cisco Secure Firewall Cloud Native node with the least number of VPN sessions. The redirector provides the FQDN of the least-loaded CNFW node to the remote VPN user.

4. The remote VPN user resolves the FQDN; an "A" record is automatically added to Amazon Route 53 for each CNFW Enforcement Point.

5. The remote VPN user connects to the CNFW node that has the least number of VPN sessions.
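The redirector's decision in steps 3 through 5 amounts to a least-loaded lookup against the session store. A minimal sketch, using a plain dict in place of the actual Redis schema (the node names and counts are hypothetical):

```python
from dataclasses import dataclass, field

# Stand-in for the Amazon ElastiCache (Redis) session store that the
# redirector polls; the data layout here is illustrative, not the
# product's real schema.
@dataclass
class SessionStore:
    sessions: dict = field(default_factory=dict)  # node FQDN -> VPN session count

    def least_loaded(self):
        """Return the FQDN of the CNFW node with the fewest VPN sessions."""
        return min(self.sessions, key=self.sessions.get)

    def connect(self, fqdn):
        """Record a new VPN session against the chosen node."""
        self.sessions[fqdn] += 1

store = SessionStore({"cnfw-1.mydomain.com": 40, "cnfw-2.mydomain.com": 12})
target = store.least_loaded()   # the redirector hands this FQDN to the user
store.connect(target)
```

Balancing on live session counts rather than static DNS weights is what keeps nodes evenly loaded even when VPN sessions are long-lived.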

Scalable DC backhauls

The autoscaled Enforcement Points can form a tunnel back to the data center automatically. Cisco provides a sample Kubernetes deployment to enable this functionality.

Figure 5 – Scalable DC backhaul

Multi-tenancy

This architecture provides multi-tenancy using cloud-native constructs such as namespaces, EKS clusters, nodes, subnets, and security groups.

Figure 6 – Multi-tenancy

Scalable cloud hub

This architecture provides a scalable cloud hub using CNFW, Amazon EKS, and other cloud-native controls.

Figure 7 – Scalable cloud hub

Scalable edge firewall

This architecture provides a scalable architecture using CNFW, Amazon EKS, and other cloud-native controls.

Figure 8 – Scalable edge firewall

Licensing

Cisco Secure Firewall Cloud Native is available starting with ASA 9.16. This release brings CNFW (L3/L4 + VPN) security with Bring Your Own Licensing (BYOL), using Cisco Smart Licensing.

◉ Licenses are based on CPU cores used
◉ Supports multi-tenancy
◉ Unlicensed Cisco Secure Firewall Cloud Native EP runs at 100 Kbps
◉ AnyConnect license model is the same as the ASA AnyConnect license model

Source: cisco.com

Monday, 31 May 2021

Service Provider Digital Initiatives Drive Sustainable Business Value


Service Provider Digital Maturity Index

In 2018, IDC developed the Service Provider Digital Maturity Index to define five levels of SP digital maturity. This index provides a roadmap to help SPs assess the progress of their digital journey versus their desired end state. The development of the Service Provider Digital Maturity Index was driven by IDC’s Service Provider Digital Readiness Survey, which analyzed the digital initiatives of 400 SPs worldwide and the business value derived from these efforts. The index measures digital maturity across seven SP domains (See Figure 1).


Figure 1. SP Seven Domain Model

In 2021, IDC conducted an updated study that produced a solid basis of comparison with 2018 results and provided the ability to identify where SPs have made progress and where challenges still exist, both at an overall level and within specific domains.

As SPs embarked on their digital journey, there were three key common business outcomes that all SPs were trying to achieve: improved customer experience, revenue growth/profitability, and development of new sources of revenue. The surveys conducted in 2018 and 2021 consistently show that Pioneers, which correspond to the highest level of digital maturity, enjoyed significant improvements in areas considered most strategic for SPs.

The 2021 survey results revealed that Pioneer SPs experienced the most significant business performance gains. They not only reported improved operational metrics such as reduced costs and process cycle times but importantly also reported improvements in key business outcomes such as revenue, profitability, customer satisfaction, and customer retention. Figure 2 depicts the most notable business improvements for Pioneer SPs compared to Ad-Hoc SPs, which correspond to the lower level of digital maturity.

Figure 2. Pioneer SP Business Outcome Improvement


2021: The Evolution of SP Digital Maturity


In the three years since IDC developed the 2018 Service Provider Digital Maturity Index, several market dynamics have impacted SP strategies. These include an increased focus on customer experience, the SP drive to reduce costs, and increased competition from traditional and non-traditional players. These factors helped shape SPs' digital strategies and initiatives. For the 2021 survey, we observed the following three key changes from 2018 related to SP digital transformation readiness.

1. The Role and Influence of IT

In 2018, most SPs had only a limited number of digital initiatives and had no real digital strategy.  According to the 2018 survey, 62% of organizations had developed DX (digital transformation) task teams responsible for driving individual DX projects (as there were no DX programs back then). Yet, most initiatives (76%) were driven by senior business leadership. IT primarily had a supporting role with responsibility only for implementing technologies related to DX projects. When it came to driving DX projects, IT ranked third behind business leadership and special DX organizations. In 2021, the roles for driving DX initiatives have shifted; IT has become the primary enabler (for 66% of DX initiatives), followed by specialized groups (30%) and senior business leaders (25%).

2. Shifting Business Priorities

In 2018, SPs were trying to recover from a couple of lean revenue years as demand for services shifted. In the 2018 survey, IDC asked SPs to rank the reasons why they undertook DX initiatives. Improving customer experience (#1) and driving revenue growth (#2) topped the list. Then COVID-19 happened, and SP businesses shifted their priorities. In 2021, revenue growth has dropped to #4, giving way to a focus on organizational efficiency (#1) and operational efficiency (#2). Customer experience is #3.

3. Challenges Are Less Daunting

In 2018, IDC asked respondents, “what are your top three challenges in meeting your Digital Transformation (DX) priorities?” A slight majority of SPs – 55% – replied, “our culture is too risk-averse.” SPs appear to be less risk-averse now and are committed to achieving business goals through their DX initiatives. Today’s top challenges are more structural: #1: their organizations are siloed, and #2: they do not yet have the right people/skills in-house. In 2021, SPs realize that organizational and cultural changes are needed to successfully execute their digital initiatives.

COVID-19 Impact


The COVID-19 pandemic has by far had the most significant impact on SPs’ digital strategies since 2018. The pandemic created a shift in business and consumer behavior for SPs that led to a greater dependence on secure network connectivity. With countries on lockdown and organizations worldwide shifting to a work-from-home model, SPs experienced a significant increase in demand for bandwidth for connectivity services.

IDC’s Service Provider Digital Readiness research tightly correlates digital maturity to improving business outcomes. The results of this year’s study revealed that Pioneer SPs had implemented digital technologies and created a level of business resiliency that enabled them to respond more quickly to the effects of the pandemic. According to IDC research, 73% of Pioneers were exceptionally prepared for COVID-19 compared to only 15% for all other SPs.

Source: cisco.com

Saturday, 29 May 2021

Cisco Secure Firewall insertion using Cisco cAPIC in Azure

In today’s world, enterprises are undergoing a transition to innovate rapidly, keep up with the competition, and increase application agility to meet ever-changing customer demands. To achieve these goals, they often choose the hybrid cloud infrastructure approach, choosing different infrastructure environments to deploy different types of applications. Some applications are best suited for hosting on-premises, whereas others are better suited for hosting in public cloud. Thus, hybrid cloud is the new normal for many organizations. However, in a hybrid cloud environment, the challenge is to maintain a uniform enterprise operational model, comply with corporate security policies, and gain visibility across the hybrid environments.

Read More: 300-710: Securing Networks with Cisco Firepower (SNCF)

Cisco Cloud Application Centric Infrastructure (Cisco Cloud ACI) is a comprehensive solution that provides:

◉ simplified operations

◉ consistent security policy management

◉ visibility across multiple on-premises data centers and public clouds or hybrid cloud environments

◉ unified security policy for the hybrid cloud

◉ extension of on-premises Layer 7 security to the public cloud

In an on-premises Cisco ACI data center, the Cisco Application Policy Infrastructure Controller (APIC) is the single point of policy configuration and management for all the Cisco ACI switches deployed in the data center. Cisco ACI Multi-Site Orchestrator (MSO) provides a seamless way to interconnect multiple Cisco ACI data centers. MSO is a software solution that serves as a single point of policy orchestration and visibility across multiple geographically dispersed ACI sites.

Cisco Cloud APIC runs natively on supported public clouds to provide automated connectivity, policy translation, and enhanced visibility of workloads in the public cloud. Cisco Cloud APIC translates all the policies received from MSO and programs them into cloud-native constructs such as VNets (Virtual Network), application security groups, network security groups, outbound rules, inbound rules, etc. This new solution brings a suite of capabilities to extend on-premises data centers into true hybrid cloud architectures, helping drive policy and operational consistency regardless of where your applications reside. Also, it provides a single point of policy orchestration across hybrid environments, operational consistency, and visibility across clouds.
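To illustrate the kind of translation Cloud APIC performs, the sketch below maps a simplified ACI contract onto Azure NSG-style rules. The field names on both sides are illustrative stand-ins, not the real APIC or Azure schemas, and the contract itself is hypothetical.

```python
def contract_to_nsg_rules(contract):
    """Translate a simplified ACI contract into Azure NSG-style rules,
    in the spirit of the Cloud APIC policy mapping.  Field names are
    illustrative, not the actual APIC or Azure resource schemas."""
    rules = []
    for prio, f in enumerate(contract["filters"], start=100):
        rules.append({
            "name": f"{contract['name']}-{f['name']}",
            "priority": prio,                     # Azure NSG rules require a priority
            "direction": "Inbound",
            "access": "Allow",
            "protocol": f["protocol"].capitalize(),
            "destinationPortRange": str(f["port"]),
            "sourceApplicationSecurityGroup": contract["consumer_epg"],
            "destinationApplicationSecurityGroup": contract["provider_epg"],
        })
    return rules

# Hypothetical contract between a web EPG (consumer) and an app EPG (provider).
web_contract = {
    "name": "web-to-app",
    "consumer_epg": "web-epg",
    "provider_epg": "app-epg",
    "filters": [{"name": "https", "protocol": "tcp", "port": 443}],
}
```

Each ACI filter becomes one prioritized allow rule, with the consumer and provider EPGs expressed as application security groups, which mirrors the construct translation described above.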

Figure 1: Cisco ACI architecture for hybrid cloud

Figure 1 above shows the overall high-level architecture of Cisco Cloud ACI with Cisco ACI Multi-Site Orchestrator acting as a central policy controller, managing policies across on-premises Cisco ACI data centers, as well as Azure environment with each cloud site being abstracted by its own Cloud APICs.

Traditional firewall integration in on-prem Data Centers


To enable scalable and manageable network security in larger data center networks, on-prem Cisco Secure Firewalls (ASA and FTD) are integrated into existing ACI deployments as "unmanaged" firewall devices (Cisco ASAv and FTDv/NGFWv). While existing ACI contracts can easily be leveraged to enforce security policies within a single network security zone, inserting ASA/FTD firewalls allows segmented workload security for inter-zone traffic, reducing the load on ACI leaf switches.

Hybrid Cloud


The modern data center is a hybrid ecosystem: some applications reside in classic on-prem environments, others are hosted in public cloud environments, and some are co-located in both. Cisco Cloud ACI provides a uniform mechanism for data center operations, policy management, and visibility across an environment spanning multiple on-prem, cloud, and hybrid infrastructure components. To navigate seamlessly between ACI-aware data centers and cloud-native environments like AWS or Azure, the Cisco cloud application policy infrastructure controller (cAPIC) functions as a universal translator that maps ACI-specific constructs (like service graphs, contracts, and endpoint groups) into CSP-specific language (like VNets, VPCs, or network security groups).

Endpoint groups (EPGs) represent applications running in cloud, on-prem, or hybrid environments. Service graphs represent L4-L7 devices inserted between EPGs, with ACI contracts and filtering rules defining the scope and boundaries of inter-EPG communication. cAPIC uses user-defined routing (UDR) to automatically program network- or application-centric rules based on the specific policy configuration and contracts that apply to different EPGs. While cAPIC automatically configures the network needs of most elements in a service graph, cloud-native firewalls (like on-prem firewalls in a traditional ACI-aware data center) are treated as unmanaged entities, with firewall configuration managed outside of cAPIC.

NOTE: Granular and accurate mapping between these two network policy models is crucial to ensure the correct deployment of network policies across Cisco ACI and Microsoft Azure. Figure 2 below shows how Cloud APIC handles this policy mapping.

Figure 2: Cisco ACI Policy Model to Microsoft Azure Policy Model mapping

Securing Azure with virtual ASA and FTD solutions


Cisco has validated an architecture for ASAv and NGFWv insertion in Azure using Cisco cAPIC L7 service insertion. The following deployment scenarios were validated as part of this effort.

◉ Multi-node (NGFWv LB sandwich)
◉ North/South and East/West traffic flows
     ◉ Spoke to Internet (N/S)
     ◉ Spoke to Spoke (E/W)
     ◉ Inter-region Spoke to Spoke (E/W)
     ◉ Internet to Spoke (N/S)
◉ Multi-AZ and multi-hub architecture

Use case 1: Spoke to Internet (N/S traffic flows)


Test Scenario: Traffic from the workload destined to the internet is forwarded to Azure internal load balancer (ILB). ILB load balances traffic from the consumer EPGs to the internet through multiple Cisco Secure Firewalls (NGFWv).

Figure 3: Spoke to Internet (N/S traffic flows)

The network topology above shows Cisco Secure Firewall in the hub VNET (overlay 2) in Azure. A service graph with an ILB redirect steers traffic to the Cisco Secure Firewalls.

Traffic Flow

◉ The consumer sends traffic to the ILB.
◉ The ILB receives the traffic and forwards it to a firewall.
◉ The firewall receives the traffic, applies security policy, and sends it out via its outside interface. Outbound traffic is SNATed on the firewall.

Consumer ——> NLB [redirect] + FTD [SNAT] ——> Internet
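The SNAT step in this flow can be illustrated with a small simulation: the firewall rewrites the packet's source to its outside interface address and keeps a translation table so return traffic can be mapped back to the original consumer. The addresses and the port-allocation scheme below are made up for the example.

```python
def snat(packet, outside_ip, nat_table, base_port=50000):
    """Apply source NAT the way the firewall does for outbound traffic:
    rewrite the source to the outside interface and remember the mapping
    so return traffic can be translated back.  The sequential port
    allocation is an illustrative simplification."""
    key = (packet["src"], packet["sport"])
    if key not in nat_table:
        nat_table[key] = base_port + len(nat_table)   # allocate a translated port
    # Build the translated packet without mutating the original.
    return dict(packet, src=outside_ip, sport=nat_table[key])

nat_table = {}
pkt = {"src": "10.0.1.5", "sport": 12345, "dst": "203.0.113.7", "dport": 443}
out = snat(pkt, outside_ip="198.51.100.10", nat_table=nat_table)
```

After translation, the internet-bound packet carries the firewall's outside address, so return traffic naturally arrives back at the same firewall node, which then reverses the mapping.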

Use case 2: Spoke to spoke multi-node inter-VPC, intra-region traffic flow enablement


Test scenario: Traffic from the consumer EPG to the provider EPG is load-balanced through multiple Cisco Secure Firewalls.

Figure 4: Spoke to spoke multi-node inter-VPC, intra-region traffic flow enablement

The network topology above shows Cisco Secure Firewall in the hub VNET (overlay 2) in Azure. The service graph uses a network load balancer redirect, Cisco Secure Firewalls, and an application load balancer.

Traffic flow

◉ The consumer sends traffic to the ILB.
◉ The ILB receives the traffic and forwards it to a firewall.
◉ The firewall receives the traffic, applies security policy, and sends it to the ALB.
◉ The ALB then sends it to the provider workloads.

Consumer ——> NLB [redirect] + FTD [SNAT] ——> ALB ——> Multiple Providers

Source: cisco.com