Saturday 5 June 2021

“Hello IKS”… from Terraform Cloud!

Organizations are seeking uniformity in tools and procedures

Tracking industry trends, many legacy enterprise applications are being modernized into microservices architectures and containerized. Some of these microservices and heritage apps will remain on-prem, while others will make their way to public clouds. DevOps teams have been very successful in leveraging open source tools, such as Terraform, for public cloud infrastructure provisioning. Enterprises are now seeking to bring that cloud experience on-prem by providing their DevOps and application developers with IT services like CaaS (Containers as a Service).


Organizations are seeking uniformity in tools and procedures that they use to orchestrate their cloud stacks across public and private clouds to host these containerized workloads.

Intersight Kubernetes Service (IKS) container management platform

The debate on container orchestration frameworks has pretty much concluded (at least for now!) and Kubernetes is the clear winner. Organizations have successfully leveraged Kubernetes services (AKS, EKS, GKE, etc.) from public clouds, and Terraform has played a prominent role in their CI/CD toolchains. To support containerized workload deployments and operations, Cisco Intersight includes IKS (Intersight Kubernetes Service), a SaaS-delivered, turn-key container management platform for multicloud, production-grade Kubernetes.

The following use case highlights the recently announced integration between Cisco Intersight and HashiCorp Cloud for Business.

Cisco Intersight and HashiCorp Cloud for Business use case

In this blog, we will walk through a simple use case where:

◉ A cloud admin offers CaaS (Containers as a Service) in their service catalog, leveraging IKS (Intersight Kubernetes Service) to set up the IP pools and Kubernetes policies for an app team in their enterprise

◉ An app DevOps engineer then leverages those policies to provision an IKS cluster based on the app developers’ specifications for the cluster, and finally

◉ An App Developer would deploy a sample app.

The above will leverage TFCB (Terraform Cloud For Business), IST (Intersight Service for Terraform), IKS (Intersight Kubernetes Service), Intersight Terraform Provider and Helm Terraform provider.

The following assumes that configuration and provisioning are all done through the Terraform Cloud UI (traditional ClickOps). Watch out for subsequent blogs that will address the same flow using Intersight APIs for end-to-end programmability.

Role of a Cloud Admin

You will provision the following targets in Intersight and verify that they have a Connected operational status:


You will set up the Terraform config files and workspaces for provisioning IP pools and policies for the app team, and execute the Terraform plan in TFCB. An example can be found here.

Role of an App DevOps

Based on the infrastructure requirements provided by your app team, you will set up the Terraform config files and workspaces to provision an IKS cluster leveraging the policies configured by your cloud admin, then execute the Terraform plan in TFCB. An example of a config file to provision a single-node IKS cluster can be found here:

Role of an App Developer

You will set up the Terraform config files and workspaces for deploying a sample app on the IKS cluster provisioned by your DevOps. An example of the config file to deploy a sample app using the Terraform Helm Provider can be found here:

Sandbox and learning lab

A sandbox and a learning lab are available here. They let the user wear the hat of each of the above personas and walk through a sample deployment exercise:

The following captures a very high-level view of the sequence across the various tools in the sandbox and is largely self-explanatory. The sandbox simulates your on-prem infrastructure:


Check out this DevNet CodeExchange entry if you would like to experiment with a single-node cluster in your own vSphere infrastructure.

Behind the scenes…

The following highlights the value add of the Cisco Intersight and TFCB integration in simplifying and securing the provisioning of private cloud resources such as Kubernetes clusters and applications on-prem.


Source: cisco.com

Thursday 3 June 2021

Too Fast Too Furious with Catalyst Wi-Fi 6 MU-MIMO

Servicing many clients that send small packets is inefficient with pre-Wi-Fi 6 technology, because the overhead incurred by the preamble and other mechanisms tends to dominate. OFDMA is ideally suited for this scenario: it divides up the channel and services up to 37 users (at 80 MHz bandwidth) simultaneously, which amortizes the overhead. OFDMA improves system efficiency, but it does not necessarily improve throughput.
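The amortization effect is easy to see with a quick back-of-the-envelope sketch. The airtime figures below are made-up illustrations, not measured values; only the 26-tone RU counts per channel width come from the 802.11ax numerology.

```python
# Illustrative sketch (not Cisco code): how OFDMA amortizes fixed
# per-transmission overhead (preamble, SIFS, ACK, etc.) across users.
# Airtime numbers are hypothetical; RU counts follow 802.11ax.

# Maximum 26-tone resource units (RUs) per channel width
RU26_PER_CHANNEL = {20: 9, 40: 18, 80: 37}

def airtime_per_user(n_users, payload_us=100, overhead_us=60):
    """Average airtime cost per user for one small-packet exchange."""
    # Without OFDMA each user pays the full overhead; with OFDMA the
    # overhead is paid once and shared by all users in the transmission.
    without_ofdma = payload_us + overhead_us
    with_ofdma = payload_us + overhead_us / n_users
    return without_ofdma, with_ofdma

legacy, ofdma = airtime_per_user(RU26_PER_CHANNEL[80])
print(f"Per-user airtime, 37 users: legacy={legacy:.1f}us ofdma={ofdma:.1f}us")
```

Note that the per-user payload time is unchanged, which is why OFDMA improves efficiency rather than raw throughput.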

MU-MIMO (Multi-User, Multiple input, Multiple output) creates spatially distinct separate channels between the transmitter and each of a small number of receivers such that each receiver hears only the information intended for itself, and not the information intended for other receivers. This means that the transmitter can, by superposition, transmit to a few receivers simultaneously, increasing the aggregate throughput by a factor equivalent to the number of receivers being serviced.

Cisco’s Catalyst 9800 series WLC with IOS XE 17.6.1 (currently Beta) introduces a new Access Point scheduler design that efficiently serves multiple clients at the same time while incurring minimal sounding overhead, which in turn yields data rates close to the PHY rate even in dense environments. These advancements are currently supported on Catalyst 9130 and Catalyst 9124 series Access Points. Let’s first understand MU-MIMO concepts and then evaluate its performance.

Beamforming and MU-MIMO

Beamforming radio waves using an array of phased antennas has been known for decades. More recently, the same principles have been used to produce MU-MIMO, where multiple simultaneous beams provide an independent channel for each user.

Similar principles apply in the audio domain where speakers can be phased to direct sound to a particular location. The idea is to adjust the phases of each speaker such that the sound adds constructively at the point where the listener is, and destructively at all other locations.

Consider a sound, Sr, played through an array of four speakers, with the sound for each speaker adjusted by phasors Q1r through Q4r so that the signal strength at the red listener, Lr, is maximized and the signal strength at the blue listener, Lb, is minimized.


Using superposition, we can take each message, impose the appropriate phase adjustment, and add the signals just before they go into the speakers. This way we can send two different messages at the same time, but each listener will hear only the message intended for them.


Note the importance of spatial separation – Lb and Lr are hearing their respective messages because the phasors were optimized to deliver each sound to their specific location. If one of the listeners moves from his position, he will no longer hear his message.

If a third person enters the picture and stands close to the speakers, he will hear the garbled sound of both messages simultaneously.

Consider this in the context of Wi-Fi, where the speakers are replaced by antennas and the signal processing to control the phasors and generate digital messages at a certain data rate is done in the AP. Since both messages can be transmitted simultaneously, one could theoretically double the aggregate data rate. The same approach can be used to service more clients simultaneously, so where is the limit? Practically, there are limits to the accuracy with which the phasors can be set, and there are reflections that cause “cross talk” and other imperfections that limit the throughput gains that can be achieved.
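The superposition idea can be demonstrated numerically with a toy zero-forcing precoder: invert the channel matrix so that each receiver's cross-talk terms cancel. The complex channel gains below are hypothetical, and this is a conceptual sketch, not an 802.11 implementation.

```python
# Toy sketch of superposition/beamforming via zero-forcing precoding.
# Two antennas, two receivers; channel gains H are made-up values.
H = [[1 + 0.2j, 0.5 - 0.1j],
     [0.4 + 0.3j, 1 - 0.2j]]   # H[receiver][antenna]

def inv2x2(m):
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Precoder W = H^-1: the antenna signals x = W @ s are the "phasor-adjusted"
# superposition of both messages, chosen so cross-talk cancels at each receiver.
W = inv2x2(H)
s = [1 + 0j, -1 + 0j]          # message symbols for the red and blue listener
x = [W[0][0] * s[0] + W[0][1] * s[1],
     W[1][0] * s[0] + W[1][1] * s[1]]

# What each listener actually hears: y = H @ x
y = [H[0][0] * x[0] + H[0][1] * x[1],
     H[1][0] * x[0] + H[1][1] * x[1]]
print(y)  # each listener recovers only their own symbol
```

If a receiver moves, its row of H changes and the cancellation no longer holds, which is the numerical analogue of the listener stepping out of position.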

Sniffing in the context of MU-MIMO is more complicated because of the spatial significance. Note that placing a sniffer close to the AP will produce the same garbled-message effect we discussed earlier. The sniffer probe must be placed physically close to the device being sniffed, and generally one sniffer probe is required per device.

System Overview and Test infrastructure


In this MU-MIMO test, we are using the octoScope (now part of Spirent) STACK-MAX testbed. On the infrastructure side, Cisco’s Catalyst 9800 WLC running IOS XE 17.6.1 (Beta code) and a Catalyst 9130 Access Point are used. The C9130 AP supports up to 8×8 uplink and downlink MU-MIMO with eight spatial streams. The Pal-6E is Wi-Fi 6 capable and can simulate up to 256 stations or act as a sniffer probe.


The STApal is a fully contained STA based upon the Intel AX210 chipset, running on its own hardware platform. All the test chambers are completely isolated from the outside world, and signal paths between them are controlled using fully shielded attenuators, so that reliable and repeatable measurements can be made. The chambers are lined with an RF absorptive foam to significantly reduce internal reflections and prevent standing waves.

For this MU-MIMO test we are using up to four STAs. The RF path connects signals from the C9130 AP through to individual STAs. We are using the multipath emulator (MPE) in LOS, or IEEE Channel Model A, mode. Each pair of antennas is fed into a group of four clients as shown in the diagram below. As we have seen, spatial separation is a requirement for successful MU-MIMO operation. This is achieved by placing antennas in the corners of the anechoic test chamber to get the best spatial separation, which allows four independent MU-MIMO streams to STAs in the four groups of four.


Practical testing


To demonstrate the MU-MIMO gains we placed C9130 AP in the center of the chamber and ran downlink UDP traffic to the STAs attached to the antennas in the box corners.

First, we did this with MU-MIMO switched off and started with one STA. We noted that the throughput was just a little over 1000 Mbps, a little less than the 1200 Mbps PHY rate. After 20 seconds we introduced another STA and saw that the aggregate throughput stayed at 1000 Mbps, but the two STAs shared the channel, each achieving 500 Mbps. 20 seconds later we introduced a third STA. Again the aggregate throughput stayed the same at 1000 Mbps, and the three STAs shared the channel to get a little over 300 Mbps each. Introduction of the fourth STA followed the same pattern, with the aggregate remaining unchanged and each STA receiving 250 Mbps.


We repeated the experiment, this time with MU-MIMO switched on.

Starting with one STA we achieved the familiar 1000 Mbps. After 20 seconds we introduced the second STA and observed that the aggregate had increased to 2000 Mbps, which is significantly higher than the PHY rate. We also noted that each STA was still receiving nearly the 1000 Mbps it received before. Unlike the previous experiment where the STAs shared the channel, here they are each able to fully utilize their own channel independently of each other.


Adding a third STA increased the aggregate to 2200 Mbps, with each of the three STAs still receiving about 730 Mbps. Addition of a fourth STA results in an aggregate throughput of 2100 Mbps with each STA receiving 525 Mbps, a two-fold increase over single-user operation.
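The two experiments above reduce to simple arithmetic, which makes the gain easy to verify. The aggregates below are the approximate measured values quoted in the text.

```python
# Quick arithmetic check of the two experiments described above,
# using the approximate measured aggregates from the text.

def per_sta(aggregate_mbps, n_sta):
    """Each STA's share when an aggregate is split across n STAs."""
    return aggregate_mbps / n_sta

# Single-user mode: aggregate stays ~1000 Mbps and is shared
su = {n: per_sta(1000, n) for n in (1, 2, 3, 4)}

# MU-MIMO mode: measured aggregate grows with added STAs
mu_aggregate = {1: 1000, 2: 2000, 3: 2200, 4: 2100}
mu = {n: per_sta(agg, n) for n, agg in mu_aggregate.items()}

gain_4sta = mu[4] / su[4]   # 525 / 250 = 2.1x
print(f"Per-STA at 4 STAs: SU={su[4]:.0f} Mbps, MU={mu[4]:.0f} Mbps "
      f"({gain_4sta:.1f}x gain)")
```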

The graph below summarizes the results.


Verdict


MU-MIMO exploits the spatial separation of receivers to direct independent messages to each of the receivers simultaneously. This allows much more efficient use of the medium and increases the aggregate data the network can deliver. The Catalyst 9130 AP’s pioneering scheduler design offers superior throughput gains in multi-user transmission scenarios, an outcome of higher MCS rates, low sounding overhead, and efficient dynamic packet scheduling.

DL and UL MU-MIMO, along with OFDMA, are enabled by default on a WLAN. These features are available on 9800 series wireless controllers in existing releases, but the enhancements discussed above will be available from the 17.6.1 (currently Beta) release onwards.

Source: cisco.com

Tuesday 1 June 2021

Scalable Security with Cisco Secure Firewall Cloud Native

Today, companies invest in making their security controls scalable and dynamic to meet the ever-increasing demand on their network(s). In many cases, the response is a massive shift to Kubernetes® (K8s®) orchestrated infrastructure that provides a cloud-native, scalable, and resilient infrastructure.

This is where Cisco Secure Firewall Cloud Native (SFCN) comes in. It gives you the flexibility to provision, run, and scale containerized security services. Cisco Secure Firewall Cloud Native brings together the benefits of Kubernetes and Cisco’s industry-leading security technologies, providing a resilient architecture for infrastructure security at scale.

Figure 1 – Cisco Secure Firewall Cloud Native platform overview

The architecture depicted above shows a modular platform that is scalable, resilient, DevOps friendly, and Kubernetes-orchestrated. In the initial release of Cisco Secure Firewall Cloud Native, we have added support for CNFW (L3/L4 + VPN) in AWS. Future releases will add support for CNTD (L7) security and other cloud providers.


Key capabilities of Cisco Secure Firewall Cloud Native include:

◉ Modular and scalable architecture
◉ Kubernetes orchestrated deployment
◉ DevOps friendly with Infrastructure-as-Code support (IaC)
◉ Data externalization for stateless services via a high-performance Redis™ database
◉ Multi-AZ, multi-region, and multi-tenant support

Figure 2 – Cisco Secure Firewall Cloud Native platform components

The architecture depicted above shows the Cisco Secure Firewall Cloud Native platform, which uses Amazon EKS, Amazon ElastiCache™, and Amazon EFS with industry-leading Cisco VPN and L3/L4 security controls for the edge firewall use case. The administrator can manage Cisco Secure Firewall Cloud Native infrastructure using kubectl + YAML or Cisco Defense Orchestrator (CDO). Cisco provides APIs, CRDs, and Helm™ charts for this deployment. The platform uses custom metrics and the Kubernetes horizontal pod autoscaler (HPA) to scale pods horizontally.
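The HPA behavior the platform relies on follows the standard Kubernetes scaling rule, desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric). A minimal sketch of that rule; the session-count metric and thresholds here are made-up illustrations, not Cisco defaults:

```python
# Sketch of the Kubernetes HPA scaling rule (not Cisco code):
#   desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric)
import math

def desired_replicas(current_replicas, current_metric, target_metric,
                     min_replicas=1, max_replicas=10):
    desired = math.ceil(current_replicas * current_metric / target_metric)
    # HPA clamps the result to the configured replica bounds
    return max(min_replicas, min(max_replicas, desired))

# e.g. 2 enforcement pods averaging 180 VPN sessions vs. a target of 100
print(desired_replicas(2, 180, 100))  # scales out to 4
```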

Key components include:

◉ Control Point (CP): The Control Point is responsible for config validation, compilation and distribution, licensing, and route management. CP pods accept configuration from REST APIs, kubectl + YAML, or Cisco Defense Orchestrator.

◉ Enforcement Point (EP): CNFW EP pods are responsible for L3/L4 and VPN traffic handling and VPN termination.

◉ Redirector: The redirector pod is responsible for intelligently load balancing remote access VPN traffic. When the redirector receives a request, it consults the Redis DB and provides the Fully Qualified Domain Name (FQDN) of the enforcement pod handling the fewest VPN sessions.

◉ Redis DB: The Redis database holds information on VPN sessions. The redirector uses this information to enable smart load balancing and recovery.
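The redirector's selection logic can be sketched in a few lines. This is a hypothetical illustration, not the actual implementation: the FQDNs and session counts are invented, and a plain dict stands in for the Redis (Amazon ElastiCache) session data.

```python
# Hypothetical sketch of the redirector's "fewest VPN sessions" selection.
# A dict stands in for the per-node session counts stored in Redis.
vpn_sessions = {
    "ep-1.vpn.mydomain.com": 412,
    "ep-2.vpn.mydomain.com": 97,
    "ep-3.vpn.mydomain.com": 233,
}

def least_loaded(sessions):
    """Return the enforcement-point FQDN with the fewest active sessions."""
    return min(sessions, key=sessions.get)

print(least_loaded(vpn_sessions))  # -> ep-2.vpn.mydomain.com
```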

The following instance types are supported for each component.


Initial use-cases:

◉ Scalable Remote Access VPN architecture
◉ Scalable Remote Access VPN architecture with smart load balancing and session resiliency
◉ Scalable DC backhauls
◉ Multi-tenancy
◉ Scalable cloud hub
◉ Scalable edge firewall

Scalable Remote Access VPN architecture

Cisco Secure Firewall Cloud Native provides an easy way to deploy scalable remote access VPN architecture. It uses custom metrics and horizontal pod autoscaler to increase or decrease the number of CNFW Enforcement Points as needed. The Control Point controls configuration, routing, and Amazon Route 53™ configuration for the auto-scaled Enforcement Point.

Figure 3 – Scalable Remote Access VPN architecture

Traffic flow:

1. The remote VPN user sends a DNS query for vpn.mydomain.com. Amazon Route 53 keeps track of all CNFW nodes and has an “A record” for each node, with weighted load balancing enabled for incoming DNS requests.
2. The remote VPN user receives the “Elastic IP – EIP” of the outside interface of a CNFW node.
3. The remote VPN user connects to the CNFW node. Each node provides a separate VPN pool for proper routing.
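Weighted DNS resolution in step 1 means each node's record is answered with probability weight / sum(weights). A small sketch of that arithmetic; the record names and weights below are invented for illustration:

```python
# Sketch of weighted DNS record selection, as in Route 53 weighted routing:
# each record is returned with probability weight / sum(weights).
# Record names and weights are hypothetical.
records = {"eip-node-1": 10, "eip-node-2": 10, "eip-node-3": 20}

def answer_probabilities(weighted_records):
    """Probability that each record is returned for a given query."""
    total = sum(weighted_records.values())
    return {name: w / total for name, w in weighted_records.items()}

probs = answer_probabilities(records)
print(probs)  # node-3 answers half of all queries
```

This is why the smart load balancing variant in the next section exists: weights distribute queries, but they do not account for how many VPN sessions each node is actually carrying.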

Scalable Remote Access VPN architecture, with smart load balancing and session resiliency

Cisco Secure Firewall Cloud Native architecture with smart load balancing uses Amazon ElastiCache (Redis DB) to store VPN session information. The redirector node consults the Redis database to perform load balancing based on VPN session count, instead of weighted load balancing.

The Control Point controls configuration, routing, redirector configuration, and Route 53 configuration for the auto-scaled enforcement point.

Figure 4 – Scalable Remote Access VPN architecture with smart load balancing and session resiliency

Traffic flow:

1. The remote VPN user sends a DNS query for vpn.mydomain.com, and vpn.mydomain.com points to the CNFW redirector.

2. The remote VPN user then sends the request to the redirector.

3. The CNFW redirector periodically polls the Redis database (Amazon ElastiCache) to find the FQDN of the Cisco Secure Firewall Cloud Native node with the fewest VPN sessions. The CNFW redirector provides the FQDN of the least-loaded CNFW node to the remote VPN user.

4. The remote user resolves the FQDN; an “A” record is automatically added for each CNFW Enforcement Point in Amazon Route 53.

5. The remote VPN user connects to the CNFW node that has the least number of VPN sessions.

Scalable DC backhauls

The autoscaled Enforcement Points can form a tunnel back to the data center automatically. Cisco provides a sample Kubernetes deployment to enable this functionality.

Figure 5 – Scalable DC backhaul

Multi-tenancy

This architecture provides multi-tenancy using cloud-native constructs such as namespaces, EKS clusters, nodes, subnets, and security groups.

Figure 6 – Multi-tenancy

Scalable cloud hub

This architecture provides a scalable cloud architecture using CNFW, Amazon EKS, and other cloud native controls.

Figure 7 – Scalable cloud hub

Scalable edge firewall

This architecture provides a scalable architecture using CNFW, Amazon EKS, and other cloud-native controls.

Figure 8 – Scalable edge firewall

Licensing

Cisco Secure Firewall Cloud Native is available starting with ASA 9.16. This release brings CNFW (L3/L4 + VPN) security with Bring Your Own Licensing (BYOL), using Cisco Smart Licensing.

◉ Licenses are based on CPU cores used
◉ Supports multi-tenancy
◉ Unlicensed Cisco Secure Firewall Cloud Native EP runs at 100 Kbps
◉ AnyConnect license model is the same as the ASA AnyConnect license model

Source: cisco.com

Monday 31 May 2021

Service Provider Digital Initiatives Drive Sustainable Business Value


Service Provider Digital Maturity Index

In 2018, IDC developed the Service Provider Digital Maturity Index to define five levels of SP digital maturity. This index provides a roadmap to help SPs assess the progress of their digital journey versus their desired end state. The development of the Service Provider Digital Maturity Index was driven by IDC’s Service Provider Digital Readiness Survey, which analyzed the digital initiatives of 400 SPs worldwide and the business value derived from these efforts. The index measures digital maturity across seven SP domains (See Figure 1).


Figure 1. SP Seven Domain Model

In 2021, IDC conducted an updated study that produced a solid basis of comparison with 2018 results and provided the ability to identify where SPs have made progress and where challenges still exist, both at an overall level and within specific domains.

As SPs embarked on their digital journey, there were three key common business outcomes that all SPs were trying to achieve: improved customer experience, revenue growth/profitability, and development of new sources of revenue. The surveys conducted in 2018 and 2021 consistently show that Pioneers, which correspond to the highest level of digital maturity, enjoyed significant improvements in areas considered most strategic for SPs.

The 2021 survey results revealed that Pioneer SPs experienced the most significant business performance gains. They not only reported improved operational metrics such as reduced costs and process cycle times but importantly also reported improvements in key business outcomes such as revenue, profitability, customer satisfaction, and customer retention. Figure 2 depicts the most notable business improvements for Pioneer SPs compared to Ad-Hoc SPs, which correspond to the lower level of digital maturity.

Figure 2. Pioneer SP Business Outcome Improvement


2021: The Evolution of SP Digital Maturity


In the three years since IDC developed the 2018 Service Provider Digital Maturity Index, several market dynamics have impacted SP strategies, including an increased focus on customer experience, the SP drive to reduce costs, and increased competition from traditional and non-traditional players. These factors helped shape SPs’ digital strategies and initiatives. In the 2021 survey, we observed the following three key changes from 2018 related to SP digital transformation readiness.

1. The Role and Influence of IT

In 2018, most SPs had only a limited number of digital initiatives and had no real digital strategy.  According to the 2018 survey, 62% of organizations had developed DX (digital transformation) task teams responsible for driving individual DX projects (as there were no DX programs back then). Yet, most initiatives (76%) were driven by senior business leadership. IT primarily had a supporting role with responsibility only for implementing technologies related to DX projects. When it came to driving DX projects, IT ranked third behind business leadership and special DX organizations. In 2021, the roles for driving DX initiatives have shifted; IT has become the primary enabler (for 66% of DX initiatives), followed by specialized groups (30%) and senior business leaders (25%).

2. Shifting Business Priorities

In 2018, SPs were trying to recover from a couple of lean revenue years as demand for services shifted. In the 2018 survey, IDC asked SPs to rank the reasons why they undertook DX initiatives. Improving customer experience (#1) and driving revenue growth (#2) topped the list. Then COVID-19 happened, and SP businesses shifted their priorities. In 2021, revenue growth dropped to #4, giving way to a focus on organizational efficiency (#1) and operational efficiency (#2). Customer experience is #3.

3. Challenges Are Less Daunting

In 2018, IDC asked respondents, “what are your top three challenges in meeting your Digital Transformation (DX) priorities?” A slight majority of SPs – 55% – replied, “our culture is too risk-averse.” SPs appear to be less risk-averse now and are committed to achieving business goals through their DX initiatives. Today’s top challenges are more structural: #1: their organizations are siloed, and #2: they do not yet have the right people/skills in-house. In 2021, SPs realize that organizational and cultural changes are needed to successfully execute their digital initiatives.

COVID-19 Impact


The COVID-19 pandemic has by far had the most significant impact on SPs’ digital strategies since 2018. The pandemic created a shift in business and consumer behavior for SPs that led to a greater dependence on secure network connectivity. With countries on lockdown and organizations worldwide shifting to a work-from-home model, SPs experienced a significant increase in demand for bandwidth for connectivity services.

IDC’s Service Provider Digital Readiness research tightly correlates digital maturity to improving business outcomes. The results of this year’s study revealed that Pioneer SPs had implemented digital technologies and created a level of business resiliency that enabled them to respond more quickly to the effects of the pandemic. According to IDC research, 73% of Pioneers were exceptionally prepared for COVID-19 compared to only 15% for all other SPs.

Source: cisco.com

Saturday 29 May 2021

Cisco Secure Firewall insertion using Cisco cAPIC in Azure

In today’s world, enterprises are undergoing a transition to innovate rapidly, keep up with the competition, and increase application agility to meet ever-changing customer demands. To achieve these goals, they often choose the hybrid cloud infrastructure approach, choosing different infrastructure environments to deploy different types of applications. Some applications are best suited for hosting on-premises, whereas others are better suited for hosting in public cloud. Thus, hybrid cloud is the new normal for many organizations. However, in a hybrid cloud environment, the challenge is to maintain a uniform enterprise operational model, comply with corporate security policies, and gain visibility across the hybrid environments.


Cisco Cloud Application Centric Infrastructure (Cisco Cloud ACI) is a comprehensive solution that provides:

◉ simplified operations

◉ consistent security policy management

◉ visibility across multiple on-premises data centers and public clouds or hybrid cloud environments

◉ unified security policy for the hybrid cloud

◉ extension of on-premises layer-7 security to the public cloud

In an on-premises Cisco ACI data center, the Cisco Application Policy Infrastructure Controller (APIC) is the single point of policy configuration and management for all the Cisco ACI switches deployed in the data center. Cisco ACI Multi-Site Orchestrator (MSO) provides a seamless way to interconnect multiple Cisco ACI data centers. MSO is a software solution representing a single point of policy orchestration and visibility across multiple geographically dispersed ACI sites.

Cisco Cloud APIC runs natively on supported public clouds to provide automated connectivity, policy translation, and enhanced visibility of workloads in the public cloud. Cisco Cloud APIC translates all the policies received from MSO and programs them into cloud-native constructs such as VNets (Virtual Network), application security groups, network security groups, outbound rules, inbound rules, etc. This new solution brings a suite of capabilities to extend on-premises data centers into true hybrid cloud architectures, helping drive policy and operational consistency regardless of where your applications reside. Also, it provides a single point of policy orchestration across hybrid environments, operational consistency, and visibility across clouds.

Figure 1: Cisco ACI architecture for hybrid cloud

Figure 1 above shows the overall high-level architecture of Cisco Cloud ACI with Cisco ACI Multi-Site Orchestrator acting as a central policy controller, managing policies across on-premises Cisco ACI data centers, as well as Azure environment with each cloud site being abstracted by its own Cloud APICs.

Traditional firewall integration in on-prem Data Centers


To enable scalable and manageable network security in larger data center networks, on-prem Cisco Secure Firewalls (ASA and FTD) are integrated as “unmanaged” firewall devices (Cisco ASAv and FTDv/NGFWv) into existing ACI deployments. While existing ACI contracts can easily be leveraged to enforce security policies within a single network security zone, insertion of ASA/FTD firewalls allows for segmented workload security for inter-zone traffic, thus reducing the load on ACI leaf switches.

Hybrid Cloud


The modern data center is a hybrid ecosystem, where some applications reside in classic on-prem environments, others are hosted in public cloud environments, and some are co-located in both. Cisco Cloud ACI provides a uniform mechanism for data center operations, policy management, and visibility across an environment spanning multiple on-prem, cloud, and hybrid infrastructure components. To seamlessly navigate between ACI-aware data centers and cloud-native environments like AWS or Azure, the Cisco Cloud Application Policy Infrastructure Controller (cAPIC) functions as a universal translator that maps ACI-specific constructs (like service graphs or contracts) into CSP-specific constructs (like network security groups or VNets).

Endpoint groups (EPGs) represent applications running in cloud, on-prem, or hybrid environments. Service graphs represent L4-L7 devices inserted between EPGs, with ACI contracts and filtering rules defining inter-EPG communication scope and boundaries. cAPIC uses user-defined routing (UDR) to automatically program network- or application-centric rules based on the specific policy configuration and contracts that apply to different EPGs. While cAPIC automatically configures the network needs of most elements in a service graph, cloud-native firewalls (like on-prem firewalls in a traditional ACI-aware data center) are treated as unmanaged entities, with firewall configuration managed outside of cAPIC.
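To make the translation idea concrete, here is a greatly simplified, hypothetical sketch of the kind of mapping cAPIC performs: an ACI contract between two EPGs becomes inbound/outbound rules expressed in Azure NSG terms. The field names and helper are illustrative only, not the actual cAPIC data model.

```python
# Hypothetical, heavily simplified illustration of ACI-to-Azure policy
# translation: one contract -> a pair of NSG-style rules.
# Field names are illustrative, not the real cAPIC/Azure schema.
def contract_to_nsg_rules(consumer_epg, provider_epg, port, proto="Tcp"):
    inbound = {
        "direction": "Inbound",
        "source": consumer_epg,        # EPG mapped to an application security group
        "destination": provider_epg,
        "destinationPortRange": str(port),
        "protocol": proto,
        "access": "Allow",
    }
    # Mirror rule permitting the return/outbound direction of the contract
    outbound = {**inbound, "direction": "Outbound"}
    return [inbound, outbound]

rules = contract_to_nsg_rules("web-epg", "db-epg", 1433)
print(rules[0]["direction"], rules[1]["direction"])
```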

NOTE: Granular and accurate mapping between these two network policy models is crucial to ensure the correct deployment of network policies across Cisco ACI and Microsoft Azure. Figure 2 below shows how Cloud APIC handles this policy mapping.

Figure 2: Cisco ACI Policy Model to Microsoft Azure Policy Model mapping
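Conceptually, the translation cAPIC performs can be pictured as converting an ACI contract filter into an Azure NSG-style security rule. The sketch below is illustrative only: the field names and structure are assumptions for this example, not cAPIC's actual data model.

```python
# Illustrative only: translate a (simplified) ACI contract filter into an
# Azure NSG-style security rule. Field names are hypothetical; cAPIC's real
# object model is far richer.

def aci_filter_to_nsg_rule(contract_name, filt, priority, src_prefix, dst_prefix):
    """Map one ACI filter entry to an Azure network-security-group rule dict."""
    return {
        "name": f"{contract_name}-{filt['name']}",
        "priority": priority,                       # NSG rules are priority-ordered
        "direction": "Inbound",                     # consumer -> provider traffic
        "access": "Allow",                          # ACI contracts are allow-lists
        "protocol": filt["protocol"].capitalize(),  # e.g. "Tcp"
        "sourceAddressPrefix": src_prefix,          # consumer EPG endpoints
        "destinationAddressPrefix": dst_prefix,     # provider EPG endpoints
        "destinationPortRange": f"{filt['dport_from']}-{filt['dport_to']}",
    }

rule = aci_filter_to_nsg_rule(
    "web-to-app",
    {"name": "https", "protocol": "tcp", "dport_from": 443, "dport_to": 443},
    priority=100,
    src_prefix="10.1.0.0/24",
    dst_prefix="10.2.0.0/24",
)
print(rule["name"], rule["protocol"], rule["destinationPortRange"])
```

The key point the figure makes is that this mapping must be exact: a contract that allows only TCP/443 between two EPGs has to surface as an equally narrow NSG rule on the Azure side.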

Securing Azure with virtual ASA and FTD solutions


Cisco has validated an architecture for ASAv and NGFWv insertion in Azure using cAPIC L4-L7 service insertion. The following deployment scenarios were validated as part of this effort.

◉ Multi-node (NGFWv LB sandwich)
◉ North/South and East/West traffic flows
     ◉ Spoke to Internet (N/S)
     ◉ Spoke to Spoke (E/W)
     ◉ Inter-region Spoke to Spoke (E/W)
     ◉ Internet to Spoke (N/S)
◉ Multi-AZ and Multi-Hub architecture

Use case 1: Spoke to Internet (N/S traffic flows)


Test Scenario: Traffic from the workload destined to the internet is forwarded to Azure internal load balancer (ILB). ILB load balances traffic from the consumer EPGs to the internet through multiple Cisco Secure Firewalls (NGFWv).

Figure 3: Spoke to Internet (N/S traffic flows)

The above network topology depicts Cisco Secure Firewall in the hub VNET (overlay 2) in Azure. We have a service graph with ILB redirect to Cisco Secure Firewalls.

Traffic Flow

◉ The consumer sends traffic to the ILB.
◉ The ILB receives the traffic and forwards it to the firewall.
◉ The firewall receives the traffic, applies security policy, and sends it out via its outside interface. Outbound traffic is source-NATed (SNAT) on the firewall.

Consumer ——> NLB [redirect] + FTD [SNAT] ——> Internet

Use case 2: Spoke to spoke multi-node inter-VPC, intra-region traffic flow enablement


Test scenario: Traffic from the consumer EPG to the provider EPG is load-balanced through multiple Cisco Secure Firewalls.

Figure 4: Spoke to spoke multi-node inter-VPC, intra-region traffic flow enablement

The above network topology depicts Cisco Secure Firewall in the hub VNET (overlay 2) in Azure. We use a service graph with network load balancer redirect, Cisco Secure Firewall, and an application load balancer.

Traffic flow

◉ The consumer sends traffic to the ILB.
◉ The ILB receives the traffic and forwards it to the firewall.
◉ The firewall receives the traffic, applies security policy, and sends it to the ALB.
◉ The ALB then sends it to the providers (workloads).

Consumer ——> NLB [redirect] + FTD [SNAT] ——> [ ALB ——> Multiple Providers ]

Source: cisco.com

Thursday 27 May 2021

Application Aware Networking with Cisco SD-WAN

Part 1: Seamless Connectivity From Anywhere

Today’s applications are no longer confined behind the walls of data centers. They are migrating to the cloud and being offered as Software-as-a-Service (SaaS). These applications are built on a distributed architecture model and run across different systems. This puts new demands on the network to become more application aware, programmable, and intelligent to deliver the best application experience to any user on any device, anywhere.


In a Multicloud world, networks must become more application aware, programmable, and intelligent.
To be effective, modern networks must provide:

1. Dynamic, Seamless Connectivity from Anywhere
2. Programmable Functionality for optimal Quality of Experience
3. Adaptive Responsiveness to Application Health & Needs

Part 1 of this blog series will focus on providing seamless connectivity from anywhere. In part 2, we will explore how applications can “program” and cooperatively work with the network to provide a higher quality of experience by leveraging telemetry and other network health insights.

Cloud Networking


Enterprises are rapidly adopting a multi-cloud network architecture in which applications migrate from data centers to hybrid clouds based on specific organizational needs. To achieve this, they are seeking new solutions that facilitate the implementation and orchestration of a secured, optimized hybrid multi-cloud network that is independent of the physical topology. Ultimately, they want maximum flexibility with minimal disruption and as little effort as possible.

The variety of technologies used across clouds makes cloud interconnection quite challenging.

With public cloud adoption moving at an unprecedented pace, cloud service providers (CSP) attract customer workloads and applications onto their clouds through differentiated services and capabilities. The variety of technologies used across clouds makes cloud interconnection quite challenging as cloud networking architects must constantly learn new techniques to leverage each CSP’s specific networking features and functions. To simplify management, network professionals need products and solutions that can abstract these cloud networking differences through a single orchestration layer that bridges the different cloud systems.

SaaS


SaaS applications are by nature elastic and can serve users from anywhere. The modern network must not only provide connectivity to the sites where SaaS applications are hosted, but also be capable of dynamically selecting the site that will deliver the optimal SaaS experience to the user.


Application-Driven SD-WAN for Optimized Office 365 Experience

Cisco SD-WAN: The Bridge to the Cloud


Cisco enables enterprises to take advantage of SD-WAN “overlay” networks to seamlessly connect users, machines, and applications across clouds and data centers. Moreover, our SD-WAN solution is fully aware of SaaS applications and provides an optimal path to them by programming the network with best path selection and adjusting it based on a combination of application and network telemetry.

Cisco SD-WAN with Azure Cloud

In partnership with Amazon AWS, Microsoft Azure, and Google GCP, we are enabling and delivering a superior application experience to securely connect users and devices to any cloud from anywhere.  By leveraging the inherent programmability of our SD-WAN fabric and that of our cloud partners, we have created a simplified yet highly consistent workflow that connects two endpoints while abstracting the differences of each underlying cloud.

Intent-Based SD-WAN Policy with Google Service Directory

Automated site-to-AWS-cloud connectivity, including Transit Gateway orchestration via Cisco SD-WAN vManage, with security policy and network segmentation. Full visibility into inter-region transit traffic and telemetry with TGW Network Manager.

In partnership with Megaport and Equinix, we can establish high-bandwidth connectivity to virtual private clouds.

Establishing high-bandwidth connectivity to virtual private clouds with Megaport and Equinix.

This is achieved in a matter of minutes via AWS Direct Connect and Azure ExpressRoute:

◉ Automatic discovery of the VPC/VNET and other cloud assets is based on the account credentials of the user.

◉ The user selects which private network and cloud network need to be connected. The automation engine takes care of all system activities, including:

     ◉ Automated deployment of the required gateways in the cloud and partner infrastructure to build the fabric.
     ◉ Provisioning of Cloud Gateways (TGW, Virtual Network Gateway) and setup of the route peering relationship between the SD-WAN fabric and the cloud network.
     ◉ Programming of the desired policies, including security across cloud and customer sites.

◉ The system automatically starts monitoring the new network setup and its components. It also extends assurance capabilities to the network admin for an optimal user experience and network connectivity.
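The workflow above can be sketched as an ordered automation plan. This is a simplified illustration only: the step descriptions and function name are assumptions for this sketch, not vManage's actual API.

```python
# Simplified sketch of the cloud-connect automation workflow described above.
# Step names are illustrative; the real workflow is driven by vManage.

def build_connect_plan(cloud, vpc_ids, site_ids):
    """Return the ordered steps to connect on-prem sites to cloud networks."""
    plan = [f"discover assets in {cloud} using the user's account credentials"]
    plan += [f"deploy cloud/partner gateway for {v}" for v in vpc_ids]
    plan.append("provision TGW / Virtual Network Gateway and set up route peering")
    plan += [f"program security policy for site {s}" for s in site_ids]
    plan.append("start monitoring the new network setup")
    return plan

for step in build_connect_plan("aws", ["vpc-a"], ["site-1", "site-2"]):
    print("-", step)
```

The point of modeling it as a plan is that every step after the user's selection is system-driven: the admin declares intent once and the automation engine executes the sequence end to end.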

The figure below shows the high-level network architecture used to achieve seamless connectivity between customer sites and clouds.

The high-level network architecture used to achieve seamless connectivity between customer sites and clouds.

Cisco SD-WAN: The Bridge to SaaS


Access to applications is frequently over the public Internet. This can cause significant variation in the user experience based on their location and the specific site serving the SaaS application. Cisco SD-WAN has additional intelligence that enables it to discover all the different sites that serve a SaaS application. This enables the system to identify the optimal path to these sites based on the user location. Path selection is dynamic and takes into account the current status of each site’s connectivity. While direct internet access often provides the best experience, if the closest site is saturated or overloaded, then the system can select an alternate site to more efficiently serve the application to the user over the SD-WAN fabric.
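That selection logic can be illustrated with a toy model: score each candidate path from probe measurements and pick the best. The weights below are arbitrary assumptions for the sketch; Cisco's Cloud OnRamp for SaaS uses its own quality-of-experience computation, not this formula.

```python
# Toy best-path selection from loss/latency probes. The scoring formula is an
# arbitrary illustration, not Cisco's actual SaaS path-quality algorithm.

def path_score(loss_pct, latency_ms):
    """Higher is better: penalize packet loss heavily, latency moderately."""
    return max(0.0, 100.0 - 4.0 * loss_pct - 0.1 * latency_ms)

def best_path(probes):
    """probes: {path_name: (loss_pct, latency_ms)} -> name of the best path."""
    return max(probes, key=lambda p: path_score(*probes[p]))

probes = {
    "DIA-local":   (0.5, 30),   # direct internet access from the branch
    "via-gateway": (0.0, 80),   # alternate exit over the SD-WAN fabric
}
print(best_path(probes))
```

With healthy local probes, direct internet access wins; if the local exit's loss or latency degrades (say, 5% loss), the fabric path scores higher and traffic shifts there, which is exactly the behavior the paragraph describes.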

To provide optimal connectivity and the best application experience possible in our cloud and SaaS era, a flexible, intelligent, and programmable network is required. The network plays a crucial role in providing the bridge to the delightful application experience that business professionals and consumers have become accustomed to and now demand 100% of the time.

In this blog we have discussed how a flexible and programmable network provides seamless connectivity to the hybrid cloud and to SaaS. In addition, we have shared how we have solved the challenges of network disparity across clouds by providing key integration, abstraction, and automation of cloud and networks via Cisco SD-WAN. In the next blog in this series, we will explore Quality of Experience and how it can be best served by an intelligent SD-WAN network.

Source: cisco.com

Tuesday 25 May 2021

The Best 3 New Tools We’ve Added to the DevNet Tech Stack


Cisco DevNet is the platform on which we host Cisco’s constantly-updated API documentation, 1,100+ Code Exchange entries, 54 DevNet Sandboxes, and 740 Learning Labs. Here at DevNet, we believe in practicing what we teach. So our platform is built on an evolving, modern hybrid infrastructure. We’ve learned a lot in the building, and refining, of this ambitious collection of services.

DevNet is built on over 100 on-prem and cloud-based microservices, updated in real-time by a team of engineers, designers, and content creators.  Of the tools we use to run this site, there are a few new ones that are proving extremely useful. Here are the top 3 new tools we use, and some interesting things we’ve learned about them.

It was not always so spiffy

As I said, DevNet is based on an evolving architecture. But when we launched the resource, in 2014, it was all hosted in virtual machines on a server in a data center. That system ran out of headroom quickly during our growth years. It wasn’t scaling to keep up with user growth, and it was increasingly hard for us to add features to the service.

In 2017, we moved DevNet over to a cloud-hosted microservice architecture using Kubernetes. We now use Prometheus, Elasticsearch, Kibana, Grafana, and other cloud-native platforms. With the flexibility inherent in this setup, and using a DevOps development model, we have found it much easier to add new services and applications to DevNet, as well as to add capacity to keep up with increasing user demand.

Recently, we’ve been driving to enhance application performance and user experience on DevNet. That means working with performance monitoring, full-stack observability, and other solutions across the stack. So I want to focus on tools that make this easier, and what we’ve learned from them.

ThousandEyes

Developer.cisco.com serves a global community, and the performance and experience can vary a lot based on where the users are. Anyone who is responsible for a large-scale global deployment knows that localized performance issues can occur due to service outages, localized connectivity issues, or countless other issues. We use ThousandEyes Cloud Agents, which give us visibility into the underlying network paths our applications use, from our users’ endpoints to our hosted applications in the cloud. Cloud Agents allow us to understand the user experience in specific global regions.

It’s important to us that ThousandEyes’ “synthetic” testing is not at all theoretical. It’s based on real-time internet monitoring and is extremely valuable to us.

ThousandEyes Path Visualization View for DevNet. This shows the network paths discovered when ThousandEyes agents visit the DevNet application page from different locations around the world.

We also use ThousandEyes synthetic transaction tests to automate UI testing, a process that would otherwise be repetitive and manual. ThousandEyes has a Recorder and IDE we can use to record users’ browser interactions, which then creates Transaction tests that integrate into a CI/CD pipeline.  As we push changes into our repo, ThousandEyes can execute tests continuously. And this continuous UI testing happens in the global regions users connect from.
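A pipeline stage consuming those results might gate a deploy on them. The sketch below assumes transaction results have already been fetched from the ThousandEyes API and normalized into simple dicts; the field names (`region`, `completed`, `duration_ms`) are assumptions for this example, not ThousandEyes' response schema.

```python
# Hedged sketch: gate a CI/CD stage on synthetic transaction results.
# Assume 'results' were fetched from the ThousandEyes API beforehand and
# normalized into these hypothetical fields.

def gate_deploy(results, max_ms=3000):
    """Fail the stage if any region's transaction failed or ran too slowly.

    Returns (ok, offending_regions).
    """
    failures = [
        r["region"]
        for r in results
        if not r["completed"] or r["duration_ms"] > max_ms
    ]
    return (len(failures) == 0, failures)

results = [
    {"region": "us-east", "completed": True, "duration_ms": 1200},
    {"region": "eu-west", "completed": True, "duration_ms": 4100},
]
ok, bad = gate_deploy(results)
print("deploy allowed:", ok, "slow/failed regions:", bad)
```

Because the transaction tests run continuously from agents in the regions users actually connect from, a gate like this catches regressions that a single-location UI test would miss.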

ThousandEyes solves very practical problems for us, and we continue to find new ways to apply it. For example, ThousandEyes alerts can be pushed into the AppDynamics interface. This consolidates network and application performance issues into one pane of glass, allowing NetOps and AppOps to break down silos and quickly identify the root cause of issues.

AppDynamics (“AppD”)


To optimize user experience, we need to know where our applications can be performance-optimized. Previously, we were using Prometheus with a custom Grafana dashboard to monitor our Kubernetes cluster – including each microservice’s usage of CPU, Memory, and other resources. But this combination did not provide the performance data that we really needed to get insight into our applications to improve them.

Today, with Cisco AppDynamics, we get extremely granular and valuable data, like the response time from individual API calls, or the time for full business transactions that employ microservices  and databases – for example, delivering educational content that calls our Learning Lab services, our video system, our quiz engine, our progress tracking microservice and our MySQL server. AppD gives us monitoring from the end user experience of our applications down into the infrastructure in the data center.

This level of intelligence is incredibly valuable. It gives us the opportunity to optimize the responsiveness of user experience by focusing our efforts in precisely the right places.

AppD offers different ways to meter and monitor various parts of the application stack. For example, we use its Javascript agent to monitor the browser application UI. We also use AppD’s SDK for Go for most of our microservices, which are written in Golang. We use a few different database services, like Cassandra and MongoDB; AppD has agents for those processes.

With this suite of monitoring functions, we can monitor end-to-end application performance, which is incredibly valuable to us. It helps us pinpoint performance issues quickly, and makes it easier to create new high-performing services.
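The kind of per-call timing the agents capture can be approximated by hand with a decorator. This is a toy illustration only: real AppD agents instrument code without changes like this and correlate calls into end-to-end business transactions; the service name below is made up.

```python
import time
from functools import wraps

# Toy illustration of the per-call timing an APM agent records automatically.
# Real AppD agents need no such hand-rolled wrapper.

TIMINGS = {}  # service name -> list of call durations in ms

def timed(name):
    """Record the wall-clock duration of each call under `name`."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                TIMINGS.setdefault(name, []).append(
                    (time.perf_counter() - start) * 1000.0
                )
        return wrapper
    return decorator

@timed("quiz-engine.grade")  # hypothetical DevNet microservice operation
def grade(answers):
    return sum(answers)

grade([1, 0, 1])
print(len(TIMINGS["quiz-engine.grade"]), "sample(s) recorded")
```

Collecting durations per named operation is what makes the "which API call is slow" question answerable; the agents then roll these spans up into the business-transaction views described above.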

DevNet Application Performance Monitoring Using AppD. This diagram shows how DevNet leverages AppD for full performance monitoring from the frontend, to the backend and database.

Istio Service Mesh


Last year, we started testing Istio in our non-production environment. We have been using it to manage the microservices our platform is built on. We found its benefits compelling: it helps us manage, balance, and troubleshoot our increasingly complex web of software.

However, it also takes time for people to learn to use this tool. And it’s a high-stakes game, as misconfigurations in a service mesh can have profound impacts on networks and applications.

But we have found that it is worth the time investment in learning to use the tool. Istio has become one of our key diagnostic and management tools. We use it to analyze the impact on inter-service traffic from what can be subtle changes in microservice configurations. It allows us to apply automation to shift workloads when needed – securely and without having to rebuild the connections between services.

Istio also allows us to configure policy-based access control across microservices, again under programmatic control.

There is one pilot in particular that has a lot of promise: we are using Istio in A/B testing, to dynamically divide traffic in various proportions between stable and new apps and configurations. It makes configuring and running tests much easier and more efficient, and is helping us improve the pace of our application development. It frees up engineers so they can get to work on other projects more quickly.
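The A/B split described above maps to Istio's weighted routing. The sketch below builds a minimal VirtualService manifest as a plain dict; the host and subset names are made-up examples, and route weights must sum to 100.

```python
# Minimal sketch of an Istio VirtualService for weighted A/B traffic
# splitting. Host and subset names are hypothetical examples.

def ab_virtual_service(host, stable_subset, canary_subset, canary_pct):
    """Route canary_pct% of traffic to the canary subset, the rest to stable."""
    assert 0 <= canary_pct <= 100
    return {
        "apiVersion": "networking.istio.io/v1beta1",
        "kind": "VirtualService",
        "metadata": {"name": f"{host}-ab"},
        "spec": {
            "hosts": [host],
            "http": [{
                "route": [
                    {"destination": {"host": host, "subset": stable_subset},
                     "weight": 100 - canary_pct},
                    {"destination": {"host": host, "subset": canary_subset},
                     "weight": canary_pct},
                ],
            }],
        },
    }

vs = ab_virtual_service("learning-labs", "stable", "v2-canary", 10)
print(vs["spec"]["http"][0]["route"][0]["weight"])  # 90
```

Because the split is just a field in the routing config, shifting 10% of traffic to a new build (or rolling it back) is a single declarative change rather than a redeploy, which is what makes this style of A/B testing cheap to run.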

Source: cisco.com