Saturday, 20 February 2021

Introduction to Terraform with ACI – Part 4

Cisco Developer, Cisco ACI, Cisco DevNet, Cisco Terraform, Cisco Preparation, Cisco Exam Prep, Cisco Learning, Cisco Guides

If you haven’t already seen the Introduction to Terraform post, please have a read through. This section will cover the Terraform remote backend using Terraform Cloud.

1. Introduction to Terraform

2. Terraform and ACI

3. Explanation of the Terraform configuration files

Code Example

https://github.com/conmurphy/intro-to-terraform-and-aci-remote-backend.git

For an explanation of the Terraform files, see the following post. The backend.tf file will be added in the current post.

Lab Infrastructure

You may already have your own ACI lab to follow along with; if you don’t, you might want to use the ACI Simulator in the DevNet Sandbox.

ACI Simulator AlwaysOn – V4

Terraform Backends

An important part of using Terraform is understanding where and how state is managed. In the first section Terraform was installed on my laptop when running the init, plan, and apply commands. A state file (terraform.tfstate) was also created in the folder in which I ran the commands.


This is fine when learning and testing concepts; however, it does not typically work well in a shared or production environment. What happens if my colleagues also want to run these commands? Do they have their own separate state files?

These questions can be answered with the concept of the Terraform Backend.

“A backend in Terraform determines how state is loaded and how an operation such as apply is executed. This abstraction enables non-local file state storage, remote execution, etc.


Here are some of the benefits of backends:

◉ Working in a team: Backends can store their state remotely and protect that state with locks to prevent corruption. Some backends such as Terraform Cloud even automatically store a history of all state revisions.

◉ Keeping sensitive information off disk: State is retrieved from backends on demand and only stored in memory. If you’re using a backend such as Amazon S3, the only location the state ever is persisted is in S3.

◉ Remote operations: For larger infrastructures or certain changes, terraform apply can take a long, long time. Some backends support remote operations which enable the operation to execute remotely. You can then turn off your computer and your operation will still complete. Paired with remote state storage and locking above, this also helps in team environments.”



As you can see from the Terraform documentation, there are many backend options to choose from.

In this post we’ll set up the Terraform Cloud remote backend.



We will use the same Terraform configuration files as we saw in the previous posts, with the addition of the “backend.tf” file. See the code examples above for a post explaining the various files.
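As a sketch of what that backend.tf might contain (the organization and workspace names here are placeholders for the ones you create in the steps below):

```hcl
# backend.tf - sketch of a Terraform Cloud remote backend configuration.
# "example-org" and "production" are placeholder names; substitute the
# organization and workspace you create in Terraform Cloud.
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "example-org"

    workspaces {
      name = "production"
    }
  }
}
```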

For this example you will need to create a free account on the Terraform Cloud platform.

◉ Create a new organization and provide it a name


◉ Create a new CLI Driven workspace


◉ Once created, navigate to the “General” page under “Settings”


◉ Change the “Execution Mode” to “Local”


You have two options with Terraform Cloud

◉ Remote Execution – Let Terraform Cloud maintain the state and run the plan and apply commands

◉ Local Execution – Let Terraform Cloud maintain the state but run the plan and apply commands on your local machine

In order to have Terraform Cloud run the commands, you will either need public access to the endpoints or need to run an agent in your environment (similar to how Intersight Assist configures on-premises devices).

Agents are available as part of the Terraform Cloud business plan. For the purposes of this post Terraform Cloud will manage the state while we will run the commands locally.


◉ Navigate back to the production workspace and you should see that the Queue and Variables tabs have been removed.

◉ Copy the example Terraform code and update the backend.tf file (the Terraform files can be found in the GitHub repo above)


◉ Navigate to the Settings link at the top of the page and then API Tokens


◉ Create an authentication token
◉ Copy the token
◉ On your local machine create a file (if it doesn’t already exist) in the home directory with the name .terraformrc
◉ Add the credentials/token information that was just created for your organization. Here is an example:

CONMURPH:~$ cat ~/.terraformrc
credentials "app.terraform.io" {
  token = "<ENTER THE TOKEN HERE>"
}

◉ You should now have the example Terraform files from the GitHub repo above, an updated backend.tf file with your organization/workspace, and a .terraformrc file with the token to access this organization
◉ Navigate to the folder containing the example Terraform files and your backend.tf file
◉ Run the terraform init command. If everything is correct you should see the remote backend initialised and the ACI plugin installed


◉ Run the terraform plan and terraform apply commands to apply the configuration changes.
◉ Once complete, if the apply was successful, have a look at your Terraform Cloud organization.
◉ In the States tab you should now see the first version of your state file. When you look through this file you’ll see it’s exactly the same as the one you previously had on your local machine; however, now it’s under the control of Terraform Cloud.
◉ Finally, if you want to collaborate with your colleagues, you can all run the commands locally and have Terraform Cloud manage a single state file. (You may need to investigate state locking depending on how you are managing the environment.)


Source: cisco.com

Friday, 19 February 2021

Introduction to Terraform with ACI – Part 3

Cisco Developer, Cisco ACI, Cisco DevNet, Cisco Network Automation, Cisco Terraform, Cisco Preparation, Cisco Exam Prep

If you haven’t already seen the previous Introduction to Terraform posts, please have a read through. This “Part 3” will provide an explanation of the various configuration files you’ll see in the Terraform demo.

Introduction to Terraform

Terraform and ACI

Code Example

https://github.com/conmurphy/terraform-aci-testing/tree/master/terraform

Configuration Files

You could split the Terraform backend config out from the ACI config; however, in this demo it has been consolidated.


config-app.tf

The name “myWebsite” in this example refers to the Terraform instance name of the “aci_application_profile” resource. 

The Application Profile name that will be configured in ACI is “my_website“. 

When referencing one Terraform resource from another, use the Terraform instance name (i.e. “myWebsite“).
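As a minimal sketch of this pattern (resource and attribute names follow the CiscoDevNet ACI provider, but verify them against the provider documentation; the tenant and EPG resources here are hypothetical, added purely to illustrate the reference):

```hcl
# Sketch based on the CiscoDevNet/aci provider; verify attribute names
# against the provider documentation for your version.
resource "aci_application_profile" "myWebsite" {
  tenant_dn = aci_tenant.terraform_tenant.id # hypothetical tenant resource
  name      = "my_website"                   # name configured in ACI
}

# Another resource references the profile by its Terraform instance
# name "myWebsite", not by the ACI name "my_website".
resource "aci_application_epg" "web" {
  application_profile_dn = aci_application_profile.myWebsite.id
  name                   = "web"
}
```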


config.tf

Only the key (the name of the Terraform state file) has been statically configured in the S3 backend configuration. The bucket, region, access key, and secret key would be passed as command-line arguments when running the “terraform init” command. See the following for more detail on the various options to set these arguments.
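A sketch of such a partial backend configuration (only the key is set statically; the remaining values are supplied at init time):

```hcl
# config.tf - partial S3 backend configuration; only the state file
# name (key) is set statically.
terraform {
  backend "s3" {
    key = "terraform.tfstate"
  }
}
```

The remaining settings could then be supplied on the command line, for example `terraform init -backend-config="bucket=my-state-bucket" -backend-config="region=eu-west-1"` (the bucket and region values here are placeholders).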



terraform.tfvars


variables.tf

We need to define the variables that Terraform will use in the configuration. Here are the options to provide values for these variables:

◉ Provide a default value in the variable definition below
◉ Configure the “terraform.tfvars” file with default values as previously shown
◉ Provide the variable values as part of the command line input

$ terraform apply -var 'tenant_name=tenant-01'

◉ Use environment variables starting with “TF_VAR_”

$ export TF_VAR_tenant_name=tenant-01

◉ Provide no default value in which case Terraform will prompt for an input when a plan or apply command runs
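A variable definition in variables.tf covering these options might look like the following sketch (the description text is illustrative):

```hcl
# variables.tf - the tenant_name variable used in the configuration.
variable "tenant_name" {
  type        = string
  description = "Name of the ACI tenant to configure"
  # Remove the default to have Terraform prompt for a value, or
  # override it via terraform.tfvars, -var, or TF_VAR_tenant_name.
  default     = "tenant-01"
}
```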


versions.tf


Source: cisco.com

Thursday, 18 February 2021

Win with Cisco ACI and F5 BIG-IP – Deployment Best Practices

Cisco Data Center, Cisco Preparation, Cisco Learning, Cisco Certification, Cisco Study Material

Application environments have different and unique needs for how traffic is to be handled. Some applications, due to the nature of their functionality or due to a business need, require that the application server(s) are able to see the real IP address of the client making the request to the application.

Now, when the request comes to the F5 BIG-IP, it has the option to change the real IP address of the request or to keep it intact. To keep it intact, the ‘Source Address Translation’ setting on the F5 BIG-IP is set to ‘None’.

As simple as it may sound to toggle a setting on the F5 BIG-IP, changing this setting causes a significant change in traffic flow behavior.

Let us take an example with some actual values, starting with a simple setup of a standalone F5 BIG-IP with one interface for all traffic (one-arm):

◉ Client – 10.168.56.30

◉ BIG-IP Virtual IP – 10.168.57.11

◉ BIG-IP Self IP – 10.168.57.10

◉ Server – 192.168.56.30

Scenario 1: With SNAT

From Client: Src: 10.168.56.30           Dest: 10.168.57.11

From BIG-IP to Server: Src: 10.168.57.10 (Self-IP)     Dest: 192.168.56.30

In the above scenario, the server will respond to 10.168.57.10 and the F5 BIG-IP will take care of forwarding the traffic back to the client. Here, the application server has visibility of the Self-IP 10.168.57.10 and not the client IP.

Scenario 2: No SNAT

From Client: Src: 10.168.56.30           Dest: 10.168.57.11

From BIG-IP to Server: Src: 10.168.56.30       Dest: 192.168.56.30

In this scenario, the server will respond to 10.168.56.30, and here is where the complication comes in, as the return traffic needs to go back to the F5 BIG-IP and not the real client. One way to achieve this is to set the default gateway of the server to the Self-IP of the BIG-IP, so the server sends the return traffic to the BIG-IP. But what if the server default gateway cannot be changed for whatever reason? Policy-based redirect (PBR) helps here: the default gateway of the server points to the ACI fabric, and the ACI fabric intercepts the return traffic and sends it over to the BIG-IP.

With this, the advantages of using PBR are three-fold:

◉ The server(s) default gateway does not need to point to F5 BIG-IP, but can point to the ACI fabric

◉ The real client IP is preserved for the entire traffic flow

◉ Server-originated traffic does not need to hit the BIG-IP. Without PBR, the BIG-IP would need a forwarding virtual server configured to handle that traffic; if the volume of server-originated traffic is high, this could place unnecessary load on the F5 BIG-IP

Before we get deeper into the topic of PBR, below are a few links to help you refresh some of the Cisco ACI and F5 BIG-IP concepts:

◉ Cisco ACI fundamentals

◉ SNAT and Automap

◉ F5 BIG-IP modes of deployment

Now let us look at what it takes to configure PBR using a Standalone F5 BIG-IP Virtual Edition in One-Arm mode.


To use the PBR feature on the APIC, a service graph is a MUST.



Configuration on APIC


1) Bridge domain ‘F5-BD’

◉ Under Tenant->Networking->Bridge domains->’F5-BD’->Policy
◉ IP Data plane learning – Disabled

2) L4-L7 Policy-Based Redirect

◉ Under Tenant->Policies->Protocol->L4-L7 Policy based redirect, create a new one
◉ Name: ‘bigip-pbr-policy’
◉ L3 destinations: F5 BIG-IP Self-IP and MAC
◉ IP: 10.168.57.10
◉ MAC: Find the MAC of the interface to which the above Self-IP is assigned by logging into the F5 BIG-IP (example: 00:50:56:AC:D2:81)


3) Logical Device Cluster- Under Tenant->Services->L4-L7, create a logical device

◉ Managed – unchecked
◉ Name: ‘pbr-demo-bigip-ve`
◉ Service Type: ADC
◉ Device Type: Virtual (in this example)
◉ VMM domain (choose the appropriate VMM domain)
◉ Devices: Add the F5 BIG-IP VM from the dropdown and assign it an interface
◉ Name: ‘1_1’, VNIC: ‘Network Adaptor 2’
◉ Cluster interfaces
◉ Name: consumer, Concrete interface Device1/[1_1]
◉ Name: provider, Concrete interface: Device1/[1_1]


4) Service graph template

◉ Under Tenant->Services->L4-L7->Service graph templates, create a service graph template
◉ Give the graph a name:’ pbr-demo-sgt’ and then drag and drop the logical device cluster (pbr-demo-bigip-ve) to create the service graph
◉ ADC: one-arm
◉ Route redirect: true


5) Click on the service graph created and then go to the Policy tab. Make sure the Connections for the connectors C1 and C2 are set as follows:

◉ Direct connect – True
◉ Adjacency type – L3


6) Apply the service graph template

◉ Right click on the service graph and apply the service graph
◉ Choose the appropriate consumer endpoint group (‘App’) and provider endpoint group (‘Web’), and provide a name for the new contract
◉ For the connector
◉ BD: ‘F5-BD’
◉ L3 destination – checked
◉ Redirect policy – ‘bigip-pbr-policy’
◉ Cluster interface – ‘provider’

Once the service graph is deployed, it is in the applied state and the network path between the consumer, the F5 BIG-IP, and the provider has been successfully set up on the APIC.

Configuration on BIG-IP


1) VLAN/Self-IP/Default route

◉ Default route – 10.168.57.1
◉ Self-IP – 10.168.57.10
◉ VLAN – 4094 (untagged) – for a VE the tagging is taken care by vCenter

2) Nodes/Pool/VIP

◉ VIP – 10.168.57.11
◉ Source address translation on VIP: None

3) iRule (end of the article) that can be helpful for debugging

There are a few differences in configuration when the BIG-IP is a Virtual Edition and is set up in a high-availability pair:



2) APIC: Logical device cluster

◉ Promiscuous mode – enabled

◉ Add both BIG-IP devices as part of the cluster


3) APIC: L4-L7 Policy-Based Redirect

◉ L3 destinations: Enter the Floating BIG-IP Self-IP and MAC masquerade

Configuration is complete; let’s look at the traffic flows.

Client-> F5 BIG-IP -> Server


Server-> F5 BIG-IP -> Client


In Step 2, when the traffic is returned from the server, ACI uses the Self-IP and MAC that were defined in the L4-L7 redirect policy to send traffic to the BIG-IP.


iRule to help with debugging on the BIG-IP
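As a rough sketch, an iRule logging the addresses seen when the server-side connection is established might look like the following (the log format here is illustrative, not the exact iRule from the post):

```tcl
# Logs the client and server addresses when the server-side connection
# is established, so you can verify whether the client IP was preserved.
when SERVER_CONNECTED {
    log local0. "client [IP::client_addr] -> server [IP::server_addr] via [IP::local_addr]"
}
```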


Output is seen in /var/log/ltm on the BIG-IP; look at the event <SERVER_CONNECTED>.

Scenario 1: No SNAT -> Client IP is preserved



If you are curious about the iRule output when SNAT is enabled on the BIG-IP, enable AutoMap on the virtual server on the BIG-IP.

Scenario 2: With SNAT -> Client IP not preserved.


Tuesday, 16 February 2021

For Banks – The Contact Center is Your Best Friend

Cisco Prep, Cisco Preparation, Cisco Learning, Cisco Guides, Cisco Career, Cisco Tutorial and Material, Cisco Learning

For years, the album that sold the most units was Carole King’s “Tapestry”. Estimates are that this record has sold more than 25 million copies. Rife with well-known songs, the album drew an interesting comment from one of its initial reviewers in 1971, who called “You’ve Got a Friend” the “core” and “essence” of the album. It didn’t hurt that James Taylor’s version also became a monster hit. Banks, too, have a friend – in their contact centers.

The malls emptied, and the contact centers filled up

The last twelve months have initiated a renaissance in contact center operations. While the modernization of contact centers had been on a steady march, the realities of 2020 suddenly presented a giant forcing function changing the customer engagement landscape in a dramatic fashion. In one fell swoop, 36 months of planned investment in modernizing contact centers accelerated into a single 12-month period. As the physical world was shut down, the digital world ramped up dramatically. Banks saw branch visits slow to a crawl, and digital and contact center interactions increased by orders of magnitude. In addition, up to 90% of contact center agents were sent home to work, with estimates that a majority of them will stay there over time as indicated by this Saddletree Research analysis:


Prior planning prevented poor performance


Fortunately, banks and credit unions were one of the key vertical markets that were relatively prepared for 2020 and were able to lean into the challenges presented, though this was not to say things went perfectly. What was behind this preparation and what were these organizations doing prior and during the crisis? And what should they do in the years ahead?

The “Digital Pivot” paid huge dividends


At their core, banks and credit unions collect deposits and loan them out at (hopefully) a profit. With money viewed as a commodity, financial services firms were one of the first industries to understand the only two sustainable differentiators they possessed were the customer experience they delivered, and their people. It is interesting that these are the main two ingredients which comprise a contact center!

For many banks prior to 2010, the biggest challenge for contact center operations consisted of navigating mergers and acquisitions when combining operations. Normalizing operations during mergers often manifested itself in giant IVR farms meant to absorb large amounts of voice traffic. Prior form factors for self-service were not known as “low-effort” propositions, and customer experience scores suffered for years. Banks as an aggregate industry dropped below all-industry averages for customer experience, after leading for years.

The mobile revolution presented a giant reset for banking customer experience. Financial institutions by and large have done an excellent job of adopting mobile applications to the delight of their customers. In response, customer experience scores in banking have steadily risen the past 10 years, and banks are near the top quartile again, only trailing consumer electronics firms and various retailers.

Banks are more like a contact center than you think


Banks and contact centers have very common characteristics. Both wrap themselves in consumer-friendly self-service applications which automate formerly manual processes that required human assistance. These include popular customer engagement platforms such as mobile applications and ATMs. In the contact center this dynamic involves speech recognition, voice biometrics, and intelligent messaging.

As self-service has become increasingly popular, the live interactions that are left over for both the branch and the contact center have become more complex, more difficult to solve on the first try, and more likely to require collaborative, cross-business resolution by the individual servicing the customer. These types of interactions are known as “outliers”. In this situation the contact center becomes, in essence, a “digital backstop” where the consumer interacts with self-service first, and then and only then seeks live assistance.

Prior planning prevents poor performance part II


The digital tsunami started in 2010 via the mass adoption of mobile applications by banks, giving this industry in particular a significant head start on the “outlier” dynamic. Therefore in 2020 when the shopping malls emptied out and contact centers filled up, banks had already been operating tacitly in the “outlier” model for a number of years and were in a better position to succeed. Applications such as intelligent call back, integrated consumer messaging, work at home agents, voice biometrics, A.I. driven intelligent chat bots, and seamless channel shift from mobile applications to the contact center were already in place to some extent for leading financial institutions.

Thinking ahead


With much of the focus on the contact center, automation in banking has been able to extend A.I. into the initial stages of customer contact. The road ahead will include wrapping A.I.-driven intelligence around contact center resources during an interaction, essentially creating a new category of resources known as “Super Agents”. In this environment, all agents in theory can perform as the best agents, because learnings from the best performers are automatically applied throughout the workforce. In addition, Intelligent Virtual Assistants, or IVAs, will act as “digital twins” for contact center agents – automatically looking for preemptive answers to customers’ questions, and automating both contact transcripts and after-call work documentation and follow-up.

Yes, if you’re a bank, you have a friend in your contact center


Banks made the pivot to delivering better customer experience in their contact centers during the “Digital Pivot” of the early 2010s. From there, banks made steady progress to reclaim their CX leadership and deliver excellent customer experiences. The realities of 2020 accelerated contact center investment by at least 36 months into a 12-month window. Banks which had established leadership utilized this forcing function to accelerate a next generation of customer differentiators, firmly entrenching themselves as category leaders in the financial services industry. Other institutions can utilize these unique times to play rapid catch-up. Who benefits? Their customers.

Source: cisco.com

Friday, 12 February 2021

Cloud-based Solutions can Empower Financial Services Companies to Adapt While Cutting Costs

Cisco Prep, Cisco Tutorial and Material, Cisco Learning, Cisco Preparation, Cisco Career

IT professionals in financial services have been instrumental to ensuring the integrity of global financial markets over the last year. Their hard work has helped keep the world’s largest economies working and financial aid flowing to those who need it most.

For them, few things remain unchanged from the pre-COVID world. Many network engineers had their hands full supporting large scale migrations to remote working. But aside from that, one constant during this time of change is that IT budgets are not increasing. “Do more with less,” “Reduce costs,” and “Extract more value,” are a few common mantras. The message is clear—each dollar spent on IT projects must have a tangible business benefit associated with it. With this increased focus on efficiency and cost, now is the perfect time for financial services companies to consider investing in cloud-based IT.

Benefits of cloud-based IT

Migrating IT infrastructure to a cloud-based platform can help improve efficiency and reduce costs for finserv companies by accelerating business processes, simplifying technology, and boosting operational efficiency. Today’s reality has required businesses to rethink how to help their employees collaborate safely while working from remote locations as they begin the return to work. By leveraging cloud-based solutions, workers and IT support teams are able to troubleshoot issues quicker, reduce downtime, and lower costs both for employees and for the end-customer.  

Supporting rapid change 

Before COVID, financial services companies were embarking on their cloud journey in pockets, with the primary focus on software development environments and connections to provide staff with secure connectivity. The rapid changes required for companies to function during the early days of the pandemic necessitated quick adoption of cloud-based technologies for enterprise voice, contact centers, remote access, and network security. Projects that would have taken weeks or months were now being done in hours or days, driven by a need to get lines-of-business operational and keep companies viable. Now that the industry has successfully dealt with the crises of 2020 and has been operating in the new normal for several months, a few trends have emerged that will drive IT decisions going forward, including preparing for a return to work and facilitating future growth.

Preparing for return to work

While bank branches never closed, most campuses and offices did. Optimistic news around vaccine development and distribution has led many companies to prepare for the return to work and reconsider the landscape for the office environment.  

For example, adding cameras could help ensure compliance around masks and social distancing policies. Access sensors could help track room occupancy and ensure timely and consistent sanitation practices. In a traditional environment, implementing such practices could take up to a year. However, by taking advantage of the ability to configure a network and add components to that network without configuration of individual components, we can continue to meet the accelerated timelines required for the return to work.

Scaling for the future

Traditional companies deal with mergers and acquisitions, but for financial services companies, growth is typically purchased. Network teams are not revenue generators, and as a result, mergers have historically been underfunded and understaffed. The inevitable outcome of years or decades of that reality is a patchwork quilt of networks that are all sort-of connected. Each legacy organization retains some idiosyncrasies, issues, and non-standard hardware that requires specialized support personnel. That complexity leads to lower velocity than what lines-of-business have come to expect throughout the pandemic. Because cloud platforms provide everything needed to deploy a branch, campus, or office network, cloud adoption takes advantage of the appetite for speed that company departments have developed. This emphasizes the critical need to scale for the future growth of financial services companies and the need for simplicity.

All in all, the events of 2020 have been a catalyst for change and digital transformation within the financial services sector. Cisco Meraki offers solutions to address the challenges that come with such abrupt changes including facilitating the campus and client network, creating operational efficiencies, and reducing downtime and loss of revenue.

Source: cisco.com

Thursday, 11 February 2021

Cisco introduces Fastlane+ with advanced multi user scheduling to revolutionize real-time application experience

Cisco Tutorial and Material, Cisco Learning, Cisco Guides, Cisco Learning, Cisco Certification, Cisco Preparation

Cisco and Apple continue to work together to deliver better experiences for customers through collaboration and co-development. Our latest project, Fastlane+, builds on the popular Fastlane feature by adding Advanced Scheduling Request to take QoS management a step further by scheduling and carving out airtime for voice and video traffic on Wi-Fi 6 capable iPhone and iPad devices. This facilitates a superior experience with latency-sensitive collaboration applications such as WebEx and FaceTime.

What is Fastlane+, and why do we need it?

First and foremost, let’s take a look at the motivation behind Fastlane+. The 802.11ax standard introduced OFDMA and MU-MIMO as uplink transmission modes to allow scheduled-access-based uplink transmissions. This allows the access point (AP) to dynamically schedule uplink OFDMA or MU-MIMO based on the client’s uplink traffic type and queue depth. This decision is made on a per-Access-Category basis at the start of every transmit opportunity (TXOP), with OFDMA used for latency-centric, low-bandwidth applications; MU-MIMO, in contrast, is used when higher bandwidth is required.

With Fastlane+, the Cisco AP learns the client’s uplink buffer status using a periodic trigger mechanism known as Buffer Status Report Poll (BSRP). Nevertheless, the client devices may not be able to communicate their buffer status to the AP in a timely manner due to MU EDCA channel access restrictions and possible scheduling delays in dense environments. Additionally, the AP may not always be able to allocate adequate resource units that fulfill application requirements. Because of this, a better approximation of uplink buffer status is critical for efficient uplink scheduling.

Next, let’s compare the 802.11ax standards-based approaches for uplink scheduling: UL OFDMA and Target Wake Time (TWT). As highlighted in the chart below, with UL OFDMA the AP has absolute control over uplink scheduling, while in the case of TWT the client can pre-negotiate TWT service periods. A compromise thus needs to be made between the AP and client to improve uplink scheduling efficiency in a dense RF environment with latency-sensitive traffic.


Fastlane+ is designed to approximate better the client’s buffer status based on application requirements indicated by the client. This estimation policy significantly reduces BSRP polling overhead as compared to the default BSR based UL OFDMA scheduling. Along with obtaining key parameters for active voice and video sessions to improve uplink scheduling efficiency, Fastlane+ also solicits periodic scheduling feedback from the clients.

In a nutshell, Fastlane+ enhances the user experience for latency-sensitive voice and video applications in a high-density user environment by improving the effectiveness of estimating the uplink buffer status for the supported 802.11ax clients.

Key considerations for Fastlane+


Fastlane+ is initiated for latency-sensitive voice and video applications like Webex, FaceTime, and others, whose traffic characteristics can be better approximated. The AP indicates Fastlane+ support in the DEO IE, and clients provide Advanced Scheduling Request (ASR)-specific information, including ASR capability, ASR session parameters, and ASR statistics. This information is sent using vendor-specific Action frames protected with Protected Management Frames (PMF).

Latency becomes a concern only when there is enough contention in the medium due to high channel utilization. Consequently, Fastlane+ based uplink TXOPs are allocated only when channel utilization exceeds 50%.
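The gating behavior above can be expressed as a simple predicate. The 50% threshold comes from the article; the function and parameter names, and the check for an active ASR session, are hypothetical simplifications for illustration.

```python
# Illustrative gate for Fastlane+ uplink TXOP allocation.
# The 50% utilization threshold is from the article; everything else
# (names, the ASR-session check) is a hypothetical simplification.

CHANNEL_UTILIZATION_THRESHOLD = 0.50

def should_allocate_asr_txop(channel_utilization: float,
                             has_active_asr_session: bool) -> bool:
    """Allocate Fastlane+ uplink TXOPs only under real contention
    (channel utilization above 50%) and only for clients that have
    an active ASR session."""
    return (has_active_asr_session
            and channel_utilization > CHANNEL_UTILIZATION_THRESHOLD)
```

Below the threshold, clients contend for the medium normally and Fastlane+ stays out of the way.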

System overview for Fastlane+


The diagram below shows a bird’s-eye view of an end-to-end system supporting Fastlane+. Fastlane+ specific configuration can be managed from the controller’s GUI and CLI. Uplink latency statistics provided by the clients to the AP are also displayed on the controller; these statistics are reported per client, with or without an active ASR session.


Fastlane+ benefits:


To better understand the benefits of Fastlane+, let’s first define key performance indicators of a typical voice and video application. Mean opinion score (MOS) is a standard measure of quality of experience for voice applications. It is quantified on a scale of 1 to 5, with 5 being the highest and 1 the lowest. To put things in perspective, 3.5 is the minimum requirement for service-provider-grade quality.

For measuring video quality, we use the delay factor, which evaluates the size of the jitter buffer needed to eliminate video interruptions due to network jitter. The lower the delay factor (in milliseconds), the better the video quality.
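These two metrics lend themselves to simple threshold checks. The MOS floor of 3.5 is stated above; the delay-factor ceiling below is a hypothetical value chosen purely for illustration, as the article does not specify one.

```python
# Simple quality checks for the two KPIs described above.
# MOS_FLOOR (3.5) comes from the article; DELAY_FACTOR_CEILING_MS is a
# hypothetical illustrative value, not a published requirement.

MOS_FLOOR = 3.5                 # minimum service-provider-grade voice quality
DELAY_FACTOR_CEILING_MS = 50.0  # hypothetical acceptable jitter-buffer size

def voice_quality_ok(mos: float) -> bool:
    """MOS is scored 1-5; 3.5 or above meets service-provider grade."""
    return 1.0 <= mos <= 5.0 and mos >= MOS_FLOOR

def video_quality_ok(delay_factor_ms: float) -> bool:
    """A lower delay factor (smaller jitter buffer) means better video."""
    return delay_factor_ms <= DELAY_FACTOR_CEILING_MS
```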


Test considerations:


The results below are from a typical collaboration application, with simulation tests performed under high channel utilization in a controlled RF environment. Sixteen Wi-Fi 6-capable iPhones were used on an 80 MHz channel.


Adios to choppy voice and video calls


With Fastlane+, you get a better Wi-Fi experience when collaborating with friends and colleagues. It doesn’t matter if you are in a highly congested RF environment such as a school, office, high-density housing, shopping mall, airport, or stadium; Fastlane+ has you covered. So, when we’re all ready to come back, the network will be ready and waiting.

Fastlane+ is enabled by default on 802.11ax-capable iPhone and iPad devices running iOS 14 or later. On the infrastructure side, it is currently supported on the Cisco Catalyst 9130 Access Point. On AireOS WLC platforms, the 8.10 MR4 (8.10.142.0) release has CLI-based support for the feature. On Catalyst 9800 Series WLC platforms, the 17.4.1 release has CLI and GUI (client data monitoring) support; a GUI configuration tab will follow in later releases. Please note, the Fastlane+ feature is listed as “Advanced Scheduling Request” in the CLI and GUI.

Wednesday, 10 February 2021

Visualize, validate policy and increase remote worker telemetry with Network Analytics Release 7.3.1

We have heard it before. Securing your organization isn’t getting any easier. The remote workforce is expanding the attack surface. We need context from users and endpoints to control proper access, and IT teams need to ensure our data stores are resilient and always available to gain the telemetry they need to reduce risk. Yes, zero trust is a great approach, but network segmentation in the workplace is hard, and it can shut down critical business functions if not deployed correctly.

To answer these challenges, we are excited to announce new features in Cisco Secure Network Analytics (formerly Stealthwatch). In 7.3.1, we are introducing TrustSec-based visualizations that allow network operations and security teams to instantly validate the intent of policies. This is a big jump that provides organizations the visibility required to confidently embrace network segmentation, a critical component of the zero-trust workplace.

To answer the remote work challenge, the Cisco Secure Network Analytics team has simplified how customers obtain user and endpoint context from AnyConnect. And to ensure the expanded attack surface doesn’t increase risk, Secure Network Analytics has advanced its integration with Cisco Talos, one of the largest threat intelligence teams in the world. But there is more; read on to learn how we virtualized the Data Store to simplify how organizations big and small ensure resiliency and manage the growing volumes of data required to stay a step ahead in the arms race that is network security.

TrustSec Analytics reports offer new ways to visualize group communications between SGTs

Secure Network Analytics’ TrustSec Analytics reporting capability leverages the Report Builder application and its integration with Cisco Identity Services Engine (ISE) to automatically generate reports that map communications between Security Group Tags (SGTs), giving users unprecedented visibility into communications across different groups in their environment. For security teams that want to adopt a group-based policy management program to build network segmentation but lack the resources to pursue one, TrustSec Analytics reporting lowers the barrier to entry. Now any Secure Network Analytics user can effortlessly visualize, analyze, and drill down into any inter-group communication, adopt the right policies, and adapt them to their environment’s needs.

Figure 1. A TrustSec Analytics report generated in Secure Network Analytics that displays volumetric communications between different SGTs that have been assigned and pulled directly from ISE.

Streamline policy violation investigations with TrustSec Policy Analytics reports


TrustSec Policy Analytics reports can also be generated to assess whether policies are being violated. By clicking on any cell in the report, users can gain insights into the volume of data being sent between any two groups, how that data is being distributed, the protocols being used, what ports they are operating on, and more.

Additionally, when it comes to the typically lengthy process of determining a policy violation’s root cause, the capabilities offered by the TrustSec Policy Analytics report quite literally enable users to find the proverbial ‘offending-flow needles’ in their vast ‘network haystacks’. Rather than performing hours of cumbersome tasks such as manual searches and cross-references across different datasets, users can drill down into policy violations to view all associated IPs and related flows, associated endpoints, ISE-registered usernames, and events with timestamps in a single pane. This streamlines root cause analysis and expedites diagnosing why a policy violation occurred.

Figure 2. A TrustSec Policy Analytics report generated in Secure Network Analytics with intuitive color-coded cells and labels that indicate whether communications between different SGTs are violating a policy and require further investigation.

Increased Remote Worker Telemetry


Amidst the recent explosion of people working from home, organizations face new challenges related to monitoring and securing their remote workforces as they connect back to the network from anywhere and on anything.

Secure Network Analytics has made endpoint Network Visibility Module (NVM) data the primary telemetry source to meet these challenges, effectively eliminating the need for NetFlow to gain user and device context. Customers are gaining the following benefits:

◉ Simplified remote worker monitoring with endpoint NVM data becoming a primary telemetry source

◉ More efficient remote worker telemetry monitoring by collecting and storing on-network NVM endpoint records without the need for NetFlow

◉ Increased Endpoint Concentrator ingestion bandwidth to support up to 60K FPS

◉ NVM driven custom alerting and endpoint flow context

Figure 3. Examples of NVM driven custom alerting and endpoint flow context within the Secure Network Analytics Manager.

Introducing the Secure Network Analytics Virtual Data Store!


The Secure Network Analytics Data Store is now supported as a virtual appliance offering. Similar to the Data Store introduced in 7.3.0, the virtual Data Store offers a new and improved database architecture design for Secure Network Analytics that enables new ways of storing and interacting with data more efficiently. A virtual Data Store supports a 3-node database cluster with flow ingest from virtual Flow Collectors. This new architecture decouples ingest from data storage to offer the following benefits:

◉ Query and reporting response times improved by an order of magnitude (up to 10x faster!)

◉ Scalable and long-term telemetry storage capabilities with no need for additional Flow Collectors

◉ Enterprise-class data resiliency to allow for seamless data availability during virtual machine failures

◉ Increased data ingest capacity of up to 220K flows per second (FPS)

◉ Flexible deployment options – as a fully virtualized appliance, the Virtual Data Store does not require additional rack space and can be rapidly deployed using your existing infrastructure

Enhanced security analytics


As threats continue to evolve, so do the analytical capabilities of Secure Network Analytics to deliver fast and high-fidelity threat detections. The cloud-based machine learning engine has been updated to include:

◉ System alarms have been ported to appear as notifications in the Web UI

◉ Brand new confirmed threat detections related to ransomware, remote access trojans (RAT) and malware distribution

Figure 4. New confirmed ransomware, remote access trojan (RAT) and malware distribution-related threat detections.