Monday, 22 February 2021

The Best Kept Secret in Mobile Networks: A Million Saved is a Million Earned


I’m about to let you in on a little secret; actually, it’s a big one. What if I told you that you could save millions, even tens of millions, of dollars in two hours or less? If you were the CFO of a mobile operator, would this get your attention? What if I told you that the larger and busier your mobile network is, the more money you could save? What if the savings were in the hundreds of millions or even billions of dollars?

Now that I have your attention, let me tell you a bit about the Cisco Ultra Traffic Optimization (CUTO) solution. I could tell you that this is a vendor-agnostic solution for both the RAN and the mobile packet core. I could tell you how CUTO uses machine learning algorithms or about proactive cross-traffic contention detection. I could tell you about elephant flows and how the CUTO software optimizes the packet scheduler in RAN networks. I could write a whole blog on the CUTO technology and how it works, but I won’t. I’ll save the technical details for another time.

What I want to share today are two important facts about our CUTO solution:

1) We have helped multiple operators install and turn on CUTO, network-wide, in less than two hours.

2) Real-world deployments are demonstrating material savings, including a recent trial with a large Tier 1 operator that resulted in calculated savings of several billion dollars.

Out of all the segments of the network that operators invest in, spectrum and RAN tend to be a top priority, from both a CAPEX and an OPEX standpoint. This is because mobile network operators have thousands, if not tens of thousands, of cell towers with accompanying RAN equipment. The amount of equipment required, and the cost of a truck roll per site, leads to enormous expenses when you want to upgrade or augment the RAN network. With network traffic growing faster every year, the challenge of staying ahead of demand also grows. Major RAN network augmentations can take months, if not years, to complete, especially when you factor in governmental regulations and permitting processes.

This is where one aspect of the CUTO solution stands out: its ease and speed of deployment. CUTO’s purpose is to optimize the RAN and improve the efficiency of spectrum use. Instead of sending an army of service trucks and technicians to each and every cell tower, CUTO is deployed in the core of the network, which is a very small number of sites (data centers). Making things even easier, CUTO can be deployed on commercial off-the-shelf (COTS) servers, or better yet, on the existing Network Functions Virtualization Infrastructure (NFVI) stack already in the mobile network core. Installing and deploying CUTO is as easy as spinning up a few virtual machines. In less than two hours, real operators on live networks have managed to install and deploy CUTO network-wide. Service providers talk a lot about MTTD (mean time to detect an issue) and MTTR (mean time to repair it), but with CUTO they can talk about MTTME: mean time to millions earned (saved).

If you look at the present mode of operation (PMO) for most mobile operators, there’s a very typical workflow in RAN networks. Customers’ consumption of video and an insatiable appetite for bandwidth eventually lead to a capacity trigger in the network, alerting the mobile network operator that a cell site is congested. To handle the alert, operators typically have three options:

1) They might be able to “re-farm” spectrum and transition 3G spectrum to 4G. If this is an option, it typically leads to about a 40% spectrum gain at a price tag of about $22K in CAPEX per site, with very nominal OPEX costs. This option is relatively quick and simple, and leads to a good capacity improvement.

2) They might be able to deploy a new spectrum band or increase antenna sectorization density. This option leads to about a 20%-30% capacity gain but comes at a price tag of about $80K in CAPEX per site and about $20K in OPEX. This is a relatively long and costly process for a marginal capacity improvement, and finding high-quality “beachfront” spectrum is impossible in many markets around the world.

3) If neither option 1 nor option 2 is possible, the operator would need to build a new cell site and cell split (tighten reuse). A new site leads to about an 80% capacity gain depending on the site’s placement, user distribution, terrain, shadowing, etc., and comes at a cost of roughly $250K in CAPEX and about $65K/year in OPEX. This is an extremely long process (permitting, etc.) and very expensive.

Unfortunately, spectrum is a finite resource, and the opportunity for operators to choose option 1 or 2 is becoming scarce. Just five years ago, about 50% of congested cell sites were candidates for option 1, 30% were candidates for option 2, and only 20% required option 3. Five years from now, virtually no cell sites will be candidates for option 1, maybe 10% will be candidates for option 2, and roughly 90% will require the very time-consuming and costly option 3.

CUTO offers mobile operators an alternative, helping to optimize traffic and reduce congestion in cell sites, which can significantly reduce the number of sites requiring capacity upgrades. During a recent trial to measure the efficacy of CUTO in the real world and at scale, we deployed it with a Tier 1 operator. Almost immediately, we saw a 15% reduction in the number of cell sites triggering a capacity upgrade due to consistent congestion. Here are some real-world numbers showing how we calculated the savings based on that 15% reduction:

The operator in this use case has:

• ~40M subscribers

• 10,000 sites triggering a need for a capacity upgrade

• Annual data traffic growth rate of 25%

The assumptions we agreed to with the operator were:

• Blended Incremental CAPEX/Upgrade (options 1, 2, and 3) = $100K CAPEX

• Blended Incremental OPEX/Site/Year (options 1, 2, and 3) = $20K OPEX/year
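The full five-year model also compounds the 25% annual traffic growth, but a simplified first-year sketch using only the figures above gives a feel for the scale (this is an illustration, not the operator’s actual model):

```latex
% Simplified first-year estimate (illustrative; the published 5-year
% totals also compound the 25% annual traffic growth)
\begin{align*}
\text{Upgrades deferred} &= 15\% \times 10{,}000 \text{ sites} = 1{,}500 \text{ sites}\\
\text{CAPEX avoided} &= 1{,}500 \times \$100\text{K} = \$150\text{M}\\
\text{OPEX avoided over 5 yr} &= 1{,}500 \times \$20\text{K/yr} \times 5 = \$150\text{M}
\end{align*}
```

Compounding the trigger count at 25% per year over five years, and accruing OPEX for every deferred site-year, is what pushes the totals into the billions.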


In this real-world example, you can see that CUTO saves this operator over $1.8B of CAPEX and $837M of OPEX over 5 years. I like to think of that as “adult money.” I recognize that these numbers are enormous, and because of that, you may be skeptical. I was skeptical until I saw the results of the real-world trial for myself. I expect there to be plenty of FUD coming from the folks that are at risk of missing out on significant revenue because of your deployment of CUTO. Here’s my answer to those objections:

• Even if you cut these numbers in half or more, my guess is that you are looking at a material impact on your P&L.

• Don’t just take my word for it; see it firsthand. Ask your local Cisco Account Manager for a trial of CUTO and look at the savings based on your network, your mix of options 1, 2, and 3, and your costing models.

We all know that 5G will drive new use cases and the need for more bandwidth, and we know that video will only become more ubiquitous. Mobile operators will require more and more cell sites, and those sites will continue to fill up over time. Why not get ahead of the problem and see what sort of MTTME your organization is capable of when it embraces a highly innovative software strategy?

Saturday, 20 February 2021

Introduction to Terraform with ACI – Part 4


If you haven’t already seen the Introduction to Terraform post, please have a read through. This section will cover the Terraform Remote Backend using Terraform Cloud.

1. Introduction to Terraform

2. Terraform and ACI

3. Explanation of the Terraform configuration files

Code Example

https://github.com/conmurphy/intro-to-terraform-and-aci-remote-backend.git

For an explanation of the Terraform files, see the previous posts. The backend.tf file will be added in the current post.

Lab Infrastructure

You may already have your own ACI lab to follow along with; however, if you don’t, you might want to use the ACI Simulator in the DevNet Sandbox.

ACI Simulator AlwaysOn – V4

Terraform Backends

An important part of using Terraform is understanding where and how state is managed. In the first post, Terraform was installed on my laptop, and I ran the init, plan, and apply commands there. A state file (terraform.tfstate) was also created in the folder in which I ran the commands.


This is fine when learning and testing concepts; however, it does not typically work well in a shared or production environment. What happens if my colleagues also want to run these commands? Do they each have their own separate state file?

These questions can be answered with the concept of the Terraform Backend.

“A backend in Terraform determines how state is loaded and how an operation such as apply is executed. This abstraction enables non-local file state storage, remote execution, etc.


Here are some of the benefits of backends:

◉ Working in a team: Backends can store their state remotely and protect that state with locks to prevent corruption. Some backends such as Terraform Cloud even automatically store a history of all state revisions.

◉ Keeping sensitive information off disk: State is retrieved from backends on demand and only stored in memory. If you’re using a backend such as Amazon S3, the only location the state ever is persisted is in S3.

◉ Remote operations: For larger infrastructures or certain changes, terraform apply can take a long, long time. Some backends support remote operations which enable the operation to execute remotely. You can then turn off your computer and your operation will still complete. Paired with remote state storage and locking above, this also helps in team environments.”



As you can see from the Terraform documentation, there are many backend options to choose from.

In this post we’ll set up the Terraform Cloud remote backend.



We will use the same Terraform configuration files as we saw in the previous posts, with the addition of the “backend.tf” file. See the code examples above for a post explaining the various files.

For this example you will need to create a free account on the Terraform Cloud platform.

◉ Create a new organization and provide it a name


◉ Create a new CLI Driven workspace


◉ Once created, navigate to the “General” page under “Settings”


◉ Change the “Execution Mode” to “Local”


You have two options with Terraform Cloud:

◉ Remote Execution – Let Terraform Cloud maintain the state and run the plan and apply commands

◉ Local Execution – Let Terraform Cloud maintain the state but run the plan and apply commands on your local machine

In order to have Terraform Cloud run the commands, it will either need public access to the endpoints, or you will need to run an agent in your environment (similar to how Intersight Assist configures on-premises devices).

Agents are available as part of the Terraform Cloud business plan. For the purposes of this post Terraform Cloud will manage the state while we will run the commands locally.


◉ Navigate back to the production workspace and you should see that the queue and variables tabs have been removed.

◉ Copy the example Terraform code and update the backend.tf file (the Terraform files can be found in the Github repo above)
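As a sketch of what that file looks like (the organization and workspace names below are placeholders; substitute the ones you created earlier):

```hcl
# backend.tf -- Terraform Cloud remote backend.
# "my-org" and "production" are placeholder values.
terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "my-org"

    workspaces {
      name = "production"
    }
  }
}
```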


◉ Navigate to the Settings link at the top of the page and then API Tokens


◉ Create an authentication token
◉ Copy the token
◉ On your local machine create a file (if it doesn’t already exist) in the home directory with the name .terraformrc
◉ Add the credentials/token information that was just created for your organization. Here is an example:

CONMURPH:~$ cat ~/.terraformrc
credentials "app.terraform.io" {
  token = "<ENTER THE TOKEN HERE>"
}

◉ You should now have the example Terraform files from the GitHub repo above, an updated backend.tf file with your organization/workspace, and a .terraformrc file with the token to access this organization
◉ Navigate to the folder containing the example Terraform files and your backend.tf file
◉ Run the terraform init command. If everything is correct you should see the remote backend initialised and the ACI plugin installed


◉ Run the terraform plan and terraform apply commands to apply the configuration changes.
◉ Once complete, if the apply is successful have a look at your Terraform Cloud organization.
◉ In the States tab you should now see the first version of your state file. When you look through this file you’ll see it’s exactly the same as the one you previously had on your local machine; however, now it’s under the control of Terraform Cloud
◉ Finally, if you want to collaborate with your colleagues, you can all run the commands locally and have Terraform Cloud manage a single state file. (You may need to investigate state locking, depending on how you manage the environment.)


Source: cisco.com

Friday, 19 February 2021

Introduction to Terraform with ACI – Part 3


If you haven’t already seen the previous Introduction to Terraform posts, please have a read through. This “Part 3” will provide an explanation of the various configuration files you’ll see in the Terraform demo.

Introduction to Terraform

Terraform and ACI

Code Example

https://github.com/conmurphy/terraform-aci-testing/tree/master/terraform

Configuration Files

You could split out the Terraform backend config from the ACI config; however, in this demo it has been consolidated.


config-app.tf

The name “myWebsite” in this example refers to the Terraform instance name of the “aci_application_profile” resource. 

The Application Profile name that will be configured in ACI is “my_website“. 

When referencing one Terraform resource from another, use the Terraform instance name (i.e. “myWebsite“).
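To make the distinction concrete, here is a minimal sketch (the resource arguments follow the typical ACI provider schema, but check your provider version; the EPG and tenant names are illustrative):

```hcl
# Terraform instance name: "myWebsite"; ACI object name: "my_website"
resource "aci_application_profile" "myWebsite" {
  tenant_dn = aci_tenant.tenant_01.id
  name      = "my_website"
}

# Other resources reference the Terraform instance name,
# not the ACI object name "my_website"
resource "aci_application_epg" "web" {
  application_profile_dn = aci_application_profile.myWebsite.id
  name                   = "web"
}
```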


config.tf

Only the key (the name of the Terraform state file) has been statically configured in the S3 backend configuration. The bucket, region, access key, and secret key are passed as command line arguments when running the “terraform init” command. See the Terraform documentation for more detail on the various options for setting these arguments.
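A partial configuration of this kind might look like the following sketch (the init arguments in the comment are examples, not values from this demo):

```hcl
# config.tf -- S3 backend with only the state file name set statically
terraform {
  backend "s3" {
    key = "terraform.tfstate"
    # bucket, region, access_key, and secret_key are supplied when
    # initialising, for example:
    #   terraform init \
    #     -backend-config="bucket=my-state-bucket" \
    #     -backend-config="region=eu-west-1"
  }
}
```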



terraform.tfvars


variables.tf

We need to define the variables that Terraform will use in the configuration. Here are the options to provide values for these variables:

◉ Provide a default value in the variable definition below
◉ Configure the “terraform.tfvars” file with default values as previously shown
◉ Provide the variable values as part of the command line input

$ terraform apply -var 'tenant_name=tenant-01'

◉ Use environment variables starting with “TF_VAR_”

$ export TF_VAR_tenant_name=tenant-01

◉ Provide no default value in which case Terraform will prompt for an input when a plan or apply command runs
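Pulling the first two options together, a variable definition and its tfvars override might look like this (the values shown are illustrative):

```hcl
# variables.tf -- option 1: a default value in the definition.
# Omit "default" entirely for the last option above
# (Terraform will prompt for input at plan/apply time).
variable "tenant_name" {
  description = "Name of the ACI tenant"
  type        = string
  default     = "tenant-01"
}

# terraform.tfvars -- option 2: values set in a tfvars file take
# precedence over the default above.
# tenant_name = "my-other-tenant"
```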


versions.tf


Source: cisco.com

Thursday, 18 February 2021

Win with Cisco ACI and F5 BIG-IP – Deployment Best Practices


Application environments have different and unique needs for how traffic is to be handled. Some applications, due to the nature of their functionality or perhaps a business need, require that the application server(s) be able to see the real IP address of the client making the request to the application.

Now, when the request comes to the F5 BIG-IP, it has the option to change the real IP address of the request or to keep it intact. To keep it intact, the ‘Source Address Translation’ setting on the F5 BIG-IP is set to ‘None’.

As simple as it may sound to just toggle a setting on the F5 BIG-IP, a change to this setting causes a significant change in traffic flow behavior.

Let us take an example with some actual values, starting with a simple setup of a standalone F5 BIG-IP with one interface for all traffic (one-arm):

◉ Client – 10.168.56.30

◉ BIG-IP Virtual IP – 10.168.57.11

◉ BIG-IP Self IP – 10.168.57.10

◉ Server – 192.168.56.30

Scenario 1: With SNAT

From Client: Src: 10.168.56.30, Dest: 10.168.57.11

From BIG-IP to Server: Src: 10.168.57.10 (Self-IP), Dest: 192.168.56.30

In the above scenario, the server will respond to 10.168.57.10, and the F5 BIG-IP will take care of forwarding the traffic back to the client. Here, the application server has visibility of the Self-IP 10.168.57.10, not the client IP.

Scenario 2: No SNAT

From Client: Src: 10.168.56.30, Dest: 10.168.57.11

From BIG-IP to Server: Src: 10.168.56.30, Dest: 192.168.56.30

In this scenario, the server will respond to 10.168.56.30, and here is where the complication comes in, as the return traffic needs to go back to the F5 and not the real client. One way to achieve this is to set the default gateway of the server to the Self-IP of the BIG-IP, so that the server sends the return traffic to the BIG-IP. But what if the server’s default gateway cannot be changed for whatever reason? Policy-based redirect (PBR) helps here. The default gateway of the server will point to the ACI fabric, and the ACI fabric will intercept the traffic and send it over to the BIG-IP.

With this, the advantages of using PBR are threefold:

◉ The server(s) default gateway does not need to point to F5 BIG-IP, but can point to the ACI fabric

◉ The real client IP is preserved for the entire traffic flow

◉ Server-originated traffic avoids hitting the BIG-IP, which would otherwise require configuring a forwarding virtual server on the BIG-IP to handle that traffic. If the volume of server-originated traffic is high, it could place unnecessary load on the F5 BIG-IP

Before we get deeper into the topic of PBR, below are a few links to help you refresh some of the Cisco ACI and F5 BIG-IP concepts:

◉ Cisco ACI fundamentals

◉ SNAT and Automap

◉ F5 BIG-IP modes of deployment

Now let us look at what it takes to configure PBR using a Standalone F5 BIG-IP Virtual Edition in One-Arm mode.


To use the PBR feature on the APIC, a service graph is a must.



Configuration on APIC


1) Bridge domain ‘F5-BD’

◉ Under Tenant->Networking->Bridge domains->’F5-BD’->Policy
◉ IP Data plane learning – Disabled

2) L4-L7 Policy-Based Redirect

◉ Under Tenant->Policies->Protocol->L4-L7 Policy based redirect, create a new one
◉ Name: ‘bigip-pbr-policy’
◉ L3 destinations: F5 BIG-IP Self-IP and MAC
◉ IP: 10.168.57.10
◉ MAC: Find the MAC of interface the above Self-IP is assigned from logging into the F5 BIG-IP (example: 00:50:56:AC:D2:81)


3) Logical Device Cluster- Under Tenant->Services->L4-L7, create a logical device

◉ Managed – unchecked
◉ Name: ‘pbr-demo-bigip-ve’
◉ Service Type: ADC
◉ Device Type: Virtual (in this example)
◉ VMM domain (choose the appropriate VMM domain)
◉ Devices: Add the F5 BIG-IP VM from the dropdown and assign it an interface
◉ Name: ‘1_1’, VNIC: ‘Network Adaptor 2’
◉ Cluster interfaces
◉ Name: consumer, Concrete interface Device1/[1_1]
◉ Name: provider, Concrete interface: Device1/[1_1]


4) Service graph template

◉ Under Tenant->Services->L4-L7->Service graph templates, create a service graph template
◉ Give the graph a name: ‘pbr-demo-sgt’, then drag and drop the logical device cluster (pbr-demo-bigip-ve) to create the service graph
◉ ADC: one-arm
◉ Route redirect: true


5) Click on the service graph created, then go to the Policy tab and make sure the connections for connectors C1 and C2 are set as follows:

◉ Direct connect – True
◉ Adjacency type – L3


6) Apply the service graph template

◉ Right click on the service graph and apply the service graph
◉ Choose the appropriate consumer endpoint group (‘App’) and provider endpoint group (‘Web’), and provide a name for the new contract
◉ For the connector
◉ BD: ‘F5-BD’
◉ L3 destination – checked
◉ Redirect policy – ‘bigip-pbr-policy’
◉ Cluster interface – ‘provider’

Once the service graph is deployed, it is in the applied state, and the network path between the consumer, F5 BIG-IP, and provider has been successfully set up on the APIC.

Configuration on BIG-IP


1) VLAN/Self-IP/Default route

◉ Default route – 10.168.57.1
◉ Self-IP – 10.168.57.10
◉ VLAN – 4094 (untagged) – for a VE, the tagging is taken care of by vCenter

2) Nodes/Pool/VIP

◉ VIP – 10.168.57.11
◉ Source address translation on VIP: None

3) iRule (end of the article) that can be helpful for debugging

A few differences in configuration apply when the BIG-IP is a Virtual Edition set up in a high-availability pair:



2) APIC: Logical device cluster

◉ Promiscuous mode – enabled

◉ Add both BIG-IP devices as part of the cluster


3) APIC: L4-L7 Policy-Based Redirect

◉ L3 destinations: Enter the Floating BIG-IP Self-IP and MAC masquerade

Configuration is complete; let’s look at the traffic flows.

Client-> F5 BIG-IP -> Server


Server-> F5 BIG-IP -> Client


In step 2, when the traffic is returned from the server, ACI uses the Self-IP and MAC that were defined in the L4-L7 redirect policy to send the traffic to the BIG-IP.


iRule to help with debugging on the BIG-IP


Output seen in /var/log/ltm on the BIG-IP; look at the <SERVER_CONNECTED> event

Scenario 1: No SNAT -> Client IP is preserved



If you are curious about the iRule output when SNAT is enabled on the BIG-IP, enable AutoMap on the virtual server on the BIG-IP.

Scenario 2: With SNAT -> Client IP not preserved.


Tuesday, 16 February 2021

For Banks – The Contact Center is Your Best Friend


For years, the album that sold the most units was Carole King’s “Tapestry”; estimates are that this record has sold more than 25 million copies. The album is rife with well-known songs, and one of the initial reviewers in 1971 called the song “You’ve Got a Friend” the “core” and “essence” of the album. It didn’t hurt that James Taylor’s version also became a monster hit. Banks, too, have a friend: their contact centers.

The malls emptied, and the contact centers filled up

The last twelve months have initiated a renaissance in contact center operations. While the modernization of contact centers had been on a steady march, the realities of 2020 suddenly presented a giant forcing function, changing the customer engagement landscape in dramatic fashion. In one fell swoop, 36 months of planned investment in modernizing contact centers was compressed into a single 12-month period. As the physical world shut down, the digital world ramped up dramatically. Banks saw branch visits slow to a crawl, while digital and contact center interactions increased by orders of magnitude. In addition, up to 90% of contact center agents were sent home to work, and estimates are that a majority of them will stay there over time, as indicated by this Saddletree Research analysis:


Prior planning prevented poor performance


Fortunately, banks and credit unions were among the key vertical markets relatively prepared for 2020 and were able to lean into the challenges presented, though this is not to say things went perfectly. What was behind this preparation, and what were these organizations doing before and during the crisis? And what should they do in the years ahead?

The “Digital Pivot” paid huge dividends


At their core, banks and credit unions collect deposits and loan them out at (hopefully) a profit. With money viewed as a commodity, financial services firms were one of the first industries to understand that the only two sustainable differentiators they possessed were the customer experience they delivered and their people. Interestingly, these are the two main ingredients of a contact center!

For many banks prior to 2010, the biggest challenge for contact center operations was navigating mergers and acquisitions when combining operations. Normalizing operations during mergers often manifested itself in giant IVR farms meant to absorb large amounts of voice traffic. Prior self-service form factors were not known as “low-effort” propositions, and customer experience scores suffered for years. Banks as an aggregate industry dropped below all-industry averages for customer experience, after leading for years.

The mobile revolution presented a giant reset for banking customer experience. Financial institutions by and large have done an excellent job of adopting mobile applications to the delight of their customers. In response, customer experience scores in banking have steadily risen the past 10 years, and banks are near the top quartile again, only trailing consumer electronics firms and various retailers.

Banks are more like a contact center than you think


Banks and contact centers have very common characteristics. Both wrap themselves in consumer-friendly self-service applications which automate formerly manual processes that required human assistance. These include popular customer engagement platforms such as mobile applications and ATMs. In the contact center this dynamic involves speech recognition, voice biometrics, and intelligent messaging.

As self-service has become increasingly popular, the live interactions left over for both the branch and the contact center have become more complex, more difficult to solve on the first try, and more likely to require collaborative, cross-business resolution by the individual servicing the customer. These types of interactions are known as “outliers”. In this situation the contact center becomes, in essence, a “digital backstop”, where the consumer interacts with self-service first, and then and only then seeks live assistance.

Prior planning prevents poor performance part II


The digital tsunami started in 2010 with the mass adoption of mobile applications by banks, giving this industry in particular a significant head start on the “outlier” dynamic. Therefore, in 2020, when the shopping malls emptied out and contact centers filled up, banks had already been operating tacitly in the “outlier” model for a number of years and were in a better position to succeed. Applications such as intelligent callback, integrated consumer messaging, work-at-home agents, voice biometrics, A.I.-driven intelligent chatbots, and seamless channel shift from mobile applications to the contact center were already in place to some extent for leading financial institutions.

Thinking ahead


With much of the focus on the contact center, automation in banking has been able to extend A.I. into the initial stages of customer contact. The road ahead will include wrapping A.I.-driven intelligence around contact center resources during an interaction, essentially creating a new category of resources known as “Super Agents”. In this environment, all agents can in theory perform like the best agents, because learnings from the best performers are automatically applied throughout the workforce. In addition, Intelligent Virtual Assistants, or IVAs, will act as “digital twins” for contact center agents, automatically looking for preemptive answers to customers’ questions and automating contact transcripts, after-call work documentation, and follow-up.

Yes, if you’re a bank, you have a friend in your contact center


Banks made the pivot to delivering better customer experience in their contact centers during the “Digital Pivot” of the early 2010s. From there, banks made steady progress in reclaiming their CX leadership and delivering excellent customer experiences. The realities of 2020 accelerated contact center investment by at least 36 months into a 12-month window. Banks that had established leadership used this forcing function to accelerate a next generation of customer differentiators, firmly entrenching themselves as category leaders in the financial services industry. Other institutions can use these unique times to play rapid catch-up. Who benefits? Their customers.

Source: cisco.com

Friday, 12 February 2021

Cloud-based Solutions can Empower Financial Services Companies to Adapt While Cutting Costs


IT professionals in financial services have been instrumental to ensuring the integrity of global financial markets over the last year. Their hard work has helped keep the world’s largest economies working and financial aid flowing to those who need it most.

For them, few things remain unchanged from the pre-COVID world. Many network engineers had their hands full supporting large-scale migrations to remote working. One constant during this time of change, however, is that IT budgets are not increasing. “Do more with less,” “reduce costs,” and “extract more value” are common mantras. The message is clear: each dollar spent on IT projects must have a tangible business benefit associated with it. With this increased focus on efficiency and cost, now is the perfect time for financial services companies to consider investing in cloud-based IT.

Benefits of cloud-based IT

Migrating IT infrastructure to a cloud-based platform can help finserv companies improve efficiency and reduce costs by accelerating business processes, simplifying technology, and boosting operational efficiency. Today’s reality has forced businesses to rethink how to help their employees collaborate safely from remote locations as they begin the return to work. By leveraging cloud-based solutions, workers and IT support teams can troubleshoot issues more quickly, reduce downtime, and lower costs for both employees and end-customers.

Supporting rapid change 

Before COVID, financial services companies were embarking on their cloud journeys in pockets, focused primarily on software development environments and on providing staff with secure connectivity. The rapid changes required for companies to function during the early days of the pandemic necessitated quick adoption of cloud-based technologies for enterprise voice, contact centers, remote access, and network security. Projects that would have taken weeks or months were being done in hours or days, driven by the need to get lines of business operational and keep companies viable. Now that the industry has dealt with the crises of 2020 and has been operating in the new normal for several months, a few trends have emerged that will drive IT decisions going forward, including preparing for the return to work and facilitating future growth.

Preparing for return to work

While bank branches never closed, most campuses and offices did. Optimistic news around vaccine development and distribution has led many companies to prepare for the return to work and reconsider the landscape for the office environment.  

For example, adding cameras could help ensure compliance with mask and social distancing policies. Access sensors could help track room occupancy and ensure timely, consistent sanitation practices. In a traditional environment, implementing such practices could take up to a year. However, cloud-managed networking makes it possible to configure a network and add components to it without configuring each device individually, so the accelerated timelines required for the return to work can still be met.

Scaling for the future

Most companies deal with mergers and acquisitions, but for financial services companies growth is typically purchased. Network teams are not revenue generators, so mergers have historically been underfunded and understaffed. The inevitable outcome of years or decades of that reality is a patchwork quilt of networks that are only loosely connected. Each legacy organization retains idiosyncrasies, issues, and non-standard hardware that require specialized support personnel. That complexity leads to lower velocity than lines of business have come to expect during the pandemic. Because it provides everything needed to deploy a branch, campus, or office network, cloud adoption capitalizes on the appetite for speed that company departments have developed. This underscores the critical need for simplicity as financial services companies scale for future growth.

All in all, the events of 2020 have been a catalyst for change and digital transformation within the financial services sector. Cisco Meraki offers solutions to address the challenges that come with such abrupt changes, including managing campus and client networks, creating operational efficiencies, and reducing downtime and revenue loss.

Source: cisco.com

Thursday, 11 February 2021

Cisco introduces Fastlane+ with advanced multi-user scheduling to revolutionize real-time application experience

Cisco Tutorial and Material, Cisco Learning, Cisco Guides, Cisco Learning, Cisco Certification, Cisco Preparation

Cisco and Apple continue to work together to deliver better experiences for customers through collaboration and co-development. Our latest project, Fastlane+, builds on the popular Fastlane feature by adding Advanced Scheduling Request to take QoS management a step further by scheduling and carving out airtime for voice and video traffic on Wi-Fi 6 capable iPhone and iPad devices. This facilitates a superior experience with latency-sensitive collaboration applications such as WebEx and FaceTime.

What is Fastlane+, and why do we need it?

First and foremost, let’s look at the motivation behind Fastlane+. The 802.11ax standard introduced OFDMA and MU-MIMO as uplink transmission modes to allow scheduled-access-based uplink transmissions. This lets the access point (AP) dynamically schedule uplink OFDMA or MU-MIMO based on the client’s uplink traffic type and queue depth. The decision is made on a per-Access Category basis at the start of every transmit opportunity (TXOP), with OFDMA used for latency-centric, low-bandwidth applications and MU-MIMO used when higher bandwidth is required.
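To make that per-TXOP decision concrete, here is a minimal sketch in Python. The names, the queue-depth threshold, and the overall structure are hypothetical illustrations; the real scheduler runs inside AP firmware and weighs many more inputs than these two.

```python
from dataclasses import dataclass

@dataclass
class AccessCategoryState:
    name: str                 # e.g. "AC_VO" (voice) or "AC_BE" (best effort)
    latency_sensitive: bool   # does this traffic class need low latency?
    queue_depth_bytes: int    # reported uplink buffer occupancy

def pick_uplink_mode(ac: AccessCategoryState,
                     high_bandwidth_threshold: int = 64_000) -> str:
    """Choose the uplink transmission mode at the start of a TXOP.

    Mirrors the rule described above: OFDMA for latency-centric,
    low-bandwidth traffic; MU-MIMO when more bandwidth is needed.
    (Threshold value is invented for illustration.)
    """
    if ac.latency_sensitive and ac.queue_depth_bytes < high_bandwidth_threshold:
        return "UL_OFDMA"
    return "UL_MU_MIMO"

print(pick_uplink_mode(AccessCategoryState("AC_VO", True, 2_000)))     # small voice queue -> UL_OFDMA
print(pick_uplink_mode(AccessCategoryState("AC_BE", False, 500_000)))  # bulk upload -> UL_MU_MIMO
```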

In standard 802.11ax operation, the AP learns a client’s uplink buffer status using a periodic trigger mechanism known as the Buffer Status Report Poll (BSRP). However, client devices may not be able to communicate their buffer status to the AP in a timely manner because of MU-EDCA channel access restrictions and possible scheduling delays in dense environments. Additionally, the AP may not always be able to allocate resource units adequate to the application’s requirements. For these reasons, a better approximation of the uplink buffer status is critical for efficient uplink scheduling.

Next, let’s compare the 802.11ax standards-based approaches for uplink scheduling: UL OFDMA and Target Wake Time (TWT). As highlighted in the chart below, with UL OFDMA the AP has absolute control over uplink scheduling, while with TWT the client can pre-negotiate TWT service periods. A compromise between the AP and the client is therefore needed to improve uplink scheduling efficiency in a dense RF environment with latency-sensitive traffic.

[Chart: comparison of UL OFDMA and TWT uplink scheduling]

Fastlane+ is designed to better approximate the client’s buffer status based on application requirements indicated by the client. This estimation policy significantly reduces BSRP polling overhead compared with default BSR-based UL OFDMA scheduling. Along with obtaining key parameters for active voice and video sessions to improve uplink scheduling efficiency, Fastlane+ also solicits periodic scheduling feedback from the clients.

In a nutshell, Fastlane+ enhances the user experience for latency-sensitive voice and video applications in a high-density user environment by improving the effectiveness of estimating the uplink buffer status for the supported 802.11ax clients.

Key considerations for Fastlane+


Fastlane+ is initiated for latency-sensitive voice and video applications, such as WebEx and FaceTime, whose traffic characteristics can be better approximated. The AP indicates Fastlane+ support in the DEO IE, and clients provide Advanced Scheduling Request (ASR)-specific information, including ASR capability, ASR session parameters, and ASR statistics. This information is sent using vendor-specific Action frames protected with PMF (Protected Management Frames).

Latency becomes a concern only when there is enough contention on the medium due to high channel utilization. Consequently, Fastlane+-based uplink TXOPs are allocated only when channel utilization exceeds 50%.
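That gating rule can be sketched as a simple predicate. This is a hypothetical illustration of the stated behavior; the AP’s actual trigger logic is not published, and the function and parameter names are invented.

```python
CHANNEL_UTILIZATION_GATE = 0.50  # per the text: only above 50% utilization

def should_allocate_asr_txop(channel_utilization: float,
                             asr_session_active: bool) -> bool:
    """Allocate a Fastlane+ (ASR) uplink TXOP only when the medium is busy
    enough for contention, and hence latency, to actually be a problem."""
    return asr_session_active and channel_utilization > CHANNEL_UTILIZATION_GATE

print(should_allocate_asr_txop(0.30, True))   # quiet channel: no special TXOP
print(should_allocate_asr_txop(0.72, True))   # congested channel: allocate
print(should_allocate_asr_txop(0.72, False))  # no ASR session: nothing to protect
```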

System overview for Fastlane+


The diagram below shows a bird’s-eye view of an end-to-end system supporting Fastlane+. Fastlane+-specific configuration can be managed from the controller’s GUI and CLI. Uplink latency statistics provided by the clients to the AP are also displayed on the controller. These latency statistics are reported on a per-client basis, with or without an active ASR session.

[Diagram: end-to-end system overview for Fastlane+]

Fastlane+ benefits:


To better understand the benefits of Fastlane+, let’s first define the key performance indicators of a typical voice and video application. Mean opinion score (MOS) is a standard measure of quality of experience for voice applications. It is quantified on a scale of 1 to 5, with 5 the highest and 1 the lowest. To put things in perspective, 3.5 is the minimum for service-provider-grade quality.
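The article does not say how MOS is derived here, but one standard way to compute it is the ITU-T G.107 E-model, which maps a transmission rating factor R (degraded by loss, delay, and jitter) onto the 1–5 MOS scale. The sketch below applies that published formula purely to make the scale concrete; it is not taken from the Fastlane+ test methodology.

```python
def r_to_mos(r: float) -> float:
    """Map an E-model transmission rating factor R to estimated MOS
    (ITU-T G.107). R <= 0 clamps to MOS 1.0; R >= 100 clamps to 4.5."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1.0 + 0.035 * r + r * (r - 60) * (100 - r) * 7e-6

# An unimpaired narrowband call tops out near R = 93 (MOS ~4.4); heavy
# jitter and loss push R, and therefore MOS, down.
print(round(r_to_mos(80), 2))  # ~4.02, comfortably above the 3.5 carrier-grade floor
print(round(r_to_mos(50), 2))  # a call most listeners would rate as poor
```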

For measuring video quality, we use the delay factor. This metric reflects the jitter buffer size needed to eliminate video interruptions due to network jitter. The lower the delay factor (in milliseconds), the better the video quality.
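If the delay factor here follows the Media Delivery Index (RFC 4445), it is computed by tracking a virtual buffer that fills on each packet arrival and drains at the nominal media rate; the buffer’s swing, converted to time, is the DF. The sketch below is an assumption based on that definition, not code from the article.

```python
def delay_factor_ms(arrival_times_s, packet_bytes, media_rate_bps):
    """MDI delay factor (per RFC 4445): swing of a virtual buffer that
    fills with each arriving packet and drains at the nominal media rate,
    expressed as milliseconds of buffering needed to absorb the jitter."""
    rate_bytes_per_s = media_rate_bps / 8
    vb = vb_min = vb_max = 0.0
    prev_t = arrival_times_s[0]
    for t, nbytes in zip(arrival_times_s, packet_bytes):
        vb -= rate_bytes_per_s * (t - prev_t)  # drain since the last arrival
        vb_min = min(vb_min, vb)
        vb += nbytes                           # packet arrives, buffer fills
        vb_max = max(vb_max, vb)
        prev_t = t
    return (vb_max - vb_min) / rate_bytes_per_s * 1000.0

# Perfectly paced 1 Mbps stream of 1250-byte packets every 10 ms:
print(delay_factor_ms([0.0, 0.01, 0.02, 0.03], [1250] * 4, 1_000_000))  # 10.0 ms
# Same stream with one packet arriving 10 ms late: the swing doubles.
print(delay_factor_ms([0.0, 0.01, 0.03, 0.04], [1250] * 4, 1_000_000))  # 20.0 ms
```

For the evenly paced stream the buffer swing is exactly one packet, so DF equals one packet time; any jitter widens the swing and raises DF, which is why a lower DF means a smoother video experience.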


Test considerations:


The results below are from a typical collaboration application, with simulation tests performed in a controlled RF environment under high channel utilization. Sixteen Wi-Fi 6 capable iPhones on an 80 MHz channel were used.

[Charts: voice and video quality test results]

Adios to choppy voice and video calls


With Fastlane+, you get a better Wi-Fi experience when collaborating with friends and colleagues. It doesn’t matter whether you are in a highly congested RF environment such as a school, office, high-density housing, shopping mall, airport, or stadium; Fastlane+ has you covered. So, when we’re all ready to come back, the network will be ready and waiting.

Fastlane+ is enabled by default on 802.11ax capable iPhone and iPad devices running iOS 14 or later. On the infrastructure side, it is currently supported on the Cisco Catalyst 9130 Access Point. On AireOS WLC platforms, the 8.10 MR4 (8.10.142.0) release supports the feature via the CLI. On Catalyst 9800 Series WLC platforms, the 17.4.1 release provides CLI and GUI (client data monitoring) support; a GUI configuration tab will arrive in later releases. Please note that the Fastlane+ feature is listed as “Advanced Scheduling Request” in the CLI and GUI.