Sunday 9 May 2021

Cisco introduces Dynamic Ingress Rate Limiting – A Real Solution for SAN Congestion

I’m sure we all agree that Information Technology (IT) and acronyms have been strongly tied together since the beginning. Considering that the number of Three Letter Acronyms (TLAs) we can build is limited and now exhausted, it comes as no surprise that FLAs are the new trend. You already guessed that FLA means Four Letter Acronym, right? But maybe you don’t know that the ancient Romans loved four letter acronyms and created some famous ones: S.P.Q.R. and I.N.R.I. As a technology innovator, Cisco is also a big contributor of new acronyms, and I’m pleased to share the latest one I heard: DIRL. Pronounce it the way you like.

Please welcome DIRL

DIRL stands for Dynamic Ingress Rate Limiting. It is a powerful new capability in the recently posted NX-OS 8.5(1) release for MDS 9000 Fibre Channel switches. DIRL adds to the long list of features that fall into the bucket of SAN congestion avoidance and slow drain mitigation. Over the years, a number of solutions have been proposed and implemented to counteract these negative occurrences on Fibre Channel networks. No single solution is perfect, otherwise there would be no need for a second one. In reality, every solution is best at tackling a specific situation, where it offers a better compromise, while possibly being suboptimal in others. Having options to choose from is a good thing.

DIRL represents the newest and shiniest arrow in the quiver of the professional MDS 9000 storage network administrator. It complements existing technologies like Virtual Output Queues (VOQs), congestion drop, no-credit drop, slow device congestion isolation (quarantine) and recovery, port guard, and shutdown of offending devices. Most of the existing mitigation mechanisms are quite severe, and because of that they are not widely implemented. DIRL is a great new addition to the list of possible mitigation techniques because it makes sure only the misbehaving device is impacted, without removing it from the network. The other devices sharing the same network are not impacted in any way and will enjoy a happy life. With DIRL, the data rate is measured and rate limiting is applied incrementally, so that the level of ingress rate limiting matches the behavior of the device causing congestion. Getting guidance from experts on which mitigation technique to use remains a best practice, of course, but DIRL seems best for long-lasting slow drain and overutilization conditions, localizing the impact to a single end device.


The main idea behind DIRL


DIRL would deserve a very detailed explanation, but I’m not sure you would keep reading. So I will just draw attention to the main concept, which is actually quite simple and can be considered the proverbial egg of Columbus. Congestion in a fabric manifests when there is more data to deliver to a device than the device itself is able to absorb. This can happen under two specific circumstances: the slow drain condition and the solicited microburst condition (a.k.a. overutilization). In the SCSI/NVMe protocols, data does not move without being requested to move. Devices solicit the data they will receive via SCSI/NVMe Read commands (for hosts) or XFR_RDY frames (for targets). So reducing the number of data solicitation commands and frames will in turn reduce the amount of data being transmitted back to that device. This reduction will be enough to reduce or even eliminate the congestion. This is the basic idea behind DIRL. Simple, right?

The real implementation is a clever combination of hardware and software features. It makes use of the static ingress rate limiting capability that has been available for years on MDS 9000 ASICs and combines it with the enhanced version of the Port Monitor feature, which detects when specific counters (Tx-datarate, Tx-datarate-burst and TxWait) cross an upper or lower threshold. Add a smart algorithm on top, part of the Fabric Performance Monitor (FPM), and you get a dynamic approach to ingress rate limiting (DIRL).

DIRL is an innovative slow drain and overutilization mitigation mechanism that is both minimally disruptive to the end device and non-impactful to other devices in the SAN. The DIRL solution reduces the inbound SCSI Read commands (coming from hosts) or XFR_RDY frames (coming from targets) so that the solicited egress data from the switch is in turn reduced. This brings the data being solicited by any specific device more in line with its capability to receive it, which in turn eliminates the congestion that such devices cause in the SAN. With DIRL, MDS 9000 switches can now rate limit an interface from about 0.01% to 100% of its maximum utilization and periodically adjust the interface rate dynamically up or down, based on detected congestion conditions. Too cool to be true? Well, there is more. Despite being unique and quite powerful, the feature is provided at no extra cost on Cisco MDS 9000 gear. No license is needed.

Why DIRL is so great


By default, when enabled, DIRL will only act on hosts. Having it operate on targets is also allowed, but considering that most congestion problems originate on hosts/initiators, and that a single target port can impact multiple hosts, this must be explicitly configured by the administrator. With DIRL, no Fibre Channel frames are dropped and no change is needed on hosts or targets. It is compatible with hosts and targets from any vendor and of any generation. It’s agentless. It is a fabric-centric approach, controlled and governed entirely by the embedded intelligence of MDS 9000 devices. Customers may choose to enable DIRL on single-switch deployments or large fabrics; even in multiswitch deployments DIRL can be deployed on only a single switch if desired (for example, for evaluation). DIRL operates in the lower layers of the Fibre Channel stack, which makes it suitable for both the SCSI and NVMe protocols.


DIRL makes dynamic changes to the ingress rate of frames on specific ports, and it operates on two different time scales. It is reasonably fast (seconds) to reduce the rate, in 50% steps, and slower (minutes) to increase it again, in 25% steps. The timers, thresholds and steps have been carefully selected and optimized based on real networks where the feature was initially tested and trialed with support from the Cisco engineering team. But they are also configurable to meet different user requirements.

The detailed timeline of operation for DIRL


To explain the behavior, let’s consider an MDS 9000 port connected to a host. The host mostly sends SCSI Read Commands and gets data back, potentially in big quantities. On the MDS 9000 port, that would be reflected as a low ingress throughput and a high egress throughput. For this example, let’s say that Port Monitor has its TxWait counter configured with the two thresholds as follows:

Rising-threshold – 30% – This is the upper threshold and defines where the MDS 9000 takes action to decrease the ingress rate.

Falling-threshold – 10% – This is the lower threshold and defines where the MDS 9000 gradually recovers the port by increasing the ingress rate.

The stasis area sits between these two thresholds; within it, DIRL makes no changes and leaves the current ingress rate limit as-is.

Crossing the TxWait rising-threshold indicates the host is slow to return R_RDY primitives and that frames are waiting in the MDS 9000 output buffers longer than usual. This implies the host is becoming unable to handle the full data flow directed at it.


The timeline of events in the diagram is like this:

T0: normal operation

T1: TxWait threshold crossed, start action to reduce ingress rate by 50%

T1+1 sec: ingress rate reduced by 50%, egress rate also reduced by some amount, TxWait counter still above threshold

T1+2 sec: ingress rate reduced by another 50%, egress rate also reduced by some amount, TxWait counter now down to zero, congestion eliminated

T5: Port has had no congestion indications for the recovery interval. Ingress rate increased by 10%, egress rate increases as well by some amount, TxWait still zero

T5 + 1 min: Port has had no congestion indications for the recovery interval. Ingress rate increased by another 10%, egress rate increases as well by some amount, TxWait still zero

T5 + 2 min: Port has had no congestion indications for the recovery interval. Ingress rate increased by another 10%, egress rate increases as well by some amount, TxWait jumps to 8% but remains below the falling-threshold.

T5 + 3 min: ingress rate increased by another 10%, egress rate increases as well by some amount, TxWait jumps to 20%. This puts TxWait between the two thresholds. At this point recovery stops and the current ingress rate limit is maintained.

T5 + 4 min: ingress rate was not changed, egress rate experienced a minor change, but TxWait jumped above the upper threshold; congestion is back and remediation action starts again.
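The adjustment behavior in this timeline can be sketched as a simple control loop. To be clear, this is an illustrative model, not Cisco’s implementation: the function name and step sizes are assumptions chosen to match the example above (50% reductions, 10-point recovery increments), and the real thresholds, timers and steps are configurable as noted earlier.

```python
def dirl_adjust(txwait_pct, limit_pct,
                rising=30.0, falling=10.0,
                drop_factor=0.5, recover_step=10.0):
    """One DIRL-style evaluation: return the new ingress rate limit (percent).

    - TxWait at or above the rising threshold: cut the ingress limit (fast, seconds).
    - TxWait below the falling threshold: raise the limit gradually (slow, minutes).
    - TxWait between the thresholds (the stasis area): leave the limit unchanged.
    The floor of 0.01 mirrors the 0.01%-100% limiting range mentioned above.
    """
    if txwait_pct >= rising:
        return max(0.01, limit_pct * drop_factor)    # e.g. 100 -> 50 -> 25
    if txwait_pct < falling:
        return min(100.0, limit_pct + recover_step)  # e.g. 25 -> 35 -> 45
    return limit_pct                                 # stasis: no change
```

Replaying the timeline: two evaluations above the rising threshold take the limit from 100% to 25%; once TxWait drops to zero, each recovery interval adds 10 points until TxWait lands between the two thresholds and the limit holds steady.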

Source: cisco.com

Thursday 6 May 2021

Native or Open-source Data Models? Use Both for Software-Defined Enterprise Networks.


Enterprise IT administrators with hundreds, thousands, or even more networking devices and services to manage are turning to programmable, automated deployment, provisioning, and management to scale their operations without having to scale their costs. Using structured data models and programmable interfaces that talk directly to devices―bypassing the command line interface (CLI)―is becoming an integral part of software-defined networking.

As part of this transformation, operators must choose:

◉ A network management protocol (e.g., NETCONF or RESTCONF) that is fully supported in Cisco IOS XE 

◉ Data models for configuration and management of devices 

One path being explored by many operators is to use the gRPC Network Management Interface (gNMI), built on Google Remote Procedure Calls (gRPC), as the network management protocol. But which data models can be used with gNMI? Originally it didn’t support the use of anything other than OpenConfig models. Now that it does, many developers might not realize that and think that choosing gNMI will result in limited data model options. Cisco IOS XE has been upgraded to fully support both our native models and OpenConfig models when gNMI is the network management protocol.  

Here’s a look at how Cisco IOS XE supports both OpenConfig and native YANG data models with the “mixed schema” approach made possible with gNMI.  

Vendor-neutral and Native Data Models 

Cisco has defined data models that are native to all enterprise gear running Cisco IOS XE and other Cisco IOS versions for data center and wireless environments. Other vendors have similar native data models for their equipment. The IETF has its own guidelines based on YANG data models for network topologies. So too does a consortium of service providers led by Google, whose OpenConfig has defined standardized, vendor-neutral data modeling schemas based on the operational use cases and requirements of multiple network operators.   

The decision of what data models to use in the programmable enterprise network typically comes down to a choice between using vendor-neutral models or models native to a particular device manufacturer. But rather than forcing operators to choose between vendor-neutral and native options, Cisco IOS XE with gNMI offers an alternative, “mixed-schema” approach that can be used when enterprises migrate to model-driven programmability. 

Pros and Cons of Different Models 

The native configuration data models provided by hardware and software vendors support networking features that are specific to a vendor or a platform. Native models may also provide access to new features and data before the IETF and OpenConfig models are updated to support them. 

Vendor neutral models define common attributes that should work across all vendors. However, the current reality is that despite the growing scope of these open, vendor-neutral models, many network operators still struggle to achieve complete coverage for all their configuration and operational data requirements.  

At Cisco we have our own native data models that encompass our rich feature sets for all devices supported by IOS XE. Within every feature are most, if not all, of the attributes defined by IETF and OpenConfig, plus extra features that our customers find useful and haven’t yet been or won’t be added to vendor-neutral models. 

With each passing release of Cisco IOS XE there has been significant and steady growth both in the number of native YANG models supported and in the number of configuration paths supported (Figures 1 and 2). The number of paths provides insight into the vast feature coverage in IOS XE. As of IOS XE release 17.5.1, there are approximately 232,000 XPaths covering a diverse set of features, ranging from newer ones like segment routing to older ones like Routing Information Protocol (RIP). 

Figure 1. Quantity of YANG Data Models Per IOS XE Release

Figure 2. Quantity of YANG Configuration Paths Per IOS XE Release

Complete information on Cisco IOS XE YANG model coverage can be found in the public GitHub repository. The information there is updated with every new IOS XE release. 

For customers evaluating gNMI as their network management protocol it may feel like a dead-end when OpenConfig models are insufficient to handle a use case. It is important to understand that gNMI is not limited to OpenConfig models and vendors can make their native models available with gNMI.  

But are you stuck with having to choose between OpenConfig and native models? No, the mixed-schema approach to data modeling offers the best of both worlds. 

The gNMI Mixed Schema Approach 


The gNMI specification of RPCs and behaviors for managing state on network devices was proposed by OpenConfig. It’s built on the open-source gRPC framework and uses the Protocol Buffers interface definition language (Protobuf IDL). 

At Cisco we believe that our customers should have the best of both native and vendor-neutral data modeling worlds. Enterprise admins can consider using vendor-neutral models to standardize the core elements of configurations and then provide other functionality using the native device models. 

With the mixed–schema approach, operators can mix IETF, OpenConfig, and native models without sacrificing many of the advantages of a model-driven approach to configuration and operational data. Operators can disambiguate the schema source using the gNMI “origin” fields defined by the network device vendor.  

You don’t lose any transactional benefits when using a mixed-schema approach. This is critical, as it means that an operator can, for example, issue a gNMI Set that combines configuration from an OpenConfig and a native model in a single transaction. Any failure will cause the entire transaction to fail, removing the need for the operator to deal with complicated configuration rollback scenarios. 

Support for gNMI in IOS XE 


The mixed schema approach is supported in IOS XE using the “openconfig” origin field for OpenConfig models and “rfc7951” for native models. Here is an example where a gNMI Set request is used to enable NTP on a network device.  

In this scenario, the operator relies on NTP configurations that are not specified in the OpenConfig model. The first update in the request is completely vendor-neutral and enables NTP on any device that supports the openconfig-system model. The second update is specific to Cisco IOS XE and is easily distinguished via the origin field. The two updates are combined in a single request which means that both must be successful for the whole Set transaction to succeed. There is no need for the operator to perform any complicated roll-back of the device configuration if one of the updates were to fail. 

update: <
    path: <
        origin: "openconfig"
        elem: <
            name: "system"
        >
        elem: <
            name: "ntp"
        >
        elem: <
            name: "config"
        >
    >
    val: <
        json_ietf_val: "{\"enabled\":true,\"ntp-source-address\":\"10.1.1.1\",\"enable-ntp-auth\":false}"
    >
>
update: <
    path: <
        origin: "rfc7951"
        elem: <
            name: "Cisco-IOS-XE-native:native"
        >
        elem: <
            name: "ntp"
        >
    >
    val: <
        json_ietf_val: "{\"Cisco-IOS-XE-ntp:mindistance\":2,\"Cisco-IOS-XE-ntp:maxdistance\":10}"
    >
>

This type of mixed-schema approach will work just as well for gNMI Get requests. 
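For readers who script their network automation, the same two-update request can be assembled programmatically. The sketch below uses plain Python dictionaries to mirror the Protobuf structure shown above; it is illustrative only (a real client such as pygnmi or gNMIc serializes actual gNMI SetRequest messages), and the `make_update` helper is hypothetical.

```python
import json

def make_update(origin, elems, value):
    """Build one gNMI-style Update as a plain dict: a path carrying an
    explicit origin plus a JSON_IETF-encoded value."""
    return {
        "path": {"origin": origin, "elem": [{"name": e} for e in elems]},
        "val": {"json_ietf_val": json.dumps(value)},
    }

# Vendor-neutral update: enable NTP via the openconfig-system model.
oc_update = make_update(
    "openconfig", ["system", "ntp", "config"],
    {"enabled": True, "ntp-source-address": "10.1.1.1",
     "enable-ntp-auth": False})

# Native update: IOS XE-specific NTP knobs, distinguished by its origin.
native_update = make_update(
    "rfc7951", ["Cisco-IOS-XE-native:native", "ntp"],
    {"Cisco-IOS-XE-ntp:mindistance": 2,
     "Cisco-IOS-XE-ntp:maxdistance": 10})

# Both updates travel in one Set request, so they succeed or fail atomically.
set_request = {"update": [oc_update, native_update]}
```

The point of the structure is visible in the last line: because both updates ride in one request, the transactional behavior described earlier applies to the pair as a whole.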

Find Out More About gNMI  


The gNMI mixed-schema approach with Cisco IOS XE is an extremely useful tool that network operators can use to ease the transition to a model-driven, programmable, and automated network infrastructure. I highly recommend that every enterprise network administrator familiarize themselves with the gNMI specification and models. We’re moving to software-driven deployment and management, with software running in the cloud and on devices that automate network management. With both types of data models available for model-driven network management, you no longer have to choose between them. You can have the best of both worlds.

Source: cisco.com

Wednesday 5 May 2021

Building Hybrid Work Experiences: Details Matter


The Shift From Remote Work to a New Hybrid Work Environment

It’s exciting to see more organizations start to make the shift from remote work to a new hybrid work environment, but it’s a transition that comes with a new set of questions and challenges. How many people will return to the office, and what will that environment look like? Increasingly, we’re hearing that teams are feeling fatigued and disconnected – what tools can help to solve for that? And perhaps most importantly, how can we ensure that employees continuing to work remotely have the same connected and inclusive experience as those that return to the office?

When we think about delivering positive employee experiences, it’s the details that matter. This month, we’re rolling out new Webex features that solve for a few challenges that might be overlooked and yet play an important role in the new hybrid working model.

Give More and Get More From Your Meeting Experiences

Whether in the office or working remotely, no one wants to sit in meeting after meeting listening to a presenter drone on. Or worse, participating in a video meeting where there are so many talking heads and content being shared that you just don’t know where to focus. Making meetings more engaging is important in the world of hybrid work.

We’re expanding our Webex custom layouts functionality to include greater host controls, resulting in a more personalized and engaging meeting experience. As a meeting host or co-host, you have the ability to hide participants who are not on video, bringing greater focus to facial expressions and interactions with video users. Using the slider feature, you are able to show all participants on screen, or focus on just a few. And now hosts and co-hosts can curate and synchronize the content and speakers you want your attendees to focus on, and then push that view to all participants. This allows you to set a common “stage” and establish a more engaging meeting experience for all.


Setting the stage and sharing great content is one half of the equation – but wouldn’t it be helpful to know what your audience thinks about your meeting as well? Yesterday we announced the close of our acquisition of Slido, which brings best-in-class audience interaction capabilities to the Webex platform. With the ability to crowdsource questions, launch polls and quizzes, and solicit real-time feedback from your audience, the meeting experience just became a lot more interesting and valuable to both those hosting and those attending.

Work More Efficiently with Personalized Webex Work Modes


Your morning routine might start with a review of your meeting schedule, or checking unread messages. Or maybe you spend the majority of your time on calls. As a Webex user, you now have the flexibility to modify settings to default to the work mode that’s most important to you, allowing you to work your way more efficiently. And because all work modes come together seamlessly on the Webex platform, you can transition between messaging, calling and meeting with ease.


If you’re like me, you might rely on your message inbox as a To Do list that reminds you where to prioritize your time. To better support this workstyle preference, Webex now offers the ability to easily mark a message as unread, even if you’ve already read it. This visual reminder allows you to work efficiently and effectively – keeping up to speed on conversations, while ensuring that action items don’t fall off your radar.

Reimagining the Workplace


When it comes to planning for a safe and successful return to the office, business leaders are faced with new challenges large and small – everything from reconfiguring workspaces for a hybrid workforce to ensuring that shared surfaces and devices are kept clean and sanitized.

To ensure inclusive collaboration between employees in the office and those working remotely, we’re making it easier for all meeting attendees to be seen and heard. The Webex Room Kit Mini now offers a more powerful 5x digital zoom camera and optimized audio via internal microphones, providing clear audio up to 4 meters away. And the Webex Room Panorama is now configurable for low ceiling height and features flexible placement of content screens, enabling a wider variety of workspaces to support boardroom-style meeting experiences.


And talk about attention to details: Webex now enables easy sanitation of shared devices, with features like medical-grade, alcohol-resistant removable covers for phones and a wipe-down metal grill option on Webex Room Kit Mini devices.

Source: cisco.com

Tuesday 4 May 2021

8 Reasons why you should pick Cisco Viptela SD-WAN

20 years ago, I used to work as a network engineer for a fast-growing company that had multiple data centers and many remote offices, and I remember all the work required simply to onboard a remote site. Basically, it took months of planning and execution, which included ordering circuits, getting connectivity up, and spending hours, and sometimes days, deploying complex configurations to secure the connectivity by establishing encrypted tunnels and steering the right traffic across them. Obviously, all this work was manual. At the time I was very proud of the fact that I was able to do such complex configurations that required so many lines of CLI, but that was the way things were done.


During the decade that followed, we saw a slew of WAN and encryption technologies become available to help with the demand and scale for secure network traffic. MPLS, along with Frame Relay, became extremely popular, and IPsec-based encryption technologies became the norm. All this was predicated on the fact that most traffic was destined for one clear location: the data center that every company had to build to store all its jewels, including applications, databases and critical data. The data center also served as the gateway to the internet.


From a security perspective, the model was simple and had clear boundaries. All infrastructure within the enterprise was trusted and everything outside including the internet and DMZ was labeled as untrusted, so firewalls and other proper security devices were deployed at these boundaries mainly at the data center in order to protect the organization.

The decade that followed brought some disruptive trends. We moved from desktops to laptops, and then mobile devices became the norm. We became more dependent on voice and video services, which meant regular infrastructure updates were frequently needed to deal with increasing demands for bandwidth.

As WAN services became more critical, businesses had to invest in expensive redundant links, with the secondary link sitting idle as a backup in case of a primary link failure. Although there were some challenges, this model worked out pretty well for some time.

The rise of Cloud Computing


Although cloud computing has been around since the early 2000s, rapid adoption did not materialize until recently, due to multiple factors including a general lack of trust and security concerns. Over the last 5 years, however, a new trend picked up, and many organizations started to see benefits in cloud computing that allowed for cost savings and more flexibility. For example, a small company can now run its servers on a cloud service provider (CSP) the likes of AWS or Azure rather than spending tons of capex money to build a data center. Basically, mindsets are changing even in conservative sectors such as financials, as per the following quote from a banking customer.

“In 2020, we left our data centers behind and moved to the public cloud to create exceptional banking experiences for our customers. The agility, scalability and elasticity of the cloud are helping us build the bank of the future.”

In addition, Software as a Service (SaaS) is another trend that is also changing the way we consume applications. A long list of critical applications that include Office 365, Salesforce, WebEx, Box and many more are now being served from the cloud.

While the move to the cloud has been accelerating over the last 5 years, the COVID pandemic has made this trend accelerate exponentially, and with it the need for a new architecture that is better suited to these new, diverse challenges.


The need for SD-WAN


As organizations increasingly adopt SaaS and IaaS, the old model of networking no longer works, for the main reason that services no longer reside in one place but are now distributed across the internet on multiple clouds. Basically, we can no longer rely on the data center as the gateway to the internet, because going that route no longer gives us the optimal path; it introduces more latency, culminating in a sub-optimal user application experience. Also, increased traffic at the data center requires expensive links as well as network and security equipment that can support the throughput.

In addition, the customer consumption model for connectivity is changing: rather than spending a lot of money on expensive MPLS links, companies can now utilize their branch backup links or go with cheaper ones at a fraction of the cost. Although direct internet access (DIA) links provide a great way to offload noncritical internet traffic, using them beyond that requires those links to be secured, which brings more challenges to IT organizations.

Software-defined WAN was introduced to solve all these problems by decoupling the data plane from the control and management planes, creating a secure overlay and, much like a car’s GPS, providing the intelligence to route a packet to the right destination while avoiding congestion attributed to loss, latency and jitter. Most importantly, it relies on a single management interface that makes the provisioning and management of the WAN extremely simple.

Why Cisco Viptela?


Cisco acquired Viptela, a leading SD-WAN provider, in 2017. Since then, Cisco has integrated the solution into its long line of WAN routers, introduced the Catalyst 8K family (a new router platform designed specifically for SD-WAN and cloud), added a long list of cloud innovations by working with leading Cloud Service Providers (CSPs), and deployed the solution at thousands of customer sites. To better understand the benefits that Cisco Viptela brings, let’s break down the conversation into the following 8 key areas:

Centralized Management: One of the key benefits that Cisco Viptela provides is the use of centralized management using vManage to not only provision and monitor SD-WAN fabric policies but to also provide capabilities to integrate with external systems such as provisioning transit gateways on AWS and automating tunnel creation to a Secure Internet Gateway (SIG) thus providing the administrator with one tool to simplify solution roll out.

Bandwidth Augmentation: The ability to offload traffic from expensive MPLS links comes from the fact that Viptela SD-WAN is link agnostic, so multiple internet links can achieve the same availability and performance as a single premium link at a fraction of the price and still meet the same SLA.

Application Performance Optimization: Applications have different requirements when it comes to quality of service. Some have issues with even a little delay, some are sensitive to loss, and some behave poorly if there is jitter. SD-WAN features such as TCP optimization, DRE and application-aware routing are among the tools we can use to get around congestion issues and deliver an optimal quality of experience.

Secure Direct Internet Access: Leveraging many years of security expertise, the Cisco Security stack which includes Firewall, IPS, URL filtering, TLS Proxy and advanced malware protection can be deployed at the branch or on Cloud using Cisco Umbrella which gives customers the confidence to utilize branch breakout links, saving cost and enhancing the overall application experience especially for cloud-based services.

Middle Mile Optimization: Colo presence provides a lot of value to customers, including direct access to CSPs through express routes, service chaining, and much more. In this situation, Cisco SD-WAN extends the fabric and provides a management interface to onboard and manage the environment.


Cloud OnRamp for IaaS: The key benefit of this feature is that it not only allows us to use the same simple flow to automate connectivity to all key Cloud Service Providers (AWS, Azure and GCP), but once the SD-WAN fabric is extended to the cloud, customers get to use all SD-WAN features in the cloud, with all configuration done from the same vManage console. In certain cases, the CSP network can even be used as a backbone for passing site-to-site traffic, reducing latency to a specific destination.

Cloud onRamp for SaaS: This feature provides optimal experience for SaaS applications by utilizing internal probing and external telemetry received from SaaS application vendors. Microsoft Office 365 offers a great example of this feature. In addition to the probing intelligence built into SD-WAN, Microsoft will send key URLs along with new recommendations based on internal dynamic data.

Analytics: The Cisco vAnalytics platform is offered as a Service and provides a graphical interface of the fabric performance with the ability to drill down into specific areas such as network availability, carrier, tunnel and application performance. Other Cisco applications such as Cisco StealthWatch and Cisco ThousandEyes can also be used to provide more analytics.

In summary, as networking moves into the cloud, the internet will play a role as critical as the one the LAN played in the past. Cisco Viptela SD-WAN, a highly reliable and resilient solution whose rich features integrate cloud optimization, security, and advanced analytics, can play a major role in helping organizations manage this disruptive WAN phase, and it will be the foundation for Secure Access Service Edge (SASE), but that is a discussion for another blog.

Source: cisco.com

Monday 3 May 2021

Cisco – the Bridge to an API-first, Cloud Native World


The traditional development of applications is giving way to a new era of modern application development.

Modern apps are on a steep rise. Increasingly, the application experience is the new customer experience. Faster innovation velocity is needed to deliver on constantly changing customer requirements. The cloud native approach to modern application development can effectively address these needs:

◉ Connect, manage and observe across all physical, virtual, and cloud native API assets

◉ Secure and develop using distributed APIs, Applications and Data Objects

◉ Provide full-stack observability from the API layer down to bare metal and across multi-SaaS; this is table stakes for globally distributed applications

“Shifting to an API-first application strategy is critical for enterprise organizations as they rearchitect their future portfolio,” said Michelle Bailey, Group Vice President, General Manager and Research Fellow at IDC.

Modern World of App Development

Modern applications are built using composable cloud native architectures. We see more modern apps being built and deployed because application components are disaggregated from a single integrated image into reusable services. These and other APIs are distributed across multiple SaaS, cloud, or on-premises environments, and developers can use APIs from across all these properties to build their apps.

The benefits of this ideal API-first world in a native hybrid-cloud, multi-cloud future are:

◉ Greater uptime: only the components requiring an upgrade are taken offline as opposed to the entire application

◉ Choice: ability for developers to pick and choose APIs wherever they reside and matter most to their applications and businesses gives them flexibility

◉ Agile teams: small teams focused on a specialized component of the application means developers can work more autonomously, increasing velocity while enhancing collaboration with SRE and security teams

◉ Unleashing innovation: enables developers to specialize in their areas of expertise

Cisco — The Bridge to an API-first, Cloud Native World

We want a future where a developer can pick and choose any APIs (internal or 3rd party) needed to build their app, consume them, and assign policies consistently across multiple providers; connect, secure, observe, and upgrade apps easily; and have confidence in the reputation and security of the apps, from the moment the IDE is fired up all the way into production.

And do all of this with velocity and minimal friction between development and cloud engineering/SRE teams.

Even as some sprint out ahead in the cloud native world, bringing everyone along – wherever they are on their journey – is a top priority. We still need to bridge back to where the data resides in data centers, SaaS vendors, and in some cases even mainframes.

Developers will need to discover and safely connect to the data and APIs wherever they reside, and to see top-down into their apps. Observability throughout the stack is needed.

A seamless developer experience will require the following:

◉ API-layer connectivity will influence all networking. API discovery, policy, and inter-API data flows will drive all connectivity configuration and policies down the infrastructure stack

◉ APIs and Data Objects are the new security perimeter. The Internet is the runtime for all modern distributed applications, therefore the new security perimeters are now diffused and expanded

◉ Observability is key to API-first distributed cloud native apps. Observability at the API and cloud native layers, with well-defined Service Level Objectives (SLOs) and AI/ML-driven insights, drives down Mean Time To Resolution (MTTR) and delivers an exceptional application experience

Here’s how we’re solving for it:

◉ Portfolio of mesh and connectivity software services that allow for the discoverability, consumption, data flow connectivity and real-time observability of distributed modern applications built across cloud native services at any layer of the stack, and integrates with event-driven services that bridge the monolithic world to the new cloud native world

◉ Portfolio of security services that detect, scan, observe, score and secure APIs (internal or 3rd party), while simultaneously ensuring real-time compliance reporting and integrations with existing development, deployment and operating toolchains

Source: cisco.com

Sunday 2 May 2021

The Cloud can be Simple, Agile, and Secure for Broadcasters

At a time when production from anywhere is a must, those that create, distribute, and secure content needed to pivot overnight, adopting new tech and workflows. Back in the old days, the studio was the central repository for this content, but now that it needs to be securely shared online with teammates around the world who are working from their home offices, the challenges continue to mount.

Read More: 100-490: Supporting Cisco Routing and Switching Network Devices (RSTECH)

Now more than ever, high availability is needed for remote workers. High availability, along with fault tolerance and resiliency, are standards the media industry has long sought when it comes to creating and managing content. The industry is faced with the need to transform, and to do it quickly, with content distribution using IP as the backbone and workloads like content creation and production in the cloud. The “new normal” will be a hybrid model that combines work from anywhere with some on-premises activity. How do we accommodate this sudden shift in the way we do business?

The Umbrella multi-function security solution

Cloud infrastructure has plenty of capacity but faces hurdles


The term “cloud” has become a ubiquitous word used throughout multiple industries and businesses around the world. It shouldn’t be surprising that broadcast and media enterprises are trying to leverage this common infrastructure for multiple workflows within their environment. However, the cloud can have many unique challenges for the media industry that make this transition a little more difficult to undertake.

Just move it to the cloud: sounds simple, right? Everything these days is connected to the Internet through core routers in the data center (which are traditionally over-provisioned). However, it isn’t that easy. Standards groups like the Video Services Forum (VSF) and the Society of Motion Picture and Television Engineers (SMPTE) are working on solving these problems over WAN infrastructure. For example, SMPTE ST 2110 is an effort to move production to IP by breaking apart video, audio, and ancillary data to enable more flexible workflows. The problem with cloud infrastructure now isn’t capacity but rather loss, jitter, and latency.

With real-time production, even the smallest amount of loss, jitter, or latency can have a large impact on content. It is therefore unsurprising that the industry has started to provide retransmission mechanisms in video transport to guarantee delivery through cloud infrastructure, via protocols such as Reliable Internet Stream Transport (RIST) and Secure Reliable Transport (SRT). With the reduction in cost and improvement of technology, large-scale distribution, processing, and other workflows can now be moved entirely to the cloud.

This migration from on-prem to the cloud can address agility and economy of scale. The infrastructure cloud provides can be more cost-effective for fulfilling spike workloads in an on-demand way. In the cloud, we treat everything as a resource pool that can be changed and re-provisioned as needed. This means we could use the same resources for ingest, transcode, playout, and others on an ad-hoc basis.

Agility is key in the “new normal”


Next to capital cost savings, this agility of workflows is the main reason for using the cloud. Agility also enables remote work when access to physical infrastructure is limited, which has been especially important during the recent health crisis. Media entities that were ahead of the game with cloud workflows had a head start. These agile workflows span the media supply chain, from Media Asset Management (MAM) services and beyond. With distribution services for linear broadcast and Video on Demand (VOD), we have even seen production workflows start to move into the cloud.

Security, visibility, and analytics are more important than ever


With all these workflows moving ahead simultaneously, we need to start thinking about visibility, security, and analytics and how they affect business as usual. We need to talk about visibility in the cloud because, most of the time, cloud services are accessed from outside the corporate network. Chances are the devices accessing these services are not controlled by IT, and this leaves a huge security gap, since engineering and IT are unable to monitor users’ traffic and applications.

Wouldn’t it be great if you could pinpoint network issues before they happened? ThousandEyes is a digital experience monitoring platform that provides end-to-end visibility of an entire workflow. This holistic view allows end-users to see where they could be impacted by the path the workflow is taking. For example, with the click of a button your Internet Service Provider (ISP) can see your entire network to pinpoint challenges that are causing problems, like maybe an old tablet that’s slowing down all your other devices. This gives the ISP the ability to take immediate action from fast troubleshooting to an even faster resolution.

ThousandEyes Internet Insights™

Data control is another area that needs a closer look. Engineering and IT have less access to data when it is stored in off-prem cloud services, and users can now access that data from any location on any device, including Bring Your Own Devices (BYODs). Along this line, Cisco Umbrella offers a single pane of glass for the entire security portfolio, with a flexible portal that addresses the security of the cloud, connected devices, remote users, branch users, and more. This cloud-delivered security service offers cloud firewalls, secure web gateways, and DNS-layer security, which can be applied over SD-WAN, to on-prem users, and, most importantly, to remote users, extending security orchestration to people’s home networks and the links they might accidentally click.

There’s also the risk that cloud providers have privileged access to your data, making a chain of ownership controls imperative. This is especially true for our media customers. The media enterprise’s most valuable asset is content, and they need to know who’s accessing it. One interesting technology that can be leveraged here is blockchain.

Blockchain, the technology behind bitcoin, is a secure and encrypted digital database that can be shared between all parties in a distributed fashion. All transactions that occur with the data in question are recorded, verified, and stored in the database. This database is comprised of a distributed ledger technology, and in it, multiple copies of the data exist across the network instead of being centralized. This would be an ideal method to maintain a chain of ownership over media assets.
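To make the mechanism concrete, here is a minimal sketch of a hash-chained ledger in Python. This is a generic illustration of the idea, not any particular blockchain product: each record embeds the hash of the previous one, so tampering with an earlier entry invalidates every entry after it.

```python
import hashlib
import json

def block_hash(block):
    # Hash the block's canonical JSON form
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_record(chain, record):
    # Each new block embeds the hash of the previous block
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "record": record})
    return chain

def verify(chain):
    # Recompute every link; any tampering breaks the chain
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
append_record(chain, {"asset": "episode-042.mxf", "owner": "studio-a"})
append_record(chain, {"asset": "episode-042.mxf", "owner": "affiliate-b"})
assert verify(chain)

chain[0]["record"]["owner"] = "attacker"   # tamper with history
assert not verify(chain)
```

In a real distributed ledger, multiple parties hold copies of the chain and agree on new blocks via a consensus protocol; the hash linkage above is what makes the recorded chain of ownership tamper-evident.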

Source: cisco.com

Tuesday 27 April 2021

F5 & Cisco ACI Essentials – Dynamic pool sizing using the F5 ACI ServiceCenter

APIC EndPoints and EndPoint Groups

When dealing with a Cisco ACI environment you may have wondered whether to use an application-centric or a network-centric design. Both are valid designs. Regardless of the strategy, the ultimate goal is to have an accessible and secure application/workload in the ACI environment. An application is comprised of several servers, each performing a function for the application (web server, DB server, app server, etc.). Each of these servers may be physical or virtual and is treated as an endpoint on the ACI fabric. Endpoints are devices connected to the network directly or indirectly; they have an address and attributes, and can be physical or virtual. Endpoint examples include servers, virtual machines, network-attached storage, or clients on the Internet. An EPG (EndPoint Group) is an object that contains a collection of endpoints, which can be added to an EPG either dynamically or statically. Take a look at the relationship between different objects on the APIC.
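In the APIC object model, learned endpoints show up as objects of class fvCEp, which can be read with a class query over the REST API. As a small illustration, the sketch below flattens such a response into (ip, mac, dn) tuples; the payload values here are made up, though the field names follow the APIC object model.

```python
# Hypothetical fvCEp class-query response, trimmed to the attributes
# used below (values are made up for illustration)
sample_response = {
    "imdata": [
        {"fvCEp": {"attributes": {
            "ip": "192.168.56.11",
            "mac": "00:50:56:AA:BB:01",
            "dn": "uni/tn-app1/ap-web/epg-frontend/cep-00:50:56:AA:BB:01"}}},
        {"fvCEp": {"attributes": {
            "ip": "192.168.56.12",
            "mac": "00:50:56:AA:BB:02",
            "dn": "uni/tn-app1/ap-web/epg-frontend/cep-00:50:56:AA:BB:02"}}},
    ]
}

def endpoints(payload):
    """Flatten an APIC fvCEp class query into (ip, mac, dn) tuples."""
    for item in payload["imdata"]:
        attrs = item["fvCEp"]["attributes"]
        yield attrs["ip"], attrs["mac"], attrs["dn"]

for ip, mac, dn in endpoints(sample_response):
    print(ip, mac, dn)
```

Note how the distinguished name (dn) encodes the tenant, application profile, and EPG the endpoint belongs to, which is exactly the hierarchy shown in the figure below.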

ACI object relationship hierarchy

Relationship between Endpoints and Pool members


If an application is served by web servers with IP addresses in the range 192.168.56.*, then these addresses will be present as endpoints in an endpoint group (EPG) on the APIC. From the perspective of the BIG-IP, these web servers are pool members of a particular pool.

Relationship between Endpoints and Pool members

The F5 ACI ServiceCenter is an application developed on the Cisco ACI App Center platform designed to run on the APIC controller. It has access to both APIC and BIG-IP and can correlate existing information on both to provide a mapping as follows.

BIG-IP                                   APIC
VIP : Pool : Pool Member(s)              Tenant : Application Profile : Endpoint Group

This gives an administrator a view of how the APIC workload is associated with the BIG-IP, and of which applications and virtual IPs are tied to a tenant.

Dynamic EndPoint Attach and Detach


Let’s think back to our application, which is being hosted on, say, hundreds of servers. These servers could be added to an APIC EPG statically by a network admin, or dynamically through a vCenter or OpenStack APIC integration. In either case, these endpoints ALSO need to be added to the BIG-IP, where they can be protected from malicious attacks and/or load-balanced. This can be a very tedious task for an APIC or BIG-IP administrator.


Using the dynamic EndPoint attach and detach feature on the F5 ACI ServiceCenter this burden can be reduced. The application has the ability to adjust the pool members on the BIG-IP based on the server farm on the APIC. On APIC when an endpoint is attached, it is learned by the fabric and added to a particular tenant, application profile and EPG on the APIC. The F5 ACI ServiceCenter provides the capability to map an EPG on the APIC to a pool on the BIG-IP. The application relies on the attach/detach notifications from the APIC to add/delete the BIG-IP pool-members.
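Conceptually, the reconciliation performed on each attach/detach notification is simple set arithmetic: the pool is driven toward the EPG's current membership. This is a simplified sketch of that idea, not the actual ServiceCenter code.

```python
def reconcile(epg_endpoints, pool_members):
    """Return (to_add, to_remove) so the pool mirrors the EPG."""
    epg = set(epg_endpoints)
    pool = set(pool_members)
    return sorted(epg - pool), sorted(pool - epg)

to_add, to_remove = reconcile(
    ["192.168.56.11", "192.168.56.12"],   # endpoints learned by the fabric
    ["192.168.56.12", "192.168.56.99"],   # current BIG-IP pool members
)
# 192.168.56.11 joins the pool; 192.168.56.99 is removed from it
```

Because the mapping is driven by APIC notifications rather than polling, a newly attached endpoint becomes a pool member (and a detached one is removed) without any manual intervention on the BIG-IP.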

Mapping EPG to Pool members

There are different ways in which the dynamic mapping can be leveraged using the F5 ACI ServiceCenter, depending on the L4-L7 configuration. In all the scenarios described below, the L4-L7 configuration is deployed on the BIG-IP using AS3 (Application Services 3 Extension), a flexible, low-overhead mechanism for managing application-specific configurations on a BIG-IP system.

Scenario 1: Declare L4-L7 configuration using F5 ServiceCenter

Scenario 2: L4-L7 configuration already exists on the BIG-IP

Scenario 3: Use dynamic mapping but do not declare the L4-L7 configuration using the F5 ServiceCenter

Scenario 4: Use the F5 ServiceCenter API’s to define the mapping along with the L4-L7 configuration

Let’s take a look at each one in detail:

Scenario 1: Declare L4-L7 configuration using F5 ServiceCenter


Let’s assume there is no existing configuration on the BIG-IP and a new application needs to be deployed, front-ended by a VIP, pool, and pool members. The F5 ACI ServiceCenter provides a UI that can be used to deploy the L4-L7 configuration and create a Pool <-> EPG mapping.

Step 1: Define an application using one of the in-built templates

Defining an Application using built-in templates

Step 2: Click on the Manage Endpoint mappings button to create a mapping

Managing Endpoint mappings

Scenario 2: L4-L7 configuration already exists on the BIG-IP


If L4-L7 configuration using AS3 already exists on the BIG-IP, the F5 ACI ServiceCenter will detect all partitions and applications that are compatible with AS3. Configuration for a particular partition/application on the BIG-IP can then be updated to create a Pool <-> EPG mapping. However, there is one condition: a pool can have either static or dynamic members, so if the pool already has static members, those will have to be deleted before a dynamic mapping can be created. To maintain the dynamic mapping, any future changes to the L4-L7 configuration on the BIG-IP should be done via the ServiceCenter.

Scenario 3: Use dynamic mapping but do not declare the L4-L7 configuration using the F5 ServiceCenter


The F5 ACI ServiceCenter can be used just for the dynamic mapping and pool sizing, and not for defining the L4-L7 configuration. With this method, the entire AS3 declaration, along with the mapping, is sent directly to the BIG-IP using AS3.

Sample declaration (The members and constants section creates the mapping between Pool<->EPG)
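A declaration of this shape is sketched below as a Python dict for readability. The class names follow the AS3 schema; the constants entry that the ServiceCenter matches on is a placeholder, not an authoritative key name, so check the product documentation for the exact format your version expects.

```python
# Illustrative AS3 declaration with event-driven (dynamic) pool members.
# Tenant/application names are hypothetical; the Constants entry tying
# the pool to an APIC EPG is a placeholder, not an authoritative key.
declaration = {
    "class": "AS3",
    "action": "deploy",
    "declaration": {
        "class": "ADC",
        "schemaVersion": "3.0.0",
        "app1": {                       # hypothetical tenant
            "class": "Tenant",
            "web": {                    # hypothetical application
                "class": "Application",
                "template": "http",
                "serviceMain": {
                    "class": "Service_HTTP",
                    "virtualAddresses": ["10.10.10.100"],
                    "pool": "web_pool"
                },
                "web_pool": {
                    "class": "Pool",
                    "members": [{
                        # "event" lets members be pushed dynamically
                        "addressDiscovery": "event",
                        "servicePort": 80
                    }]
                },
                "constants": {
                    "class": "Constants",
                    # Placeholder mapping of pool -> APIC EPG
                    "web_pool": "uni/tn-app1/ap-web/epg-frontend"
                }
            }
        }
    }
}
```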


Since the declaration is AS3, the F5 ACI ServiceCenter will automatically detect the Pool <-> EPG mapping, which will then be viewable from the inventory tab.


Scenario 4: Use the F5 ServiceCenter API’s to define the mapping along with the L4-L7 configuration


Finally, if the UI is not appealing and end-to-end automation is the goal, the F5 ACI ServiceCenter exposes an API call through which the mapping, as well as the L4-L7 configuration done in Scenario 1, can be completely automated. Here the declaration is passed to the F5 ACI ServiceCenter through the APIC controller and NOT directly to the BIG-IP.

URI: https://<apic_controller_ip>/appcenter/F5Networks/F5ACIServiceCenter/updateas3data.json

Body/declaration
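As a rough sketch, the call can be assembled as below. The APIC-cookie session token is the standard APIC REST authentication mechanism, but verify the exact auth flow for app-center endpoints in your environment; the controller address placeholder is kept as in the post.

```python
import json

def build_updateas3_request(apic_base, token, declaration):
    """Build the ServiceCenter updateas3data.json call (not sent here)."""
    return {
        "url": f"{apic_base}/appcenter/F5Networks/F5ACIServiceCenter/updateas3data.json",
        "headers": {
            "Cookie": f"APIC-cookie={token}",   # token from /api/aaaLogin.json
            "Content-Type": "application/json",
        },
        "body": json.dumps(declaration),
    }

req = build_updateas3_request(
    "https://<apic_controller_ip>",   # placeholder kept from the post
    "EXAMPLE_TOKEN",
    {"class": "AS3", "action": "deploy"},
)
# Send with e.g. the requests library:
#   requests.post(req["url"], headers=req["headers"], data=req["body"])
```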


Knowing how AS3 works is essential, since it is a declarative API and using it incorrectly can result in incorrect configuration. Any of the methods above works; the decision on which to use depends on the operational model that fits best in your environment.

Source: cisco.com