Tuesday, 17 May 2022

Network Service Mesh Simplifies Multi-Cloud / Hybrid Cloud Communication

Kubernetes networking is, for the most part, intra-cluster. It enables communication between pods within a single cluster:

The most fundamental service Kubernetes networking provides is a flat L3 domain: Every pod can reach every other pod via IP, without NAT (Network Address Translation).

The flat L3 domain is the building block upon which more sophisticated communication services, like Service Mesh, are built:

Application Service Mesh architecture.

Fundamental to a service mesh’s ability to function is that the service mesh control plane can reach each of the proxies over a flat L3 network, and that the proxies can reach each other the same way.

This all “just works” within a single Kubernetes cluster, precisely because of the flat L3-ness of Kubernetes intra-cluster networking.

Multi-cluster communication


But what if you need workloads running in more than one cluster to communicate?

If you are lucky, all of your clusters share a common, flat L3. This may be true in an on-prem situation, but often is not. It will almost never be true in a multi-cloud/hybrid cloud situation.

Often the solution proposed involves maintaining a complicated set of L7 gateway servers:

This architecture introduces a great deal of administrative complexity. The servers have to be federated together, connectivity between them must be established and maintained, and L7 static routes have to be kept up. As the number of clusters increases, this becomes increasingly challenging.

What if we could get a set of workloads, no matter where they are running, to share a common flat L3 domain:

The green pods could reach each other over a flat green L3 domain.

The red pods could reach each other over a flat red L3 domain.

A pod attached to both domains could reach the green pods over the green flat L3 domain and the red pods over the red one.

This points the way to a solution to the problem of stretching a single service mesh with a single control plane across workloads running in different clusters/clouds/premises, etc.:

An instance of Istio could be run over the red vL3, and a separate Istio instance could be run over the green vL3.

Then the red pods are able to access the red Istio instance.

The green pods are able to access the green Istio instance.

The red/green pod can access both the red and the green Istio instances.

The same could be done with the service mesh of your choice (such as Linkerd, Consul, or Kuma).

Network Service Mesh benefits


Network Service Mesh itself does not provide traditional L7 services. It provides the complementary service of a flat L3 domain that individual workloads can connect to, so that a traditional service mesh can do what it does *better* and more *easily* across a broader span.

Network Service Mesh also enables other beneficial and interesting patterns. It allows for multi-service mesh, the capability for a single pod to connect to more than one service mesh simultaneously.

And it allows for a “multi-corp extranet”: it is sometimes desirable for applications from multiple companies to communicate with one another on a common service mesh. Network Service Mesh has sophisticated identity federation and admission policy features that enable one company to selectively admit another company’s workloads into its service mesh.

Source: cisco.com

Monday, 16 May 2022

Get Ready to Crack Cisco 500-301 CCS Exam with 500-301 Practice Test

Cisco 500-301 CCS Exam Description:

The Cisco Cloud Collaboration Solutions (CCS) exam (500-301) is a 60-minute, 45-55 question assessment that tests a candidate's knowledge of the technical skills needed by a sales engineer to design and sell Cisco cloud collaboration solutions.

Cisco 500-301 Exam Overview:

  • Exam Name- Cisco Cloud Collaboration Solutions
  • Exam Number- 500-301 CCS
  • Exam Price- $300 USD
  • Duration- 60 minutes
  • Number of Questions- 45-55
  • Passing Score- Variable (750-850 / 1000 Approx.)
  • Recommended Training- Cisco SalesConnect
  • Exam Registration- PEARSON VUE
  • Sample Questions- Cisco 500-301 Sample Questions
  • Practice Exam- Cisco Video Collaboration Practice Test

Saturday, 14 May 2022

What is Container Scanning (And Why You Need It)

I want to share my experience using vulnerability scanners and other open-source projects for security. First, we need container scanning to make our apps and solutions secure and safe. The central concept of container scanning is to scan OS packages and programming-language dependencies. Security scanning helps detect Common Vulnerabilities and Exposures (CVEs). The modern proactive security approach integrates container scanning into CI/CD pipelines, which helps detect and fix vulnerabilities in code, containers, and IaC configuration files before release or deployment.

How does it work?

Scanners pull the image from the Docker registry and analyze each layer. On first run, a scanner downloads its vulnerability database; the community (security specialists, vendors, etc.) continuously identifies, defines, and adds publicly disclosed cybersecurity vulnerabilities to that catalog. Keep in mind that when you run a scanner on your server or laptop, it can take some time to update its database.
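
The matching step at the heart of this process can be sketched in a few lines: compare installed package versions against a vulnerability catalog. The database and package list below are tiny hand-made samples for illustration; this is not how Trivy or Grype are actually implemented.

```python
# Toy sketch of a scanner's matching step: compare installed package versions
# against a vulnerability catalog. The database below is a tiny hand-made
# sample; real scanners pull feeds from NVD, vendor advisories, etc.

# (package, first fixed version) -> CVE id
VULN_DB = {
    ("openssl", "1.1.1n"): "CVE-2022-0778",
    ("zlib", "1.2.12"): "CVE-2018-25032",
}

def parse_version(v):
    """Naive version parser, for illustration only; real scanners implement
    distro-specific version comparison (dpkg, rpm, semver, ...)."""
    return [int(p) if p.isdigit() else p for p in v.replace("-", ".").split(".")]

def scan(installed):
    """Flag installed packages older than the first fixed version."""
    findings = []
    for name, version in installed.items():
        for (vuln_name, fixed_in), cve in VULN_DB.items():
            if name == vuln_name and parse_version(version) < parse_version(fixed_in):
                findings.append({"id": cve, "package": name,
                                 "installed": version, "fixed_in": fixed_in})
    return findings

if __name__ == "__main__":
    # Pretend this package list was extracted from an image layer
    layer_packages = {"openssl": "1.1.1k", "zlib": "1.2.12", "bash": "5.1"}
    for f in scan(layer_packages):
        print(f"{f['id']}: {f['package']} {f['installed']} (fixed in {f['fixed_in']})")
```

Here the outdated openssl would be flagged, while zlib, already at the fixed version, would not.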

Usually, scanners and other security tools use multiple resources for their database: 

◉ Internal database 

◉ National Vulnerability Database (NVD) 

◉ Sonatype OSS Index 

◉ GitHub Advisories 

◉ Scanners can also be configured to incorporate external data sources (e.g., https://search.maven.org/)

As a result, we get output listing vulnerabilities, the names of affected components or libraries, vulnerability IDs, severity levels (Unknown, Negligible, Low, Medium, High), and a Software Bill of Materials (SBOM). From this output we can see, or write to a file, the package versions in which the vulnerabilities were fixed. This information helps when changing or updating packages, or when rebasing the image on a secure one.

Comparing Trivy and Grype

I chose to compare two open-source vulnerability scanners. Trivy and Grype are comprehensive scanners for vulnerabilities in container images, file systems, and Git repositories. For the scanning and analysis, I chose a Debian image, as it’s more stable for production (greetings to Alpine).

Part of the Grype output

Part of the Trivy output

A couple of advantages of Trivy are that 1) it can scan Terraform configuration files, and 2) its output format (a table by default) is more readable, thanks to colored output and table cells that link to full vulnerability descriptions.

Both projects can write output in JSON and XML using templates. This is beneficial when integrating the scanners into CI/CD, or when using the report in another custom workflow. Trivy’s report, however, is more informative, thanks to the vulnerability abstracts and extra links with descriptions.
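
As a sketch of that CI/CD integration, the snippet below parses a Trivy-style JSON report and collects findings at or above a severity threshold, so a pipeline stage could fail the build on them. The field names (`Results`, `Vulnerabilities`, `VulnerabilityID`, `PkgName`, `Severity`) mirror Trivy's JSON schema at the time of writing; verify them against the output of your scanner version.

```python
SEVERITY_RANK = {"UNKNOWN": 0, "LOW": 1, "MEDIUM": 2, "HIGH": 3, "CRITICAL": 4}

def gate(report, fail_at="HIGH"):
    """Collect findings at or above the threshold severity from a Trivy-style
    JSON report. Field names follow Trivy's schema at the time of writing;
    check them against your scanner version."""
    threshold = SEVERITY_RANK[fail_at]
    offenders = []
    for result in report.get("Results", []):
        for vuln in result.get("Vulnerabilities") or []:
            if SEVERITY_RANK.get(vuln.get("Severity", "UNKNOWN"), 0) >= threshold:
                offenders.append((vuln["VulnerabilityID"], vuln["PkgName"], vuln["Severity"]))
    return offenders

if __name__ == "__main__":
    # Minimal report shaped like `trivy image --format json <image>` output
    sample = {"Results": [{"Target": "debian:11",
                           "Vulnerabilities": [
                               {"VulnerabilityID": "CVE-2022-0001", "PkgName": "libfoo", "Severity": "HIGH"},
                               {"VulnerabilityID": "CVE-2022-0002", "PkgName": "libbar", "Severity": "LOW"}]}]}
    for vid, pkg, sev in gate(sample):
        print(f"{sev}: {vid} in {pkg}")
    # In a real CI job you would exit nonzero here when the list is non-empty.
```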

Part of Trivy output JSON

Additional features


◉ You can scan private images and ​self-hosted container registries.

◉ Filtering vulnerabilities is a feature of both projects. Filtering can help highlight critical issues or find specific vulnerabilities by ID. A recent example is CVE-2021-44228 (Log4j), a vulnerability in a common Java logging library reused in many other projects, which many security specialists and DevOps engineers had to hunt for across their systems.

◉ You can integrate vulnerability scanners into Kubernetes.

◉ The Trivy kubectl plugin lets you scan images running in a Kubernetes pod or deployment.

KubeClarity


There is a tool for detection and management of Software Bill Of Materials (SBOM) and vulnerabilities called KubeClarity. It scans both runtime K8s clusters and CI/CD pipelines for enhanced software supply chain security.

KubeClarity’s vulnerability scanner integrates with Grype (which we looked at above) and Dependency-Track.

KubeClarity Dashboard

KubeClarity Dashboard

Based on my experience, I saw these advantages in KubeClarity:

◉ Useful Graphical User Interface
◉ Filtering features capabilities:
    ◉ Packages by license type
    ◉ Packages by name, version, language, application resources
    ◉ Severity by level (Unknown, Negligible, Low, Medium, High)
    ◉ Fix Version

Source: cisco.com

Thursday, 12 May 2022

Latest Innovations in Cisco DNA Software for Wireless

Cisco has continued to deliver on its promise of innovation in our Cisco DNA software for Wireless subscription. Networking demands are increasing and trends in technology are changing, like the need for a safe and productive hybrid work environment. By deploying the latest innovations in Cisco DNA Advantage software for Wireless along with Cisco DNA Center, you can provide your workforce with improved wireless stability, performance, and security. This leads to increased worker productivity, no matter where they are working from.

What’s new?

Wireless 3D Analyzer: Gain a completely new perspective on the typically invisible Wi-Fi radio frequency (RF). 2D maps that show AP placement on the floor and RF propagation from a top-down view no longer cut it, because we live in a 3D world. To ensure proper wireless coverage on every floor and in every building, a network provider needs the ability to view wireless RF from different angles in order to discover and resolve RF coverage holes. The wireless 3D map solves these issues by creating an immersive experience that accurately replicates your floor map and all obstacles. This is an incredible addition to our monitoring and network deployment feature set.

Figure 1: Wireless 3D Analyzer

AI-Enhanced RRM: Leverage artificial intelligence to optimize your wireless performance. Traditional radio resource management (RRM) does not consider trends in usage and critical work hours during the day; radio optimizations simply react to static threshold alarms as they occur. RRM also doesn’t consider the dynamic properties of a wireless network, like the addition of cubicles, furniture, more devices, interference, etc. AI-Enhanced RRM evaluates two weeks’ worth of RF data with artificial intelligence to discover patterns and then proactively optimizes your wireless before issues occur. This leads to stable wireless connectivity and a consistent end-user experience.

Figure 2: AI-Enhanced RRM

AP Performance Advisories: As your wireless network grows to dozens or hundreds of access points, underperforming access points can easily go unnoticed. AP Performance Advisories uses machine learning to measure and benchmark client-experience parameters across all of your access points. It then flags any underperformers and lists them on the advisory dashboard. This helps identify and isolate poor-performing APs based on end-user experience and enables proactive AP performance optimization efforts to maintain client experience. You can monitor KPIs for these poor-performing APs and investigate further, and you get an at-a-glance view of the top 3 poor-performing APs to help prioritize which ones to troubleshoot.
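
The underlying idea (benchmark a client-experience KPI across the fleet and surface the worst offenders) can be illustrated with a simple statistical stand-in. This is not Cisco's actual machine-learning model, just a toy sketch with invented AP names and KPI values:

```python
import statistics

def flag_underperformers(ap_kpis, margin=0.8, top_n=3):
    """Toy stand-in for the idea behind AP Performance Advisories: benchmark a
    client-experience KPI (higher is better) across all APs and flag those
    falling well below the fleet median, worst first."""
    benchmark = statistics.median(ap_kpis.values())
    flagged = {ap: kpi for ap, kpi in ap_kpis.items() if kpi < margin * benchmark}
    worst_first = sorted(flagged, key=flagged.get)   # lowest KPI first
    return worst_first[:top_n], benchmark

if __name__ == "__main__":
    # Invented per-AP client-experience scores
    kpis = {"ap-01": 92, "ap-02": 88, "ap-03": 40, "ap-04": 90, "ap-05": 55, "ap-06": 61}
    worst, benchmark = flag_underperformers(kpis)
    print(f"benchmark={benchmark}, top offenders={worst}")
```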

Figure 3: AP Performance Advisories

Intelligent Capture: Resolve even the most difficult wireless issues with technical insight into metrics from both a client and access point perspective. It provides support for a direct communication link between Cisco DNA Center and access points, so each of the APs can communicate with Cisco DNA Center directly. Using this channel, Cisco DNA Center can receive packet capture (PCAP) data, AP and client statistics, and spectrum data, allowing you to access data from APs that is not available from wireless controllers.

Figure 4: Intelligent Capture

How can I get these features and more?


If you already have a Cisco DNA Advantage subscription in Wireless along with Cisco DNA Center, you will get to utilize these features at no additional cost to you.

If you do not have a Cisco DNA Advantage subscription or if you have a Cisco DNA Essentials subscription, the time to upgrade is now. We will continue to innovate and add more wireless features to our advantage tier.

Source: cisco.com

Tuesday, 10 May 2022

Transform your SD-WAN with IOS-XE

Imagine driving a car in a crowded city where you have never been before. You suddenly find that the car does not have any rear-view or side-view mirrors. How traumatic could that be!

Now contrast that with a familiar road. The one which you know like the back of your hand. You probably never look out for those mirrors. Your brain remembers each nook and corner of that street. You can process all real-time information, correlate it to events and experiences from the past, and make the right decisions instantaneously.

Now let’s look at this from a WAN connectivity standpoint.

Much like the example above, IT specialists are navigating the changes in the world of WAN connectivity in a similar fashion. They value their familiarity with deploying Cisco’s WAN technologies and their experience with Cisco IOS-XE. At the same time, they are aware of evolving business requirements and emerging use cases.

As they chart their course through these unfamiliar waters, it is reassuring to know that Cisco SD-WAN, powered by Cisco IOS-XE, provides multi-cloud access, end-to-end analytics, and application optimization—all on a secure access service edge (SASE) enabled architecture.

Evolution of SD-WAN

WAN connectivity has evolved from merely a way of connecting branches to applications running in data centers. This evolution opens up opportunities for enterprises and organizations alike to determine what a software-defined WAN should look like. Our definition is built on our vast experience deploying such networks with customers around the world, across various industries and verticals.

As customers plan their network evolution, Cisco IOS-XE becomes a familiar innovative engine that addresses the challenges posed by today’s world.

Figure 1. IOS-XE Differentiation and Benefits

The innovation does not limit itself to the software. Cisco also addresses the scale and performance requirements of today’s demanding networks with our award-winning Cisco® Catalyst® 8000 Edge Platforms. Specifically designed for SD-WAN, the Cisco Catalyst Edge Platforms Family provides a flexible, scalable, and secure WAN edge for business-first resiliency and cloud-native agility. What’s more, they offer industry-leading interface flexibility, performance, as well as the ability to host services at scale.

How can Cisco help you?


Updating to the latest innovations of Cisco SD-WAN platforms will ensure that customers stay ahead of the game to drive business growth and success, and provide an exceptional user experience. Cisco provides assistance to update your WAN infrastructure.

Resources available to our customers include:

◉ Template conversion & migration tools as well as validation set-ups on DCloud

◉ Documentation and training guiding customers on best practices and use-case-based scenarios and examples.

◉ Design, consultation, and implementation services are offered by CX.

◉ Mentored Install (MINT) services by our certified

Upgrade today and save!


If you have existing Cisco vEdge Routers or Cisco 1100x Series Integrated Services Routers (ISRs) running Viptela OS, you can receive up to 30% off Cisco DNA subscriptions and selected Cisco Catalyst 8000 Edge Platforms Family and Cisco ISR 1000 Series routers.

Source: cisco.com

Sunday, 8 May 2022

Using CI/CD Pipelines for Infrastructure Configuration and Management

Continuous Integration/Continuous Delivery, or Continuous Deployment, pipelines have been used in the software development industry for years. For most teams, the days of manually taking source code and manifest files and compiling them to create binaries or executable files and then manually distributing and installing those applications are long gone. In an effort to automate the build process and distribution of software as well as perform automated testing, the industry has continuously evolved towards more comprehensive pipelines. Depending on how much of the software development process is automated, pipelines can be categorized into different groups and stages:

◉ Continuous Integration is the practice of integrating code that is being produced by developers. On medium to large software projects, it is common to have several developers or even several teams of developers working on different features or components at the same time. Bringing all this code into a central location or repository is regularly done with a git-based version control system. When the code is merged into a branch, at whatever cadence the development team follows (hourly, daily, weekly), tests ranging from simple to complex can be set up to validate the changes and flush out potential bugs at a very early stage. When performed in an automated fashion, these steps constitute a continuous integration pipeline.

◉ Continuous Delivery takes the pipeline to the next level by adding software builds and release creation and delivery. After the software has been integrated and tested in the continuous integration part of the pipeline, continuous delivery adds further testing and can deploy the newly built software packages in a sandbox or staging environment for close monitoring and additional user testing. As with continuous integration, all steps performed in the continuous delivery part of the pipeline are automated.

◉ Continuous Deployment takes the pipeline to its final level. By this stage, the application has been integrated, tested, built, tested some more, deployed in a staging environment, and tested even more. The continuous deployment stage takes care of deploying the application to the production environment. Several deployment strategies are available, with different risk factors, cost considerations, and complexity. For example, in the basic deployment model, all application nodes are updated to the new version at the same time. While this model is simple, it is also the riskiest: it is not outage-proof and does not provide easy rollbacks. The rolling deployment model, as the name suggests, takes an incremental approach, updating the application nodes in batches. This model provides easier rollback and is less risky than the basic deployment, but it requires that the application run with both new and old code at the same time. In applications that use a micro-services architecture, this last requirement deserves extra attention. Several other deployment models are available, including canary, blue/green, A/B, etc.
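
The rolling model described above can be sketched in a few lines. This is an illustration of the batching and rollback logic only, not a production deployment tool; the node names and health check are invented:

```python
def rolling_update(nodes, new_version, batch_size, health_check):
    """Sketch of a rolling deployment. nodes maps node name -> running version.
    Each batch is switched to the new version and health-checked; a failing
    batch is rolled back and the rollout stops, leaving the remaining nodes on
    the old version. This is why the application must tolerate old and new
    code running side by side."""
    names = list(nodes)
    for i in range(0, len(names), batch_size):
        batch = names[i:i + batch_size]
        previous = {n: nodes[n] for n in batch}
        for n in batch:
            nodes[n] = new_version            # "deploy" the new version to this node
        if not all(health_check(n) for n in batch):
            nodes.update(previous)            # roll back just this batch
            return False
    return True

if __name__ == "__main__":
    fleet = {"node-a": "1.0", "node-b": "1.0", "node-c": "1.0", "node-d": "1.0"}
    # Simulate node-c failing its health check after the update
    ok = rolling_update(fleet, "2.0", batch_size=2, health_check=lambda n: n != "node-c")
    print(ok, fleet)   # the failing batch (node-c, node-d) stays on 1.0
```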

The CI/CD pipeline component of GitLab CE

Why use CI/CD pipelines for infrastructure management


Based on the requirements of the development team, software development pipelines can take different forms and use different components. Version control systems are usually git-based these days (GitHub, GitLab, Bitbucket, etc.). Build and automation servers such as Jenkins, drone.io, and Travis CI, to name just a few, are also popular components of the pipeline. The variety of options and components makes pipelines very customizable and scalable.

CI/CD pipelines have been developed and used for years, and I think it is finally time to consider them for infrastructure configuration and management. The same advantages that made CI/CD pipelines indispensable to any software development enterprise also apply to infrastructure management. Those advantages include:

◉ automation at the forefront of all steps of the pipeline

◉ version control and historical insight into all the changes

◉ extensive testing of all configuration changes

◉ validation of changes in a sandbox or test environment prior to deployment to production

◉ easy roll-back to a known good state in case an issue or bug is introduced

◉ possibility of integration with change and ticketing systems for true infrastructure Continuous Deployment

I will demonstrate how to use GitLab CE as the foundational component of a CI/CD pipeline that manages and configures a simple CML-simulated network. Several other components are involved in the pipeline:

◉ pyATS for creating and taking snapshots of the state of the network both before and after the changes are applied
◉ Ansible for performing the configuration changes
◉ Cisco CML to simulate a 4-node network that will act as the test infrastructure
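
The heart of such a pipeline (snapshot the network, apply the change, snapshot again, diff) can be sketched like this. The `take_snapshot` and `apply_changes` callables are placeholders standing in for the real pyATS and Ansible invocations; the device state below is invented:

```python
def run_change(take_snapshot, apply_changes):
    """Core loop of the infrastructure pipeline: snapshot, change, snapshot,
    diff. The callables are placeholders for real tool invocations (pyATS for
    snapshots, Ansible for configuration), not actual tool APIs."""
    before = take_snapshot()
    apply_changes()
    after = take_snapshot()
    # Report every key whose value changed, as (before, after) pairs
    changed = {key: (before.get(key), after.get(key))
               for key in set(before) | set(after)
               if before.get(key) != after.get(key)}
    return changed

if __name__ == "__main__":
    # Fake device state standing in for a pyATS snapshot of the CML network
    state = {"r1:Gi0/1": "shutdown", "r2:Gi0/1": "up"}
    snapshot = lambda: dict(state)
    def bring_up_interface():
        state["r1:Gi0/1"] = "up"              # what an Ansible play might do
    diff = run_change(snapshot, bring_up_interface)
    print(diff)    # {'r1:Gi0/1': ('shutdown', 'up')}
```

Comparing pre- and post-change snapshots like this is what lets the pipeline validate a change and decide whether to roll back.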

Simple network simulation in Cisco CML

Stay tuned for a deeper dive


Next up in this blog series, we’ll dive deeper into GitLab CE and the CI/CD pipeline component.

Source: cisco.com

Saturday, 7 May 2022

Perspectives on the Future of Service Provider Networking: Evolved Connectivity 

The digital transformation in this decade is demanding more from the network. Multi-cloud, edge, telework, 5G, and IoT are creating an evolved connectivity ecosystem characterized by highly distributed elements needing to communicate with one another in a complex, multi-domain, many-to-many fashion. The world of north-south, east-west traffic flows is quickly disappearing. The evolved connectivity demand is for more connections from more locations, to and from more applications, with tighter Service Level Agreements (SLAs) and involving many, many more endpoints.

Further, enterprises are moving data closer to the sources consuming it and are distributing their applications to drive optimized user experiences. All these new digital assets connect and interact across multiple clouds (private, hybrid, public, and edge).

• 70-80% of large enterprises are working toward executing a multi-cloud strategy
• The number of devices requiring communications will continue to grow
- IoT devices will account for 50% (14.7 billion) of all global networked devices by 2023
- Mobile subscribers will grow from 66% of the global population to 71% of the global population by 2023
• More applications and data requiring network connectivity in new places
- More than 50% of all workloads run outside the enterprise data center
- 90% of all applications support microservices architectures, enabling distributed deployments
• STL Partners’ forecast of the capacity of network edge computing estimates around 1,600 network edge data centers and 200,000 edge servers in 55 telco networks by 2025

Today’s service provider transport network finds itself on a collision course with this evolved connectivity ecosystem. The network is highly heterogeneous, spanning access, metro, WAN, and data center technologies. Stitching these silos together leads to an explosion of complexity and policy state in the network that exists simply to make the domains interoperate. The resulting architecture is burdened with a built-in complexity tax on operations, which hampers operator agility and innovation. As application and endpoint connectivity requirements become increasingly decentralized with their functionality and data deployed across multiple domains, the underlying network is proving too rigid to adapt quickly enough. The status quo has become a complex connectivity mélange with application experience entrusted to network overlays running over best-effort IP, and innovation moves out of the network domain.

Our position: the network should operate like the cloud


As network providers, it’s time we started thinking like cloud providers. From the cloud provider’s perspective, their data centers are simply giant resource pools for their customers’ applications to dynamically consume to perform computing and storage work. Like the cloud, we should instead think of the network as a resource pool for on-demand connectivity services like segmentation, security, or SLA. This resource pool should be built on three key principles:

1. Minimize the capital and operational cost per forwarded Gb
2. Maximize the value the network provides per forwarded Gb (the value from the perspective of the application itself)
3. Eliminate friction or other barriers to applications consuming network services

The cloud operators simplify their resource pool as much as possible and ruthlessly standardize everything from data center facilities down through hardware, programmable interfaces, and infrastructure like hypervisors and container orchestration systems. All the simplification and standardization mean less cost to build, automate, and operate the infrastructure (Principle 1). More importantly, simplification means more resources to invest in innovation (Principle 2). The entire infrastructure can then be abstracted as a resource pool and presented as a catalog of services and APIs for customers’ applications to consume (Principle 3).

Our colleague Emerson Moura’s post later in this series focuses specifically on network simplification; however, we want to spend some time on the subject through the evolved connectivity and cloud provider lens. With connectivity spanning domains, the most fundamental thing we can do is standardize end-to-end on a common data plane to minimize the stitching points between edge, data center, cloud, and transport networks. We refer to this as the Unified Forwarding Paradigm (UFP).

A common forwarding architecture allows us to simplify elsewhere such as IPAM, DNS, and first-hop security. Consistent network connectivity means fewer moving parts for operations as all traffic transiting edge, data center, and cloud would follow common forwarding behaviors and be subject to common policies and tools for filtering and service chaining. And there’s a bonus in common telemetry metrics as well!

Our UFP recommendation is to adopt SRv6 wherever possible and ultimately IPv6 end-to-end. This common forwarding architecture provides a foundation for unified, service-aware forwarding across all network domains and includes familiar services like VPNs (EVPN, etc.) and traffic steering. More importantly, connectivity services may become software-defined. Moving to a UFP will lead to a massive reduction in friction and the network can make a true transition from configuration-centric to programmable, elastic, and on-demand. Imagine network connectivity services like pipes into the cloud or some edge environment moving to a demand-driven consumption model. Businesses no longer need to wait for operators to provision the network service. Operators would expose services via APIs for applications and users to consume in the same manner we consume VMs in the cloud: “I need an LSP/VPN to edge-zone X and I need it for two hours.” And as user and application behaviors change and require updates to the services they’re subscribed to, the change is executed via software and the network responds almost immediately.
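
From a client's point of view, that consumption model might look something like the sketch below. Everything here is hypothetical (the catalog class, the service names, the lease API); it simply illustrates requesting a connectivity service such as "an LSP/VPN to edge-zone X for two hours" and letting the lease expire on its own:

```python
from datetime import datetime, timedelta
from itertools import count

class ConnectivityCatalog:
    """Toy model of a demand-driven connectivity API: a client requests a
    network service for a fixed lease, and the lease expires on its own.
    Entirely hypothetical; real operator APIs and service names will differ."""
    def __init__(self):
        self._ids = count(1)
        self.leases = {}

    def request(self, service, target, hours, now):
        """Grant a time-limited lease on a connectivity service."""
        lease_id = next(self._ids)
        self.leases[lease_id] = {"service": service, "target": target,
                                 "expires": now + timedelta(hours=hours)}
        return lease_id

    def active(self, now):
        """Return the leases still live at the given time."""
        return [l for l in self.leases.values() if l["expires"] > now]

if __name__ == "__main__":
    catalog = ConnectivityCatalog()
    t0 = datetime(2022, 5, 7, 9, 0)
    catalog.request("vpn", "edge-zone-X", hours=2, now=t0)
    print(len(catalog.active(t0 + timedelta(hours=1))))   # 1: lease still live
    print(len(catalog.active(t0 + timedelta(hours=3))))   # 0: lease expired
```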

The relationship between network overlay and underlay will also benefit from standardizing on SRv6/IPv6 and SDN. Today the overlay network is only as good as the underlay serving it. With a unified forwarding architecture and on-demand segment routing services, an SD-WAN system could directly access and consume underlay services for improved quality of experience. For flows that are latency-sensitive, the overlay network would subscribe to an underlay behavior that ensures traffic is delivered as fast as possible without delays. For the overlay networks, the SRv6 underlay that is SDN controlled provides a richer connectivity experience.

Conclusion: from ‘reachability’ to ‘rich connectivity’


Rich connectivity means the network is responsive to the user or application experience and does so in a frictionless manner. It means network overlays can subscribe to underlay services and exert granular control over how their traffic traverses the network. Rich connectivity means applications can dynamically consume low latency or lossless network services, or access security services to enable a zero-trust relationship with other elements they may need to interact with.

We believe service providers who adopt the Unified Forwarding Paradigm and embrace SDN-driven operations and consumption-based rich connectivity service models will transform themselves into platforms for innovation.

Source: cisco.com