Tuesday, 7 June 2022

Implementing Infrastructure as Code: How NDFC Works with Ansible and Terraform

Automation has been a focus of interest in the industry for quite some time now. Among the top tools available, Ansible and Terraform are especially popular with automation enthusiasts like me. While Ansible and Terraform differ in their implementation, they are equally supported by products from the Cloud Networking Business Unit at Cisco (Cisco ACI, DCNM/NDFC, NDO, NX-OS). Here, we will discuss how Terraform and Ansible work with Nexus Dashboard Fabric Controller (NDFC).

First, I will explain how Ansible and Terraform work, along with their workflows. We will then look at the use cases. Finally, we will discuss implementing Infrastructure as Code (IaC).

Ansible – Playbooks and Modules

For those of you who are new to automation, Ansible has two main parts: the inventory file and playbooks. The inventory file gives information about the devices we are automating, including any sandbox environments that have been set up. The playbook acts as the instruction manual for performing tasks on the devices declared in the inventory file.

Ansible becomes a system of documentation once the tasks are written in a playbook. The playbook leverages REST API modules that describe the schema of the data that can be manipulated using REST API calls. Once written, the playbook can be executed using the ansible-playbook command line.
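To make this concrete, here is a minimal sketch of an inventory file and playbook for NDFC. The connection settings and the cisco.dcnm.dcnm_vrf module follow the public cisco.dcnm Ansible collection, but all host names, addresses, credentials, and fabric/VRF values below are illustrative assumptions, not a definitive configuration:

```yaml
# hosts.yml – inventory sketch (all names and values are illustrative)
ndfc:
  hosts:
    ndfc1:
      ansible_host: 10.0.0.10                     # NDFC controller address (example)
  vars:
    ansible_user: admin
    ansible_password: "{{ ndfc_password }}"       # supplied via Ansible Vault
    ansible_connection: ansible.netcommon.httpapi
    ansible_network_os: cisco.dcnm.dcnm

# create_vrf.yml – run with: ansible-playbook -i hosts.yml create_vrf.yml
- name: Create a VRF on an NDFC-managed fabric
  hosts: ndfc
  gather_facts: false
  tasks:
    - name: Ensure the VRF exists
      cisco.dcnm.dcnm_vrf:
        fabric: fabric1        # fabric name is an example
        state: merged
        config:
          - vrf_name: vrf_blue
            vrf_id: 50001
```

The inventory supplies the "who" (devices and credentials) and the playbook the "what" (tasks), which is why the pair doubles as living documentation of the intended configuration.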

Ansible Workflow

Terraform – Terraform Init, Plan and Apply


Terraform has one main part: the TF template. The template contains the provider details, the devices to be automated, and the instructions to be executed. The following are the three main points about Terraform:

1. Terraform defines infrastructure as code and manages the full lifecycle: it creates new resources, manages existing ones, and destroys those no longer necessary.

2. Terraform offers an elegant user experience for operators to predictably make changes to infrastructure.

3. Terraform makes it easy to re-use configurations for similar infrastructure designs.

While Ansible uses one command to execute a playbook, Terraform uses three to four commands to execute a template. terraform init checks the configuration files and downloads the required provider plugins. terraform plan lets the user create an execution plan and check whether it matches the desired intent. terraform apply applies the changes, while terraform destroy lets the user delete the Terraform-managed infrastructure.

Once a template is executed for the first time, Terraform creates a file called terraform.tfstate to store the state of the infrastructure after execution. This file is useful when making mutable changes to the infrastructure. Execution is also declarative: the order in which resources appear in the template doesn't matter.
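As a sketch of that lifecycle, the resource type below is modeled on the public CiscoDevNet DCNM/NDFC Terraform provider; the resource arguments and all values are illustrative assumptions:

```hcl
# main.tf fragment – "dcnm_vrf" and its arguments follow the CiscoDevNet
# DCNM/NDFC provider's style; fabric and VRF names are illustrative.
resource "dcnm_vrf" "blue" {
  fabric_name = "fabric1"
  name        = "vrf_blue"
}

# Typical command sequence:
#   terraform init      # download the required provider plugins
#   terraform plan      # build the execution plan, compared against terraform.tfstate
#   terraform apply     # apply the changes and update terraform.tfstate
#   terraform destroy   # remove the Terraform-managed infrastructure
```

Because the state file records what already exists, a second apply of the same template makes no changes, and editing the resource block produces an in-place update rather than a re-creation where the provider supports it.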

Terraform Workflow

Use Cases of Ansible and Terraform for NDFC


Ansible executes tasks in a top-to-bottom order. When using the NDFC GUI, it gets a bit tedious to manage all the required configuration when there are a lot of switches in a fabric; configuring multiple vPCs, or dealing with network attachments for each of those switches, can get tiring and takes up a lot of time. Ansible modules use a parameter in the playbook called state to perform activities such as creation, modification, and deletion, which simplifies making these changes. The playbook uses whichever modules fit the task at hand to execute the required configuration modifications.
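For example, the same task can create or remove a resource purely by changing the state value. This sketch follows the cisco.dcnm collection's convention; the fabric and VRF names are illustrative:

```yaml
# Removing the VRF is the same task with a different "state" value
# (supported values in the cisco.dcnm modules include merged, replaced,
# overridden, deleted, and query).
- name: Remove the VRF
  cisco.dcnm.dcnm_vrf:
    fabric: fabric1
    state: deleted
    config:
      - vrf_name: vrf_blue
```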

Terraform follows an infrastructure-as-code approach for executing tasks. We have one main.tf file that contains all the tasks, executed with the terraform plan and apply commands: terraform plan lets the provider verify the tasks and check for errors, and terraform apply executes the automation. To interact with application-specific APIs, Terraform uses providers. Every Terraform configuration must declare a provider, which is installed and used to execute the tasks. Providers power all of Terraform's resource types, and the registry offers modules for quickly deploying common infrastructure configurations. The provider segment has a field where we specify whether the resources are provided by DCNM or NDFC.
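A provider declaration along these lines is what the configuration starts with. This is a hedged sketch based on the public CiscoDevNet provider on the Terraform Registry; the controller URL and credential handling are placeholders:

```hcl
terraform {
  required_providers {
    dcnm = {
      source = "CiscoDevNet/dcnm"   # provider source as published on the registry
    }
  }
}

provider "dcnm" {
  username = "admin"
  password = var.ndfc_password      # kept out of the template via a variable
  url      = "https://10.0.0.10"    # DCNM/NDFC controller address (example)
}
```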

Ansible Code Example

Terraform Code Example

Below are a few examples of how Ansible and Terraform work with NDFC. Using the ansible-playbook command, we can execute our playbook to create a VRF and network.
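A playbook for that VRF-plus-network example might look like the following sketch. It uses the cisco.dcnm collection's dcnm_vrf and dcnm_network modules; the fabric, VRF, network, and VLAN values are illustrative assumptions:

```yaml
# create_overlay.yml – run with: ansible-playbook -i hosts.yml create_overlay.yml
- name: Create a VRF and a network via NDFC
  hosts: ndfc
  gather_facts: false
  tasks:
    - name: Create the VRF
      cisco.dcnm.dcnm_vrf:
        fabric: fabric1
        state: merged
        config:
          - vrf_name: vrf_blue
            vrf_id: 50001

    - name: Create a network attached to the VRF
      cisco.dcnm.dcnm_network:
        fabric: fabric1
        state: merged
        config:
          - net_name: network_web
            vrf_name: vrf_blue
            net_id: 30001
            vlan_id: 2301
```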

(Screenshots: ansible-playbook execution output)

Below is a sample of how a Terraform code execution looks: 

(Screenshot: Terraform execution output)

Infrastructure as Code (IaC) Workflow 


Infrastructure as Code – CI/CD Workflow

One popular way to use Ansible and Terraform is to invoke them from a continuous integration (CI) process and then from a continuous delivery (CD) system upon a successful application build:

◉ The CI asks Ansible or Terraform to run a script that deploys a staging environment with the application.

◉ When the stage tests pass, CD then proceeds to run a production deployment.

◉ Ansible/Terraform can then check out the history from version control on each machine or pull resources from the CI server.

An important benefit highlighted through IaC is the simplification of testing and verification. CI rules out a lot of common issues if we have enough test cases after deploying on the staging network. CD then automatically deploys these changes onto production with a simple click of a button.
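The staging-then-production flow above can be sketched as a GitLab pipeline file. The stage names, the site.yml playbook name, and the VAULT_PASS_FILE variable are illustrative assumptions; the hosts.stage.yml and hosts.prod.yml inventories match the ones used later in this example:

```yaml
# .gitlab-ci.yml – illustrative pipeline sketch
stages:
  - verify-staging
  - deploy-production

verify-staging:
  stage: verify-staging
  script:
    - ansible-playbook -i hosts.stage.yml site.yml --vault-password-file "$VAULT_PASS_FILE"

deploy-production:
  stage: deploy-production
  when: manual          # the repo admin approves this after staging passes
  only:
    - main
  script:
    - ansible-playbook -i hosts.prod.yml site.yml --vault-password-file "$VAULT_PASS_FILE"
```

Marking the production job manual is what turns the approved merge into the "click of a button" deployment described above.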

While Ansible and Terraform have their differences, NDFC supports automation through both equally, and customers can choose either one, or even both.

Terraform and Ansible complement each other in the sense that both are great at handling IaC and the CI/CD pipeline. The virtualized infrastructure configuration stays in sync with changes as they occur in the automation scripts.

There are multiple DevOps software alternatives out there to handle the runner jobs: GitLab, Jenkins, AWS, and GCP, to name a few.

In the example below, we will see how GitLab and Ansible work together to create a CI/CD pipeline. For each code change that is pushed, CI triggers an automated build-and-verify sequence on the staging environment for the given project, which provides feedback to the project's developers. With CD, infrastructure provisioning and production deployment are carried out once the CI verify sequence has been successfully confirmed.

As we have seen above, Ansible works in a similar way to a command-line interpreter: we define a set of commands to run against our hosts in a simple, declarative way. We also have a reset YAML file that we can use to revert all the changes we make to the configuration.

NDFC works along with Ansible and the Gitlab Runner to accomplish a CI/CD Pipeline. 

GitLab Runner is an application that works with GitLab CI/CD to run jobs in a pipeline. Our CI/CD job pipeline runs in a Docker container: we install GitLab Runner onto a Linux server and register a runner that uses the Docker executor. We can also limit the number of people with access to the runner, so pull requests (PRs) can be raised and the merge approved by a select number of people.
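A runner registered with the Docker executor ends up with a config.toml along these lines. This is a sketch: the runner name, URL, token, and container image are illustrative placeholders:

```toml
# /etc/gitlab-runner/config.toml – typically created by a command such as:
#   gitlab-runner register --url https://gitlab.example.com/ \
#       --registration-token <TOKEN> --executor docker --docker-image python:3.10
[[runners]]
  name     = "ndfc-ci-runner"
  url      = "https://gitlab.example.com/"
  executor = "docker"
  [runners.docker]
    image = "python:3.10"   # container image the CI jobs run in
```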

Step 1: Create a repository for the staging and production environments and an Ansible file to keep credentials safe. Here, I have used the ansible-vault command to store the credentials file for NDFC.

Step 2: Create an Ansible file for resource creation. In our case, we have separate main files for staging and production, plus a group_vars folder holding all the information about the resources. The main file pulls the details from the group_vars folder when executed.

(Screenshot: repository structure with the group_vars folder)

Step 3: Create a workflow file and check the output.

As above, our hosts.prod.yml and hosts.stage.yml inventory files act as the main files for implementing resource allocation to production and staging, respectively. Our group_vars folder contains all the resource information, including fabric details, switch information, and overlay network details.

For this example, we will show how adding a network to the overlay.yml file and then committing the change invokes a CI/CD pipeline for the above architecture.

Step 4 (optional): Create a password file. Create a new file called password.txt containing the Ansible Vault password used to encrypt and decrypt the Ansible Vault file.
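The vault setup can be sketched as below. The credentials file path is an illustrative name, not the repo's actual layout:

```shell
# Store the vault password (keep this file out of version control)
echo 'my-vault-password' > password.txt

# Encrypt the NDFC credentials file with it
ansible-vault encrypt group_vars/all/creds.yml --vault-password-file password.txt

# Playbook runs then decrypt it the same way:
#   ansible-playbook -i hosts.stage.yml site.yml --vault-password-file password.txt
```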


Our overlay.yml file currently has two networks, and our staging and production environments have been reset to this state. We will now add our new network, network_db, to the YAML file as below:
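The edit amounts to appending one entry to the network list. The existing network names and the exact keys below are illustrative, since the repo's real schema isn't shown here; only network_db comes from the example itself:

```yaml
# overlay.yml – sketch of the change committed in this example
networks:
  - net_name: network_web      # existing network (illustrative name)
    vlan_id: 2301
  - net_name: network_app      # existing network (illustrative name)
    vlan_id: 2302
  - net_name: network_db       # the new network added in this commit
    vlan_id: 2303
```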

(Screenshot: overlay.yml with the new network added)

First, we make this change to staging by raising a PR; once it has been verified, the repo admin can approve the PR merge, which makes the changes to production.

Once we make these changes to the Ansible file, we create a branch under this repo to which we commit the changes.

After this branch has been created, we raise a PR. This automatically starts the CI pipeline.

(Screenshot: CI pipeline running the staging verification)

Once the staging verification has passed, the admin/manager of the repo can approve the merge, which kicks off the CD pipeline for the production environment.

(Screenshot: CD pipeline deploying to production)

If we check the NDFC GUI, we can see that both staging and production now contain the new network, network_db.

(Screenshot: NDFC GUI showing network_db in staging and production)

Source: cisco.com

Thursday, 2 June 2022

SecureX and Secure Firewall: Integration and Automation to Simplify Security

Cisco Secure Firewall stops threats faster, empowers collaboration between teams, and enables consistency across your on-premises, hybrid, and multi-cloud environments. With an included entitlement for Cisco SecureX, our XDR and orchestration platform, you’ll experience efficiency at scale and maximize your productivity. New streamlined Secure Firewall integrations make it easier to use SecureX capabilities to increase threat detection, save time and provide the rapid and deeper investigations you require. These new features and workflows provide the integration and automation to simplify your security.

Move to the Cloud

The entire suite of Firewall Management Center APIs is now available in the cloud, which means existing APIs can now be executed from the cloud. Cisco makes this even easier by delivering fully operational workflows as well as pre-built drag-and-drop code blocks that you can use to craft your own custom workflows. SecureX is able to proxy API calls from the cloud to the SSE connector embedded in the FMC codebase. This integration between Firewall 7.2 and SecureX provides your Firewall with modern cloud-based automation.

Expedited Integration

We’ve dramatically reduced the amount of time needed to fully integrate Firewall into SecureX. Even existing Firewall customers who use the on-premises Firewall Management Center will be able to upgrade to version 7.2 and start automating and orchestrating in under 15 minutes, a huge time savings! The 7.2 release makes the opportunities for automating your Firewall deployment limitless with our built-in low-code orchestration engine.

Previously, Firewall admins had to jump through hoops to link their smart licensing account with SecureX, which resulted in a very complicated integration process. With the new one-click integration, simply click “Enable SecureX” in your Firewall Management Center and log into SecureX. That’s it! Your Firewalls will be automatically onboarded to SecureX.


Built In Orchestration


Cisco Secure Firewall users now get immense value from SecureX with the orchestration capability built natively into the Firewall. Previously, Firewall admins had to deploy an on-premises virtual machine in vCenter to take advantage of Firewall APIs in the cloud, which was a major hurdle to overcome. With the 7.2 release, orchestration is built right into your existing Firewall Management Center. No on-premises connector is required; SecureX orchestration communicates directly with Firewall APIs, highlighting the power of Cisco-on-Cisco integrations.

Customizable Workflows


PSIRT Impact monitoring  

The PSIRT impact monitoring workflow helps customers streamline their patch-management process to ensure their network is always up to date and not vulnerable to CVEs. This workflow checks for new PSIRTs, determines whether device versions are impacted, and suggests a fixed version to upgrade to. By scheduling this workflow to run once a week, customers can be notified via email of any potential impact from a PSIRT.

Firewall device health monitoring  

This workflow runs every 15 minutes to pull a health report from FMC and proactively notifies customers via email if any devices are unhealthy. Customers can rest assured that their fleet of devices is operating as expected, or be notified of things like high CPU usage, low disk space, or interfaces going down.

Expiry notification for time-based objects 

This workflow highlights the power of automation and showcases what is possible by using the orchestration proxy to call FMC APIs. Managing policy is an ongoing effort, but it can be made easier by introducing automation. This workflow can be run once a week to search through Firewall policies and determine whether any rules are going to expire soon. That makes managing policy much easier, because customers are notified before rules expire and can make changes accordingly.

Response Action: Block URL in access control policy 

This workflow is a one-click response action available from the threat response pivot menu. With the click of a button a URL is added to an object in a block rule of your access control policy. This action can be invoked during an investigation in SecureX or from any browser page using the SecureX browser extension. Reducing time to remediation is a critical aspect of keeping your business secure. This workflow turns a multi-step policy change into a single click by taking advantage of Secure Firewall’s integration with SecureX.

Proven Results


A recent Forrester Economic Impact Study of Secure Firewall shows that deploying these types of workflows in SecureX with Secure Firewall increased operational efficiency.

In fact, SecureX in combination with Secure Firewall helped to dramatically reduce the risk of a material breach. It’s clear that the integration of the two meant a significant time savings for already overburdened teams.


We continue to innovate new features and workflows that prioritize the efficacy of your teams and help drive the security resilience of your organization.

Source: cisco.com

Monday, 30 May 2022

[New] 500-701 VID Certification | Get Ready to Crack Cisco Video Infrastructure Design Exam

Cisco Video Infrastructure VID Exam Description:

This exam tests a candidate's knowledge of the skills needed by a systems engineer to understand a Cisco Video Collaboration Solution.

Cisco 500-701 VID Exam Overview:

Why CCNA Practice Test is Important for CCNA 200-301 Exam


If you want to propel your career in IT and networking by passing the Cisco Certified Network Associate (CCNA) exam, you have made a smart decision! It gives you complete knowledge of all the relevant concepts and topics. You can earn the most sought-after networking certification today by cracking the Cisco CCNA 200-301 exam with the help of CCNA practice tests.

Overview of CCNA 200-301 Exam

Cisco 200-301 is the only exam applicants need to take to receive the CCNA certification. The certification covers a broad spectrum of fundamental skills for IT careers, the latest networking developments, software skills, and job functions.

CCNA 200-301 is a two-hour, closed-book exam with around 90-110 questions; it costs US $300. Cisco has split the syllabus into sections, each with its own objectives and subtopics. The CCNA 200-301 exam topics are mentioned below:

  • Network Fundamentals
  • IP Connectivity
  • Network Access
  • IP Services
  • Security Fundamentals
  • Automation and Programmability

All interested applicants should register via Pearson VUE, the official exam body, to take the exam.

Tips and Tricks to Pass CCNA 200-301 Exam

Many applicants have passed the Cisco certification exams and shared their experiences. To summarize, these are the most frequent and practical suggestions you can consider:

Make a study plan. When you decide to take the CCNA 200-301 exam, you should carefully organize your study plan. Depending on the date you will take the exam, you should devote at least 2-3 hours per day to CCNA 200-301 exam preparation. Designate a specific time for studying and select the topics you have to learn during each session.

1. Study with Updated and Trusted Learning Resources

Cisco’s official training for 200-301 is “Implementing and Administering Cisco Solutions (CCNA) v1.0”. You will find all the information on the vendor’s official website. You can take instructor-led classes (offline or virtual) that combine an interactive part taught by a qualified trainer with a self-study course. You can also take advantage of e-learning materials if you do not require any guidance.

Must Read: CCNA 200-301 Certification: Reasons Why You Should Get It and How

2. Participate in an Online Cisco Community

This is a superb opportunity to get in touch with former exam-takers and learn how they passed the CCNA 200-301 exam successfully. Their guidance is beneficial in organizing your study schedule and deciding whether the CCNA certification is what you require.

3. Attempt CCNA Practice Tests to Complete Your Study Routine

A CCNA practice test will help you evaluate your preparedness and accurately identify your knowledge gaps. The Cisco practice tests provided by NWExam.com mimic the actual exam context, so you will be able to get a feel for the actual CCNA 200-301 exam and become familiar with it.

Why Should You Take CCNA Practice Test?

While a large number of reasons reveal the importance of taking a mock test before the real Cisco exam, it’s worth discussing the best reasons to take the CCNA practice test to achieve a high score in the CCNA 200-301 exam. Let’s explore:

1) Improved Time Management

In the CCNA practice test, a considerable amount of emphasis is put on time, which is definitely one of the essential factors in Cisco exams. Practice tests help you to manage your time competently.

2) It Addresses a Much-Needed Aspect of the Exam, i.e., Revision

All the complicated things you study tend to get more intricate at the end of the day, as it becomes too much to soak up; revision, therefore, can’t be avoided. At this point, the CCNA practice test gives applicants an opportunity to carry out revisions.

3) CCNA Practice Test Kick Your Confidence Into High Gear

In addition to improving your time management and performance, you boost your confidence, as practice helps you understand your weak and strong topics. In brief, you build a positive attitude.

For CCNA 200-301 Question and Answer PDF Click Here.

4) Result of CCNA Practice Test Helps

Attempting the CCNA practice test is very beneficial, but you also need a fair assessment of where you stand at the end. The CCNA practice test serves that purpose. It’s smart to take a practice test on NWExam.com, which equips you with a practical, detailed analysis of your weak and strong areas, along with useful guidelines.

How Many CCNA Practice Tests Should a Cisco 200-301 Exam Taker Solve?

As seasoned professionals suggest, there is no definite number of practice tests applicants should take. Exam-takers should solve as many CCNA practice tests as possible; there’s no limit. CCNA practice tests help improve accuracy, increase confidence, and boost speed.

Conclusion

Passing your CCNA 200-301 exam is a thing of tremendous pride. After obtaining the certification, you can go straight after another Cisco accreditation, stay within your community for more updates, or take a break and enjoy the resulting perks. Take the advice given above, and you’ll be sure to advance your career in networking.

Sunday, 29 May 2022

Enabling Scalable Group Policy with TrustSec Across Networks to Provide More Reliability and Determinism


Cisco TrustSec provides software-defined access control and network segmentation to help organizations enforce business-driven intent and streamline policy management across network domains. It forms the foundation of Cisco Software-Defined Access (SD-Access) by providing a policy enforcement plane based on Security Group Tag (SGT) assignments and dynamic provisioning of the security group access control list (SGACL).

Cisco TrustSec has now been enhanced by Cisco engineers with a broader, cross-domain transport option for network policies. It relies on HTTPS, a Representational State Transfer (REST) API, and the JSON data interchange format for far more reliable and scalable policy updates and segmentation, enabling more deterministic networks. It is a superior choice over the current use of RADIUS over User Datagram Protocol (UDP), which is notorious for packet drops and retries that degrade performance and service guarantees.

Scaling Policy

Cisco SD-Access, Cisco SD-WAN, and Cisco Application Centric Infrastructure (ACI) have been integrated to provide enterprise customers with a consistent cross-domain business policy experience. This necessitated a more robust, reliable, deterministic, and dependable TrustSec infrastructure to meet the increasing scale of SGTs and SGACL policies―combined with high-performance requirements and seamless policy provisioning and updates followed by assured enforcement.

With increased scale, two things are required of policy systems. 

◉ A more reliable SGACL provisioning mechanism. The use of RADIUS/UDP transport is inefficient for moving large volumes of data. It often results in a higher number of round-trip retries due to dropped packets and longer transport times between devices and the Cisco Identity Services Engine (ISE) server. The approach is error-prone and verbose.

◉ Determinism for policy updates. TrustSec uses the RADIUS change of authorization (CoA) mechanism to dynamically notify devices of changes to SGACL policy and environmental data (Env-Data). Devices respond with a request to ISE to update the specified change. These are two seemingly disparate but related transaction flows with the common intent of delivering the latest policy data to the devices. In scenarios with many devices or a high volume of updates, there is a higher risk of packet loss and out-of-order delivery, and it is often challenging to correlate the success or failure of such administrative changes.

More Performant, Scalable, and Secure Transport for Policy 

The new transport option for Cisco TrustSec is based on a system of central administration and distributed policy enforcement, with Cisco DNA Center, Cisco Meraki Enterprise Cloud, or Cisco vManage used as a controller dashboard and Cisco ISE serving as the service point for network devices to source SGACL policies and Env-Data (Figure 1).  

Figure 1 shows the Cisco SD-Access deployment architecture depicting a mix of both old and newer software versions and policy transport options. 

Figure 1. Cisco SD-Access Deployment Architecture with Policy Download Options

Cisco introduced JSON-based HTTP download for policies to ensure 100% delivery with no packet drops and no retries necessary. It improves the scale, performance, and reliability of policy workflows. Using TLS is also more secure than RADIUS/UDP transport. 

The introduction of the REST API for TrustSec data download is an additional protocol option on devices used to interface with Cisco ISE. Based on the system configuration, either of the transport mechanisms can be used to download environment data (Env-Data) and SGACL policies from Cisco ISE.  
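Part of why JSON-over-HTTPS scales better than attribute pairs spread over many RADIUS packets is that the whole SGACL table arrives as one structured document a device can parse and index in a single pass. The snippet below is purely illustrative: the payload field names are invented for illustration and are not the real Cisco ISE REST schema.

```python
import json

# Hypothetical JSON payload shape (invented field names, not the ISE schema).
sample_response = """
{
  "sgacls": [
    {"name": "Allow_Web", "src_sgt": 10, "dst_sgt": 20,
     "aces": ["permit tcp dst eq 443", "deny ip"]},
    {"name": "Deny_All",  "src_sgt": 10, "dst_sgt": 30,
     "aces": ["deny ip"]}
  ],
  "generation_id": "1234"
}
"""

def index_policies(raw):
    """Parse the payload and index each SGACL by its (source SGT, destination SGT) pair."""
    data = json.loads(raw)
    return {(p["src_sgt"], p["dst_sgt"]): p["aces"] for p in data["sgacls"]}

policies = index_policies(sample_response)
print(policies[(10, 20)])   # → ['permit tcp dst eq 443', 'deny ip']
```

The point of the sketch is the transport model: one reliable TLS transfer of a self-describing document, rather than many UDP datagrams whose loss or reordering the device must detect and retry.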

Change of authorization (CoA) is an important server-side function for notifying network devices of updates. Cisco ISE continues to use RADIUS CoA, a lightweight message that notifies devices of SGACL and Env-Data updates. In scenarios with a high number of devices or a high volume of updates, ISE may experience high CPU utilization, because a high volume of CoA requests triggers an equal number of CoA responses and follow-up requests from devices eager to update policies. But the transition of SGACL and Env-Data download to the REST protocol reduces compute and transport time, indirectly providing better CoA performance.

In addition to improved reliability and deterministic policy updates, the REST transport interface has also paved the way for better platform assurance and operational visibility. 

The new policy enforcement plane available with Cisco TrustSec provides a broader, cross-domain transport option for network policies. It’s both a more reliable SGACL provisioning mechanism for larger volumes of data and a more deterministic solution for policy updates. The result is more scalable enforcement of business-driven intent and policy management across network domains.

Source: cisco.com

Saturday, 28 May 2022

Automated Service Assurance at Microsecond Speed

For communication service providers (CSPs), the network trends of cloudification, open, software-based infrastructure, and multi-vendor environments are a double-edged sword. On the plus side, these trends break the long tradition of vendor lock-in, freeing service providers to mix best-of-breed solutions that provide competitive advantages.

But with that freedom comes new and daunting responsibilities. It’s now up to CSPs to ensure that all those disparate solutions, APIs, and network functions work together flawlessly. And in the case of mobile networks, operators have two steep learning curves to climb simultaneously: Open RAN and 5G standalone core networks.

Multi-vendor interoperability challenges highlight the need for vendors to collaborate on solutions that are pre-integrated so they’re ready for flawless deployment. This would free CSPs from the time and expense of performing extensive integration and testing — tasks that delay service launches. Pre-integrated, best-of-breed solutions would also deliver faster time to revenue for those new services. Closed-loop automation with tightly integrated network and service orchestration and assurance is the ultimate goal for efficient operations in this new environment.

Another major benefit is confidence that those services will have the performance and quality of experience (QoE) that customers expect. But to maximize that benefit, operators will need real-time, KPI-level insights into those network components across network domains, as well as the services and customer applications running over them. These insights are key to understanding the customer experience and differentiating services with competitive enterprise SLAs.

Automated Assurance and Orchestration that can Handle SLAs at Scale and Speed

By tightly integrating automated assurance and orchestration, Accedian Skylight with Cisco Crosswork enables closed-loop automation based on end user experiences at microsecond speeds. In addition to real-time insights and actions, the solution enables CSPs to return later and configure their network to fix problems or enhance QoE.

Speed is critical because customers — businesses and consumers — notice within seconds when their connection suddenly slows down or is lost. This puts enormous pressure on CSPs to find and fix these problems as they’re emerging, before customers start to notice. That’s a tall order because operators need to do that 24/7/365 at scale: thousands of types of applications and services with tens or hundreds of millions of simultaneous connections now, and even more in the future as the Internet of Things (IoT) becomes even more prevalent.


Service providers need to act at a microsecond level and it’s a tall mountain to climb, but Cisco and Accedian are here to help.

Accedian Skylight and the Cisco Crosswork Automation platform show what happens in every millisecond and enable service providers to automate intervention, stay in control, and deliver assured customer experience in real time.


Real-time insights are driven through the APIs of the cloud-native, carrier-scale Skylight architecture, which simultaneously collects and correlates critical network performance data at the individual packet level, sourced from efficient sensors in the network that measure latency and packet loss. When milliseconds matter, Accedian and Cisco automation are mission critical.


Source: cisco.com

Friday, 27 May 2022

Perspectives on the Future of SP Networking: Intent and Outcome Based Transport Service Automation

One lesson we could all learn from cloud operators is that simplicity, ease of use, and “on-demand” are now expected behaviors for any new service offering. Cloud operators built their services with modular principles and well-abstracted service interfaces using common “black box” software programming fundamentals, which allow their capabilities to seamlessly snap together while eliminating unnecessary complexity. For many of us in the communication service provider (CSP) industry, those basic principles still need to be realized in how transport service offerings are requested from the transport orchestration layer.

The network service requestor (including northbound BSS/OSS) initiates an “intent” (or call it an “outcome”) and it expects the network service to be built and monitored to honor that intent within quantifiable service level objectives (SLOs) and promised service level expectations (SLEs). The network service requestor doesn’t want to be involved with the plethora of configuration parameters required to deploy that service at the device layer, relying instead on some other function to complete that information. Embracing such a basic principle would not only reduce the cost of operations but also enable new “as-a-Service” business models which could monetize the network for the operator.

But realizing the vision requires the creation of intent-based modularity for the value-added transport services via well-abstracted and declarative service layer application programming interfaces (APIs).  These service APIs would be exposed by an intelligent transport orchestration controller that acts in a declarative and outcome-based way. Work is being done by Cisco in network slicing and network-as-a-service (NaaS) to define this layer of service abstraction into a simplified – yet extensible – transport services model allowing for powerful network automation.

How we got here


Networking vendors build products (routers, switches, etc.) with an extensive set of rich features that we lovingly call “nerd-knobs”. From our early days building the first multi-protocol router, we’ve always taken great pride in our nerd-knob development. Our pace of innovation hasn’t slowed down as we continue to enable some of the richest networking capabilities, including awesome features around segment routing traffic engineering (SR-TE) that can be used to drive explicit path forwarding through the network (more on that later). Yet historically it’s been left to the operator to mold these features together into a set of valuable network service offerings that they then sell to their end customers. Operators also need to invest in building the automation tools required to support highly scalable mass deployments and include some aspects of on-demand service instantiation. While an atomic-level setting of the nerd knobs allows the operator to provide granular customization for clients or services, this level of service design creates complexity in other areas. It drives very long development timelines, service rigidity, and northbound OSS/BSS layer integration work, especially for multi-domain use cases.

With our work in defining service abstraction for NaaS and network slicing and the proposed slicing standards from the Internet Engineering Task Force (IETF), consumers of transport services can soon begin to think in terms of the service intent or outcome and less about the complexity of setting feature knobs on the machinery required to implement the service at the device level. Transport automation is moving towards intent, outcome, and declarative-based service definitions where the service user defines the what, not the how.

In the discussion that follows, we’ll define the attributes of the next-generation transport orchestrator based on what we’ve learned from user requirements. Figure 1 below illustrates an example of the advantages of the intent-based approach weaving SLOs and SLEs into the discussion. Network slicing, a concept inspired by cellular infrastructure, is introduced as an example of where intent-based networking can add value.

Figure 1. Increased confidence with transport services

What does success look like?


The next-generation transport orchestrator should be closed loop-based and implement these steps:

1. Accept an intent-based request to instantiate a new transport service that meets specific SLEs/SLOs

2. Map the service intent into discrete changes, validate the proposed changes against available resources and assurance, then implement them (including the service assurance tooling for monitoring)

3. Monitor the health of the service with operational intelligence and service assurance tools, and report

4. Observe and signal out-of-tolerance SLO events through insights

5. Determine recommended remediations/optimizations with AI tooling drawing on global model data and operational insights

6. Implement recommendations automatically or pass them to a human for approval

7. Return to monitoring mode
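The monitoring-and-remediation portion of these steps (3 through 7) can be sketched as a simple control loop. This is an illustrative sketch only; the class and field names below are hypothetical and do not correspond to a real Crosswork or IETF API.

```python
# Illustrative sketch of the closed-loop steps above; all names are
# hypothetical, not a real controller API.
from dataclasses import dataclass

@dataclass
class ServiceIntent:
    endpoints: list   # service demarcation points (SDPs)
    slo: dict         # e.g. {"latency_ms": 10.0, "loss_pct": 0.01}
    sle: str          # SLE catalog entry, e.g. "lowest-latency"

def closed_loop_step(intent, telemetry, auto_remediate=True):
    """One pass of the monitor -> detect -> remediate cycle (steps 3-7)."""
    # Step 4: compare measured telemetry against the SLO thresholds
    violations = {k: v for k, v in telemetry.items()
                  if k in intent.slo and v > intent.slo[k]}
    if not violations:
        return "in-tolerance"                         # step 7: keep monitoring
    # Step 5: derive a recommendation (a real system would use AI tooling)
    recommendation = f"re-optimize paths for {sorted(violations)}"
    if auto_remediate:
        return f"applied: {recommendation}"           # step 6, automatic
    return f"pending approval: {recommendation}"      # step 6, human-in-loop

intent = ServiceIntent(endpoints=["sdp-1", "sdp-2"],
                       slo={"latency_ms": 10.0, "loss_pct": 0.01},
                       sle="lowest-latency")
print(closed_loop_step(intent, {"latency_ms": 8.2, "loss_pct": 0.001}))
print(closed_loop_step(intent, {"latency_ms": 14.6, "loss_pct": 0.001}))
```

The point of the sketch is the separation of concerns: the loop only ever compares telemetry to the declared intent, never to device-level configuration.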

Figure 2 shows an example of intent-based provisioning automation. On the left, we see the traditional transport orchestration layer that provides very little service abstraction. The service model is simply an aggregation point for network device provisioning that exposes the many ‘atomic-level’ parameters required to be set by northbound OSS/BSS layer components. The example shows provisioning an L3VPN service with quality of service (QoS) and SR-TE policies, but it’s only possible to proceed atomically. It requires the higher layers to compose the service, including resource checks and building the service assurance needs, and then to perform ongoing change control such as updating and eventually deleting the service (which may require some order of operations). The service monitoring and telemetry required to do any service-level assurance are an afterthought, built separately, and not easily integrated into the service itself. The higher-layer service orchestration would need to be custom-built to integrate all these components and wouldn’t be very flexible for new services.

Figure 2. Abstracting the service intent

On the right side of Figure 2, we see a next-gen transport service orchestrator which is declarative and intent-based. The user specifies the desired outcome (in YANG via a REST/NETCONF API), which is to connect a set of network endpoints, also called service demarcation points (SDPs), in an any-to-any way while meeting a specific set of SLO requirements around latency and loss. The idea here is to express the service intent in a well-defined, YANG-modeled way directly based on the user’s connectivity and SLO/SLE needs. This transport service API is programmable, on-demand, and declarative.
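To make the contrast concrete, such an intent request might look something like the payload below. This is a hypothetical sketch loosely inspired by the IETF network slice service idea; the exact leaf names, interface names, and units are assumptions, not the real YANG model.

```python
import json

# Hypothetical intent payload, loosely modeled on the IETF network slice
# service concept. Leaf names, node names, and interfaces are illustrative.
slice_request = {
    "network-slice-service": {
        "id": "slice-gold-01",
        "sdps": [
            {"id": "sdp-1", "node": "pe1", "attachment": "GigabitEthernet0/0/0/1"},
            {"id": "sdp-2", "node": "pe2", "attachment": "GigabitEthernet0/0/0/3"},
        ],
        "connectivity": "any-to-any",
        "slo": {"max-latency-ms": 10, "max-loss-pct": 0.01,
                "min-bandwidth-mbps": 500},
        "sle": "lowest-latency",
    }
}

# In practice, RESTCONF would carry JSON-encoded YANG data like this;
# here we just show the serialized form.
print(json.dumps(slice_request, indent=2))
```

Note what is absent: no route targets, no QoS policy maps, no SR-TE colors. Those are derived by the controller from the declared intent.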

Cisco, Cisco Exam Prep, Cisco Certification, Cisco Learning, Cisco Career, Cisco Prep, Cisco News, Cisco Certifications
Figure 3. IETF slice framework draft definitions

The new transport service differentiator: SLOs and SLEs


So how will operators market and differentiate their new transport service offerings? While posting what SLOs can be requested will certainly be a part of this (requesting quantifiable bandwidth, latency, reliability, and jitter metrics), the big differentiators will be the set of SLE “catalog entries” they provide. SLEs are where “everything else” is defined as part of the service intent. What type of SLEs can we begin to consider? See Table 1 below for some examples. Can you think of some new ones? The good news is that operators can flexibly define their own SLEs and map those to explicit forwarding behaviors in the network to meet a market need.
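Because operators define their own SLEs, the catalog is effectively a mapping from a named service expectation to an underlay forwarding behavior. The sketch below illustrates that idea; the catalog entries and policy attributes are invented examples, not the contents of Table 1 or any real controller schema.

```python
# Illustrative mapping of operator-defined SLE catalog entries to the
# forwarding behavior that realizes them. Entries and attribute names
# are examples only.
SLE_CATALOG = {
    "best-effort":    {"sr-te-color": None, "metric": "igp"},
    "lowest-latency": {"sr-te-color": 100,  "metric": "latency"},
    "avoid-region-x": {"sr-te-color": 200,  "metric": "igp",
                       "exclude-affinity": ["region-x"]},
    "encrypted-core": {"sr-te-color": 300,  "metric": "igp",
                       "include-affinity": ["macsec"]},
}

def realize_sle(sle_name):
    """Return the underlay policy the controller would program for an SLE."""
    policy = SLE_CATALOG.get(sle_name)
    if policy is None:
        raise ValueError(f"no catalog entry for SLE {sle_name!r}")
    return policy

print(realize_sle("lowest-latency"))
```

Adding a new market offering then amounts to adding a catalog entry and its mapping, rather than rebuilding the northbound integration.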

Table 1. Sample SLE offerings

Capabilities needed in the network


The beauty of intent-based networking is that the approach treats the network as a “black box” that hides detailed configuration from the user. With that said, we still need those “nerd-knobs” at the device layer to realize the services (though abstracted by the transport controller in a programmable way). At Cisco, we’ve developed a transport controller called Crosswork Network Controller (CNC) which works together with an IP-based network utilizing BGP-based VPN technology for the overlay connectivity along with device-layer QoS and SR-TE for the underlay SLOs/SLEs. We’re looking to continue enhancing CNC to meet the full future vision of networking intent and closed loop.

While BGP VPNs (for both L2 and L3), private-line emulation (for L1), and packet-based QoS are well-known industry technologies, we should expound on the importance of SR-TE. SR-TE allows for a very surgical network path forwarding capability that’s much more scalable than earlier approaches. All the services shown in Table 1 will require some aspect of explicit path forwarding through the network. Also, to meet specific SLOs (such as bandwidth and latency), dictating and managing specific path forwarding behavior will be critical to understanding resource availability against resource commitments. Our innovation in this area includes an extensive set of PCE and SR-TE features such as flexible algorithm, automated steering, and “on-demand next-hop” (ODN) as shown in Figure 4.

Figure 4. Intent-based SR-TE with Automated Steering and ODN

With granular path control capabilities, the transport controller, which includes an intelligent path computation element (PCE), can dynamically change the path to keep within the desired SLO boundaries depending on network conditions. This is the promise of software-defined networking (SDN), but when using SR-TE at scale in a service provider-class network, it’s like SDN for adults!
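The PCE’s role in this paragraph reduces to a constrained selection problem: among the explicit paths SR-TE makes available, pick one that stays inside the SLO boundary. The sketch below illustrates that selection; the topology, node names, and latency figures are made up for illustration.

```python
# Minimal PCE-style path selection sketch: among candidate explicit paths,
# pick one that keeps the service inside its latency SLO. Topology and
# latency numbers are invented for illustration.
def pick_path(candidates, slo_latency_ms):
    """candidates: list of (path, measured_latency_ms) tuples."""
    feasible = [(p, lat) for p, lat in candidates if lat <= slo_latency_ms]
    if not feasible:
        return None  # no compliant path: signal an out-of-tolerance event
    # Prefer the lowest-latency path among the feasible ones
    return min(feasible, key=lambda x: x[1])[0]

candidates = [
    (["pe1", "p3", "pe2"], 12.4),        # fewest hops, but too slow today
    (["pe1", "p5", "p6", "pe2"], 7.9),
    (["pe1", "p7", "pe2"], 9.1),
]
print(pick_path(candidates, slo_latency_ms=10.0))
```

As network conditions change the measured latencies, re-running the same selection is what lets the controller steer the service back inside its SLO.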

Given the system is intent-based, it should also be declarative. If the user wanted to switch from SLE No.1 to SLE No.2 (go from a “best effort” latency-based service to a lowest-latency-based service), then that should be a simple change in the top-level service model request. The transport controller will then determine the changes required to implement the new service intent and alter only what’s needed at the device level (called a minimum-diff operation). This is NOT implemented as a complete deletion of the original service followed by a new service instantiation. Instead, it’s a modify-what’s-needed implementation. This approach thus allows for on-demand changes which offer the cloud-like flexibility consumers are looking for, including time-of-day and reactionary-based automation.
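The minimum-diff idea can be sketched as a comparison between the currently rendered device-level state and the state the new intent requires, emitting only the delta. The keys and values below are illustrative, not a real rendered configuration.

```python
# Sketch of a "minimum-diff" operation: compute only the device-level
# changes needed to move from the current rendered state to the new
# intent, rather than delete-and-recreate. Keys and values are examples.
def minimum_diff(current, desired):
    changes = {}
    for key in current.keys() | desired.keys():
        if current.get(key) != desired.get(key):
            changes[key] = {"from": current.get(key), "to": desired.get(key)}
    return changes

# SLE change: best-effort latency -> lowest latency (SLE No.1 -> No.2).
# Only the SR-TE steering changes; the VPN and QoS pieces are untouched.
current = {"vpn": "l3vpn-42", "qos-profile": "gold", "sr-te-color": None}
desired = {"vpn": "l3vpn-42", "qos-profile": "gold", "sr-te-color": 100}
print(minimum_diff(current, desired))
```

In this example the overlay VPN and QoS profile survive the change untouched, which is exactly why the service experiences a modify rather than a delete-and-recreate.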

Even the standards bodies are getting on board


The network slicing concept, initially defined by 3GPP TS 23.501 for 5G services as “a logical network that provides specific network capabilities and network characteristics”, was the first to mandate requesting a service in an intent-based way, based on specific SLOs. This approach has become a generic desire for any network service (not just 5G), and for the transport domain most service providers look to the IETF for standards definitions. The IETF is working on various drafts to give vendors and operators common definitions and service models for intent-based transport services (called IETF Network Slice Services). These drafts include: Framework for IETF Network Slices and IETF Network Slice Service YANG Model.

Figure 5. IETF network slice details

Conclusion


We envision a future where transport network services are requested based on outcomes and intents, in a simplified and on-demand fashion. This doesn’t mean the transport network devices will lose rich functionality – far from it. The “nerd-knobs” will still be there! Rich device features (such as VPN, QoS, and SR-TE) and PCE-level functionality will still be needed to provide the granular control required to meet the desired service objectives and expectations, yet the implementation will now be abstracted into more consumable and user-oriented service structures by the intent-based next-gen transport orchestrator.

This approach is consistent with the industry’s requirements on 5G network slicing and for what some are calling NaaS, which is desired by application developers. In all cases, we see no difference in that the service is requested as an outcome that meets specific objectives for a business purpose. Vendors like us are working to develop the proper automation and orchestration systems for both Cisco and third-party device support to realize this future of networking vision into enhanced, on-demand, API-driven, operator-delivered transport services.

Source: cisco.com