Saturday, 25 March 2023

Designing and Deploying Cisco AI Spoofing Detection – Part 2

AI Spoofing Detection Architecture and Deployment

Our previous blog post, Designing and Deploying Cisco AI Spoofing Detection, Part 1: From Device to Behavioral Model, introduced a hybrid cloud/on-premises service that detects spoofing attacks using behavioral traffic models of endpoints. In that post, we discussed the motivation and need for this service and the scope of its operation. We then provided an overview of our Machine Learning development and maintenance process. This post details the global architecture of Cisco AISD, its mode of operation, and how IT teams can incorporate its results into their security workflows.

Since Cisco AISD is a security product, minimizing detection delay is of significant importance. With that in mind, several infrastructure choices were designed into the service. Most Cisco AI Analytics services use Spark as a processing engine. However, in Cisco AISD, we use an AWS Lambda function instead of Spark because the warmup time of a Lambda function is typically shorter, enabling quicker generation of results and, therefore, a shorter detection delay. While this design choice reduces the computational capacity of the process, that has not been a problem thanks to a custom-made caching strategy that reduces processing to only new data on each Lambda execution.
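The incremental idea behind that caching strategy can be sketched in a few lines. This is an illustrative toy, assuming a per-customer watermark cache; the names and structure are not Cisco's actual implementation:

```python
# Toy version of incremental processing: each run records a high-water
# mark (the newest event time it has seen) so the next run touches only
# newer records. In a real Lambda this cache would live outside the
# function (e.g., a small table or object store), not in process memory.
from datetime import datetime, timezone

_watermarks = {}  # customer_id -> last processed event timestamp

def process_new_events(customer_id, events):
    """Return only events newer than the cached watermark, then advance it."""
    last_seen = _watermarks.get(customer_id, datetime.min.replace(tzinfo=timezone.utc))
    fresh = [e for e in events if e["ts"] > last_seen]
    if fresh:
        _watermarks[customer_id] = max(e["ts"] for e in fresh)
    return fresh  # only this slice goes through feature extraction
```

Calling it twice with the same events processes everything once and nothing the second time, which is the property that keeps each Lambda execution short.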

Global AI Spoofing Detection Architecture Overview

Cisco AISD is deployed on a Cisco DNA Center network controller using a hybrid architecture of an on-premises controller tethered to a cloud service. The service consists of on-premises processes as well as cloud-based components.

The on-premises components on the Cisco DNA Center controller perform several vital functions. On the outbound data path, the service continually receives and processes raw data captured from network devices, anonymizes customer PII, and exports it to cloud processes over a secure channel. On the inbound data path, it receives any new endpoint spoofing alerts generated by the Machine Learning algorithms in the cloud, deanonymizes any relevant customer PII, and triggers any Changes of Authorization (CoA) via Cisco Identity Services Engine (ISE) on affected endpoints.
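As a toy model of that outbound/inbound PII handling, consider keyed pseudonymization where the key never leaves the controller: the cloud sees only opaque tokens, and the on-prem side restores identities when alerts come back. This is an illustration of the pattern, not Cisco's actual scheme:

```python
# Illustrative pseudonymization with a site-local key. The key and the
# reverse mapping stay on-premises; only tokens are exported to the cloud.
import hmac
import hashlib

class PiiAnonymizer:
    def __init__(self, site_key: bytes):
        self._key = site_key   # never leaves the on-prem controller
        self._reverse = {}     # token -> original, for inbound alerts

    def anonymize(self, identifier: str) -> str:
        """Outbound path: replace an identifier (e.g., a MAC) with a token."""
        token = hmac.new(self._key, identifier.encode(), hashlib.sha256).hexdigest()[:16]
        self._reverse[token] = identifier
        return token

    def deanonymize(self, token: str) -> str:
        """Inbound path: restore the identifier referenced by a cloud alert."""
        return self._reverse[token]
```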

The cloud components perform several key functions focused primarily on processing the high-volume data flowing from all on-premises deployments and running Machine Learning inference. In particular, the evaluation and detection mechanism has three steps:

1. Apache Airflow is the underlying orchestrator and scheduler to initiate compute functions. An Airflow DAG frequently enqueues computation requests for each active customer to a queuing service.

2. As each computation request is dequeued, a corresponding serverless compute function is invoked. Using serverless functions enables us to control compute costs at scale. This is a highly efficient, multi-step, compute-intensive, short-running function that performs an ETL step: it reads raw anonymized customer data from data buckets and transforms it into a set of input feature vectors to be used for inference by our Machine Learning models for spoof detection. The function is built on the Function-as-a-Service offerings that major cloud providers have in common.

3. This function then also performs the model inference step on the feature vectors produced in the previous step, ultimately leading to the detection of spoofing attempts if they are present. If a spoof attempt is detected, the details of the finding are pushed to a database that is queried by the on-premises components of Cisco DNA Center and finally presented to administrators for action.
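A compressed sketch of steps 2 and 3 might look like the following. The feature set, model interface, and alert store are stand-ins chosen for illustration, not the actual AISD internals:

```python
# Toy ETL + inference pass: aggregate raw records into per-endpoint
# feature vectors, score each vector with a model, and persist any
# detections for the on-prem side to query.
def run_inference_job(raw_records, model, alert_store):
    # ETL: raw anonymized records -> per-endpoint feature vectors
    features = {}
    for rec in raw_records:
        vec = features.setdefault(rec["endpoint"], {"flows": 0, "bytes": 0})
        vec["flows"] += 1
        vec["bytes"] += rec["bytes"]
    # Inference: flag endpoints the model scores as spoofing
    for endpoint, vec in features.items():
        if model(vec):
            alert_store.append({"endpoint": endpoint, "features": vec})
    return len(alert_store)
```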

Figure 1: Schematic view of Cisco AISD cloud and on-premises components.

Figure 1 captures a high-level view of the Cisco AISD components. Two components, in particular, are central to the cloud inferencing functionality: the Scheduler and the serverless functions.

The Scheduler is an Airflow Directed Acyclic Graph (DAG) responsible for triggering the serverless function executions on active Cisco AISD customer data. The DAG runs at high-frequency intervals, pushing events into a queue and triggering the inference function executions. The DAG executions prepare all the metadata for the compute function: determining customers with active flows, grouping compute batches based on telemetry volume, optimizing the compute process, and so on. The inferencing function performs ETL operations, model inference, detection, and storage of any spoofing alerts. This compute-intensive process implements much of the intelligence for spoof detection. As our ML models get retrained regularly, this architecture enables the quick rollout—or rollback if needed—of updated models without any change or impact on the service.
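As an illustration of the batching the DAG performs, here is a toy grouping of active customers into compute batches capped by telemetry volume. The cap and the record fields are invented for the example:

```python
# Greedy batching: sort customers by telemetry volume (largest first)
# and pack them into batches whose total volume stays under a cap, so
# each serverless invocation receives a bounded amount of work.
def build_batches(customers, max_batch_volume):
    batches, current, current_vol = [], [], 0
    for cust in sorted(customers, key=lambda c: c["volume"], reverse=True):
        if current and current_vol + cust["volume"] > max_batch_volume:
            batches.append(current)
            current, current_vol = [], 0
        current.append(cust["id"])
        current_vol += cust["volume"]
    if current:
        batches.append(current)
    return batches
```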

The inference function executions have a stable average runtime of approximately 9 seconds, as shown in Figure 2, which, as stipulated in the design, does not introduce any significant delay in detecting spoofing attempts.

Figure 2: Average lambda execution time in milliseconds for all Cisco AISD active customers between Jan 23rd and Jan 30th

Cisco AI Spoofing Detection in Action


In this blog post series, we described the internal design principles and processes of the Cisco AI Spoofing Detection service. However, from a network operator’s point of view, all these internals are entirely transparent. To start using the hybrid on-premises/cloud-based spoofing detection system, Cisco DNA Center Admins need to enable the corresponding service and cloud data export in Cisco DNA Center System Settings for AI Analytics, as shown in Figure 3.

Figure 3: Enabling Cisco AI Spoofing Detection is very simple in Cisco DNA Center.

Once enabled, the on-prem component in the Cisco DNA Center starts to export relevant data to the cloud that hosts the spoof detection service. The cloud components automatically start the process of scheduling the model inference runs, evaluating the ML spoofing detection models against incoming traffic, and raising alerts when spoofing attempts on a customer endpoint are detected. When the system detects spoofing, the Cisco DNA Center in the customer's network receives an alert with the relevant information. An example of such a detection is shown in Figure 4. In the Cisco DNA Center console, the network operator can set options to execute pre-defined containment actions for endpoints marked as spoofed: shut down the port, flap the port, or re-authenticate the port.

Figure 4: Example of alert from an endpoint that was initially classified as a printer.

Protecting the Network from Spoofing Attacks with Cisco DNA Center


Cisco AI Spoofing Detection is one of the newest security benefits provided to Cisco DNA Center operators with a Cisco DNA Advantage license. To simplify managing complex networks, AI and ML capabilities are being woven throughout the Cisco network management ecosystem of controllers and network fabrics. Along with the new Cisco AISD, Cisco AI Network Analytics, Machine Reasoning Engine Workflows, Networking Chatbots, Group-Based Policy Analytics, and Trust Analytics are additional features that work together to simplify management and protect network endpoints.

Source: cisco.com

Tuesday, 21 March 2023

Designing and Deploying Cisco AI Spoofing Detection – Part 1

The network faces new security threats every day. Adversaries are constantly evolving and using increasingly novel mechanisms to breach corporate networks and hold intellectual property hostage. Breaches and security incidents that make the headlines are usually preceded by considerable reconnaissance by the perpetrators. During this phase, one or several compromised endpoints in the network are typically used to observe traffic patterns, discover services, determine connectivity, and gather information for further exploitation.

Compromised endpoints are legitimately part of the network but are typically devices that do not have a healthy cycle of security patches, such as IoT controllers, printers, or custom-built hardware running custom firmware or an off-the-shelf operating system that has been stripped down to run on minimal hardware resources. From a security perspective, the challenge is to detect when a compromise of these devices has taken place, even if no malicious activity is in progress.

In the first part of this two-part blog series, we discuss some of the methods by which compromised endpoints can get access to restricted segments of the network and how Cisco AI Spoofing Detection is designed to detect such endpoints by modeling and monitoring their behavior.

Part 1: From Device to Behavioral Model

One of the ways modern network access control systems allow endpoints into the network is by analyzing identity signatures generated by the endpoints. Unfortunately, a well-crafted identity signature generated from a compromised endpoint can effectively spoof the endpoint to elevate its privileges, allowing it access to previously unauthorized segments of the network and sensitive resources. This behavior can easily slip detection as it’s within the normal operating parameters of Network Access Control (NAC) systems and endpoint behavior. Generally, these identity signatures are captured through declarative probes that contain endpoint-specific parameters (e.g., OUI, CDP, HTTP, User-Agent). A combination of these probes is then used to associate an identity with endpoints.
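To make the probe-based identification concrete, here is a deliberately naive classifier over declared attributes. The rule table is invented; the point is that every input is endpoint-controlled, which is exactly what makes spoofing possible:

```python
# Toy declarative-probe classifier: the identity is derived entirely
# from values the endpoint itself declares (User-Agent, CDP platform),
# so a crafted signature maps to whatever class the attacker wants.
def classify_endpoint(probes: dict) -> str:
    ua = probes.get("http_user_agent", "").lower()
    cdp = probes.get("cdp_platform", "").lower()
    if "printer" in ua or "printer" in cdp:
        return "Printer"
    if "ip phone" in cdp:
        return "IP Phone"
    return "Unknown"
```

A compromised host that merely sends `{"http_user_agent": "ACME-Printer/1.0"}` (a made-up value) would be classified as a printer by rules like these.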

Any probe that can be controlled (i.e., declared) by an endpoint is subject to being spoofed. Since, in some environments, the endpoint type is used to assign access rights and privileges, this type of spoofing attempt can lead to critical security risks. For example, if a compromised endpoint can be made to look like a printer by crafting the probes it generates, then it can get access to the printer network/VLAN with access to print servers that in turn could open the network to the endpoint via lateral movements.

There are three common ways in which an endpoint on the network can get privileged access to restricted segments of the network:

1. MAC spoofing: an attacker impersonates a specific endpoint to obtain the same privileges.

2. Probe spoofing: an attacker forges specific packets to impersonate a given endpoint type.

3. Malware: a legitimate endpoint is infected with a virus, trojan, or other types of malware that allows an attacker to leverage the permissions of the endpoint to access restricted systems.

Cisco AI Spoofing Detection (AISD) focuses primarily on the detection of endpoints employing probe spoofing, most instances of MAC spoofing, and some cases of Malware infection. Contrary to the traditional rule-based systems for spoofing detection, Cisco AISD relies on behavioral models to detect endpoints that do not behave as the type of device they claim to be. These behavioral models are built and trained on anonymized data from hundreds of thousands of endpoints deployed in multiple customer networks. This Machine Learning-based, data-driven approach enables Cisco AISD to build models that capture the full gamut of behavior of many device types in various environments.

Figure 1: Types of spoofing. AISD focuses primarily on probe spoofing and some instances of MAC spoofing.

Creating Benchmark Datasets


As with any AI-based approach, Cisco AISD relies on large volumes of data for a benchmark dataset to train behavioral models. Of course, as networks add endpoints, the benchmark dataset changes over time. New models are built iteratively using the latest datasets. Cisco AISD datasets for models come from two sources.

◉ Cisco AI Endpoint Analytics (AIEA) data lake. This data is sourced from Cisco DNA Center with Cisco AI Endpoint Analytics and Cisco Identity Services Engine (ISE) and stored in a cloud database. The AIEA data lake consists of a multitude of endpoint information from each customer network. Any personally identifiable information (PII) or other identifiers, such as IP and MAC addresses, is encrypted at the source before it is sent to the cloud. This is a novel mechanism used by Cisco in a hybrid cloud tethered controller architecture, where the encryption keys are stored at each customer’s controller.
◉ Cisco AISD Attack data lake contains Cisco-generated data consisting of probe and MAC spoofing attack scenarios.

To create a benchmark dataset that captures endpoint behaviors under both normal and attack scenarios, data from both data lakes are mixed, combining NetFlow records and endpoint classifications (EPCL). We use the EPCL data lake to categorize the NetFlow records into flows per logical class. A logical class encompasses device types in terms of functionality, e.g., IP Phones, Printers, IP Cameras, etc. Data for each logical class are split into train, validation, and test sets. We use the train split for model training and the validation split for parameter tuning and model selection. We use test splits to evaluate the trained models and estimate their generalization capabilities to previously unseen data.
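A minimal deterministic version of that split might look like this, assuming each record carries a logical_class label; the 70/15/15 ratios are illustrative, not the ones Cisco uses:

```python
# Group records by logical class, then split each class into
# train/validation/test sets with a fixed seed for reproducibility.
import random
from collections import defaultdict

def split_by_class(records, seed=13):
    by_class = defaultdict(list)
    for rec in records:
        by_class[rec["logical_class"]].append(rec)
    splits = {}
    for cls, recs in by_class.items():
        rng = random.Random(seed)   # deterministic shuffle per class
        rng.shuffle(recs)
        n = len(recs)
        n_train, n_val = int(n * 0.7), int(n * 0.15)
        splits[cls] = {
            "train": recs[:n_train],
            "val": recs[n_train:n_train + n_val],
            "test": recs[n_train + n_val:],
        }
    return splits
```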

Benchmark datasets are versioned, tagged, and logged using Comet, a Machine Learning Operations (MLOps) and experiment tracking platform that Cisco development leverages for several AI/ML solutions. Benchmark Datasets are refreshed regularly to ensure that new models are trained and evaluated on the most recent variability in customers’ networks.

Figure 2: Benchmark Dataset and Data Split Creation

Model Development and Monitoring


In the model development phase, we use the latest benchmark dataset to build behavioral models for logical classes. Customer sites use the trained models. All training and evaluation experiments are logged in Comet along with the hyper-parameters and produced models. This ensures experiment reproducibility and model traceability and enables audit and eventual governance of model creation. During the development phase, multiple Machine Learning scientists work on different model architectures, producing a set of results that are collectively compared in order to choose the best model. Then, for each logical class, the best models are versioned and added to a Model Registry. With all the experiments and models gathered in one location, we can easily compare the performance of the different models and monitor the evolution of the performance of released models per development phase.

The Model Registry is an integral part of our model deployment process. Inside the Model Registry, models are organized per logical class of devices and versioned, enabling us to keep track of the complete development cycle: the benchmark dataset used, the hyper-parameters chosen, the trained parameters, the obtained results, and the code used for training. The models are deployed in AWS (Amazon Web Services), where the inferencing takes place. We will discuss this process in our next blog post, so stay tuned.

Production models are closely monitored. If the performance of the models starts degrading—for example, they start generating too many false alerts—a new development phase is triggered. That means that we construct a new benchmark dataset with the latest customer data and re-train and test the models. In parallel, we also revisit the investigation of different model architectures.
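The retraining trigger can be pictured as a simple threshold check over recent alert outcomes. The window size and threshold here are arbitrary illustrative choices, not Cisco's production criteria:

```python
# Flag a new development phase when the false-alert rate over a recent
# window of labeled alert outcomes exceeds a threshold.
def needs_retraining(alert_outcomes, threshold=0.2, window=100):
    recent = alert_outcomes[-window:]
    if not recent:
        return False
    false_rate = recent.count("false_positive") / len(recent)
    return false_rate > threshold
```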

Figure 3: Cisco AI Spoofing Detection Model Lifecycle

Next Up: Taking Behavioral Models to Production in Cisco AI Spoofing Detection


In this post, we’ve covered the initial design process for using AI to build device behavioral models using endpoint flow and classification data from customer networks. In part 2 “Taking Behavioral Models to Production in Cisco AI Spoofing Detection” we will describe the overall architecture and deployment of our models in the cloud for monitoring and detecting spoofing attempts.

Source: cisco.com

Monday, 20 March 2023

Top 10 Tips to Pass CCNP Service Provider 350-501 SPCOR Exam

One of the most sought-after certifications in the field is the CCNP Service Provider. It demonstrates core knowledge while allowing you to tailor the certification to your preferred technical field. This post will discuss the CCNP Service Provider 350-501 SPCOR exam. Your proficiency and expertise with service provider solutions are put to the test during this certification exam.

Overview of the Cisco 350-501 SPCOR Exam

To achieve CCNP Service Provider certification, you should pass two exams: a core exam and a concentration exam of your choice.

• The core exam, Implementing and Operating Cisco Service Provider Networks Core Technologies v1.0 (350-501 SPCOR), highlights your knowledge of core architecture, service provider infrastructure, networking, automation, services, quality of service, security, and network assurance. This core exam is also a prerequisite for the CCIE Service Provider certification, and passing it helps you earn both certificates.

• The concentration exam focuses on the development and industry-specific topics, like VPN services, advanced routing, and automation.

The Implementing and Operating Cisco Service Provider Network Core Technologies v1.0 (SPCOR 350-501) exam is a 120-minute exam consisting of 90-110 questions. This exam is associated with the CCNP Service Provider, CCIE Service Provider, and Cisco Certified Specialist – Service Provider Core certifications.

The exam covers the following topics:

  • Core architecture
  • Services
  • Networking
  • Automation
  • Quality of service
  • Security
  • Network assurance

    Proven Tips to Pass the CCNP Service Provider 350-501 SPCOR Exam

    1. Understand the Exam Objectives

    Understanding its objectives is the first and most crucial step toward passing any certification exam. The CCNP Service Provider 350-501 SPCOR Exam tests your knowledge of implementing, operating, and troubleshooting service provider core technologies. Therefore, it is essential to have a comprehensive understanding of the exam objectives, which can be found on the official Cisco website.

    2. Get Familiar with the Exam Format

    The CCNP Service Provider 350-501 SPCOR Exam consists of 90-110 questions you must answer within 120 minutes. The exam format includes multiple-choice, drag-and-drop, simulation, and testlet questions. Familiarizing yourself with the exam format will help you manage your time efficiently during the exam.

    3. Study the Exam Topics Thoroughly

    Once you clearly understand the exam objectives and format, it’s time to start studying. The official Cisco website provides a comprehensive list of exam topics you must study to pass. Cover all the topics thoroughly and practice hands-on exercises to reinforce your knowledge.

    Also Read: The Best & Ultimate Guide to Pass CCNP service provider 350-501 SPCOR Exam

    4. Use Official Cisco Study Materials

    The best way to prepare for the CCNP Service Provider 350-501 SPCOR Exam is to use official Cisco study materials. These materials are designed specifically for the exam and provide you with in-depth knowledge of the exam topics. You can also use third-party study materials, but ensure they cover all the exam topics.

    5. Join a Study Group

    Joining a study group is an excellent way to prepare for the CCNP Service Provider 350-501 SPCOR Exam. You can discuss exam topics with your peers, exchange ideas and insights, and get feedback on your progress. You can find study groups online or in your local community.

    6. Practice with Exam Simulators

    Exam simulators are an excellent way to prepare for the CCNP Service Provider 350-501 SPCOR Exam. They replicate the exam environment, including the format and difficulty level of the questions, and provide instant feedback on your performance, allowing you to identify your strengths and weaknesses.

    7. Take Practice Tests

    Taking practice tests is an essential part of exam preparation. Practice tests not only help you assess your knowledge of the exam topics but also help you get familiar with the exam format. You can find a wide range of practice tests online or in official Cisco study materials.

    8. Manage Your Time Effectively

    Managing your time effectively during the exam is crucial. Read each question carefully, understand what it asks, and allocate your time accordingly. Don’t spend too much time on difficult questions; move on to easier ones and return to the difficult ones later.

    9. Relax and Stay Focused

    It’s normal to feel nervous before and during the exam. However, it’s essential to stay calm and focused. Take deep breaths, clear your mind, and stay focused on the task at hand. Remember, you’ve prepared well, and you have the knowledge and skills required to pass the exam.

    10. Review Your Answers Carefully

    After you have answered all the 350-501 SPCOR exam questions, review your answers. Candidates often get so eager to finish an exam that they forget to go back and check their work. It may seem redundant, but double-checking helps ensure that each question has been answered completely and thoroughly and that no simple mistakes have slipped through.

    Conclusion

    If you study thoroughly for your CCNP Service Provider 350-501 SPCOR exam and follow the tips in this article, you can succeed. Earning the CCNP Service Provider certification is one of the best ways to advance your career as a network engineer, support engineer, or network technician. Prepare for the exam with the best resources, master the exam concepts, and you will give yourself the best chance of passing on the first attempt.

    Thursday, 16 March 2023

    Cisco SD-WAN: The Right Tool for Keeping Fleets Moving


    When people think of fleets, they often think of a collection of ships cruising across the sea. Modern-day fleets, however, include public transportation, first responders, and service trucks for utilities, ISPs, and equipment or appliance repair. Gone are the days when fleet activities were solely managed by two-way radio voice communication from a dispatcher. In the last 20 years, fleets have been reinvented to digitally connect over wide area networks (WANs) while on the move, sending and receiving important information in real time so they can operate efficiently and reliably. Modern fleets share the need for reliable and available communications, security, and visibility.

    Communication, Reliability, and Visibility


    Whether we are talking about public transportation moving thousands of citizens across the city, first responders racing to the scene of an accident, or service fleets responding to power outages – fleet deployments have a definitive need for secure, reliable network connectivity with high availability that will relay business information in real-time and through the best transport possible in order to maintain operational availability and contain costs.

    Consider the impact of a city bus losing connectivity to computer-aided dispatch and missing route updates or a first responder being unable to access the location of an accident scene. These kinds of scenarios emphasize the criticality of that connectivity being both reliable and available. Just like office applications, these fleet solutions require always-on, reliable WAN connectivity to perform their functions while in motion and while they are wirelessly connected.

    Security as the Backbone of Fleet Management


    Segmentation is critical. Take a city bus for example – several applications are running simultaneously across payment services, predictive maintenance, passenger services, and video analytics. Each requires controlled access so that only the intended persona can access data associated with each service. Using these examples, you’d want only the payment processing company seeing payment information, the maintenance department seeing predictive maintenance data, and security forces seeing video footage. This segmentation enabled by virtual routing and forwarding (VRF) capabilities provides peace of mind for those on the security side while simplifying the day-to-day life for those on the operations side.

    Credentialed access to enterprise networks is also important for service fleets that run applications accessing inventory databases, work order management systems, and even payment systems that are hosted in the enterprise core network. Identity-based security policies on Cisco SD-WAN help ensure that access to sensitive information is automated and scalable to keep the right eyes on the right data.

    Data encryption is critical for first responders, particularly ambulances, due to the additional consideration of patient information. Privacy is necessary to comply with HIPAA guidelines, so stricter measures may need to be taken to ensure the data is secure. Cisco SD-WAN provides peace of mind when it comes to handling sensitive information: 2048-bit encryption keys and underlying traffic encrypted with the AES-256 cipher come standard for devices connected through your routers.

    Operational Efficiency – The Brass Ring


    While all fleet categories require reliable and available connectivity as well as security, transit systems and service fleets are also greatly concerned about operational efficiency. The fact is that most public transit systems operate at a loss and are publicly funded, bringing immense pressure to reduce costs. Service fleets, while typically funded by private industry, are ever vigilant about controlling costs and reducing service call count and duration. Both need automation, consolidation, visibility, and managed airtime costs.

    Cisco SD-WAN is the Answer


    Cisco SD-WAN provides fleet operators with the ability to connect and monitor their fleet vehicles and automate processes while also providing needed visibility, security, and consolidation with the enterprise network.

    The ruggedized industrial routers in the Cisco Catalyst IR Series come SD-WAN ready to meet vehicle environmental conditions. These routers provide reliable and available connectivity through multiple means of transport ranging from 5G and LTE to broadband, Wi-Fi, and even satellite. Cisco SD-WAN also offers configurable failovers between transports to ensure continuous communication. The brain of Cisco SD-WAN, vSmart, can automatically route business critical traffic over high bandwidth links and send lower priority traffic over lower cost links, while the vManage dashboard lets you see it all in real-time.

    Cisco SD-WAN comes standard with a host of security functions that ensure your fleet vehicles and their applications are protected at the same level as the enterprise. Template-based policies make onboarding new devices easy and help bridge your IT and OT teams as well. The main security benefits of using Cisco SD-WAN for fleet management come from:

    ◉ End-to-end segmentation that isolates and protects critical information.
    ◉ Encrypted IPsec tunnels for data privacy.
    ◉ Identity-based policy management for both enterprise and industrial networks.
    ◉ Utilizing Cisco Umbrella for protection against internet-based threats.
    ◉ Security features running directly on the router, including embedded enterprise firewall, IPS, and URL filtering capabilities.

    Cisco SD-WAN provides end-to-end visibility for every application and device across the entire SD-WAN fabric. Between the IoT devices powering your fleet, cloud-native applications used in the office, and every device your employees touch – Cisco SD-WAN provides a consolidated console that is guaranteed to simplify your IT operations. Reliability, availability, security, and visibility are all provided to ensure that your enterprise and fleet vehicles are optimized and protected in any scenario.

    Source: cisco.com

    Tuesday, 14 March 2023

    Perform Web GUI Administration and Configuration with the AXL API

    The AXL Philosophy and Purpose


    We, as programmers, often look at an API with wild dreams about building dazzling user-facing applications that inspire jaw-dropping amazement. That’s just how we’re built. And the AXL API has the power to let you do that.

    One word… DON’T.

    AXL is not an API for user-facing applications. It’s an administration and configuration API. You don’t want to push an end-user application built on AXL to 1,000 users. And if you do, you’re going to have a bad time.

    Think of AXL as a programmatic way to perform web GUI administration and configuration tasks. For example, in the web GUI, you add an end user this way.

    1. Select the User Management menu
    2. Select End User
    3. Click on +Add New
    4. Fill out the form
    5. Save.

    Now, programming that might seem silly and more work than using the web GUI. But think of it this way. You have a text file with a list of names, email addresses, phone numbers, assigned company phone extension and other personal data of new employees. Now you can write an application that reads the file and creates and configures an end-user account for each of the persons and creates and configures lines and phones entries for them. That’s automating an administration and configuration task in a way that makes your life as an administrator easier.
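    The file-driven workflow described above can be sketched as follows. This is a hedged example: the AXL namespace version, the CSV column names, and the user fields are placeholders for illustration, not the exact AXL schema, and actually sending each envelope over HTTPS to the AXL endpoint is left out:

```python
# Read new-hire records from a CSV file and build one addUser SOAP
# envelope per row. Posting the envelopes to the AXL endpoint (with
# authentication) is a separate step not shown here.
import csv

ADD_USER_TEMPLATE = """<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
    xmlns:ns="http://www.cisco.com/AXL/API/current">
  <soapenv:Body>
    <ns:addUser>
      <user>
        <userid>{userid}</userid>
        <firstName>{first}</firstName>
        <lastName>{last}</lastName>
        <telephoneNumber>{extension}</telephoneNumber>
      </user>
    </ns:addUser>
  </soapenv:Body>
</soapenv:Envelope>"""

def build_add_user_requests(csv_path):
    """Turn a CSV of new employees into a list of addUser envelopes."""
    envelopes = []
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            envelopes.append(ADD_USER_TEMPLATE.format(
                userid=row["userid"], first=row["first"],
                last=row["last"], extension=row["extension"]))
    return envelopes
```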

    The Basics


    AXL is a SOAP-based API. There’s no REST for the wicked here.

    The most often used AXL APIs fall into the following groups:

    1. addSomething (e.g., add a phone)
    2. getSomething (e.g., get a phone’s info and settings)
    3. updateSomething (e.g., change a phone’s info and settings)
    4. applySomething (e.g., apply the changes you made for the phone)
    5. removeSomething (e.g., remove a phone)
    6. listSomething (e.g., list all phones)

    There are a few other AXL APIs not in those groups that you’ll need at times, but those are the most frequently used operations.
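    The verb-plus-entity naming above is regular enough to build method names mechanically. A trivial helper, purely for illustration (it performs no network I/O):

```python
# Compose an AXL method name from one of the six common verbs and an
# entity name, e.g. ("list", "phone") -> "listPhone".
AXL_VERBS = ("add", "get", "update", "apply", "remove", "list")

def axl_method(verb: str, entity: str) -> str:
    if verb not in AXL_VERBS:
        raise ValueError(f"unknown AXL verb: {verb}")
    return verb + entity[0].upper() + entity[1:]
```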

    Getting Started: Preparation


    The best way to get familiar with AXL is to use a free, open-source tool called SoapUI. SoapUI makes it easy to experiment with the AXL API. But first, you need to download the files you’ll use with SoapUI.

    Log into Call Manager as an administrator. Under the Application menu, select Plugins.


    Click the Find button (not shown in this screen shot). The first item is the Cisco AXL Toolkit. Click on Download and save it somewhere.


    The saved file should look like this:


    Open the zip file to see its contents


    Open the schema directory.


    Pick the version of Call Manager you are using. In this sample, we’ll pick current.


    Copy the three files above to a working directory. I chose C:\SOAP.


    Download and install the open-source SoapUI from this page. You’re done with preparation. Now, it’s time to create an AXL project to play with the API.

    Set Up a SoapUI AXL Project


    Click on the File menu and choose New SOAP Project.


    Pick a name for your project. Set the Initial WSDL to point to the AXLAPI.wsdl file you saved to a working directory earlier. Click OK.


    In the left column, you should see this (assuming you used the name New AXL Test, otherwise look for the name you chose).


    Right click on AXLAPIBinding and select Show Interface Viewer. You should see this Dialog Box.


    Click on the Service Endpoints tab and you’ll see where you can enter information for AXLAPI binding.


    In the Endpoint field, enter the URL shown, replacing YOURSERVER with the address of your own Call Manager server. If it's safe to do so in your work environment, enter your Administrator username and password in the appropriate fields. You can create an Administrator account in Call Manager specifically for use with the AXL API, or you can use your primary Administrator account.

    You can close this dialog box now.

    Now let’s play with one of the requests. In the left column, find listPhone and click on its plus sign. Then double-click on Request 1. You should see all the XML for this request pop up in a new dialog.


    The listPhone request has a few potential hangups that are good to learn how to avoid. Any listSomething request is going to return, well, a list of things. Scroll down to the bottom of the request XML and you’ll see these options. These give you the option to skip a number of results, or define the starting point. We don’t want to mess with those options right now, so select them and delete them.


    At the top, look for what I have selected here, select it and delete it. This attribute can be useful, and you don’t always have to delete it, but in this case, you’ll need to remove the ‘sequence=”?”’ for the request to work properly.


    There’s one more thing. Get rid of what you see selected in this screen shot. Select it and delete it.


    There are way too many values to specify, so let's chop the request down to look like this. Make sure to put a percent sign in the <name></name> tag. The percent sign is a wildcard, which means the request will list ALL the phones. You want to start simple, so this is a simplified listPhone operation.


    Now’s the time to try it out. Click on the green “run” icon in the upper left. You should see the right side of the request change to this:


    This is an unfortunate bug in the current version of SoapUI. It should show you the XML response by default, but it instead shows you raw information. Until the app is fixed, you’ll have to click on the upper left XML tab to view the response.

    The response might look something like this:


    With that, you now have enough basic knowledge to experiment with any of the AXL APIs. Hey now, you’re an all-star, get your game on, go play.
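    If you later move from SoapUI to a script, the same simplified listPhone exchange is easy to handle with Python's standard library. The response below is a made-up sample (the device names and uuids are invented) shaped like a real listPhone response; the actual HTTP POST is only sketched in comments because it needs a live server.

    ```python
    import xml.etree.ElementTree as ET

    # The trimmed-down request body from the walkthrough: '%' is the wildcard.
    LIST_PHONE_BODY = (
        "<searchCriteria><name>%</name></searchCriteria>"
        "<returnedTags><name/></returnedTags>"
    )
    # To send it, wrap the body in a SOAP envelope and POST it to
    # https://YOURSERVER:8443/axl/ with HTTP Basic auth and a SOAPAction
    # header naming the operation.

    # An invented response in the shape AXL returns for listPhone:
    SAMPLE_RESPONSE = """\
    <soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/">
      <soapenv:Body>
        <ns:listPhoneResponse xmlns:ns="http://www.cisco.com/AXL/API/12.5">
          <return>
            <phone uuid="{11111111-2222-3333-4444-555555555555}">
              <name>SEP001122334455</name>
            </phone>
            <phone uuid="{66666666-7777-8888-9999-000000000000}">
              <name>SEP554433221100</name>
            </phone>
          </return>
        </ns:listPhoneResponse>
      </soapenv:Body>
    </soapenv:Envelope>"""

    # The <phone> children are unqualified, so a plain iter() finds them.
    root = ET.fromstring(SAMPLE_RESPONSE)
    phone_names = [e.text for e in root.iter("name")]
    ```

    The parsing pattern is the same for any listSomething response: find the repeating child elements under <return> and pull out the tags you asked for in <returnedTags>.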

    Programming Tip


    And if you really want to run with the big boys, here's a tip for running multiple AXL requests sequentially. Every time you make an AXL request, Call Manager launches a Tomcat session. When you make many requests in a row, Call Manager launches multiple Tomcat sessions, which use up CPU and RAM.

    Here’s a way around that. At the bottom of the response, open up the headers and you’ll see a cookie named JSESSIONID and its value.


    If you set the JSESSIONID cookie and use the same value for your next AXL request, Call Manager will re-use the Tomcat session instead of launching a new one.
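    A sketch of that reuse pattern in Python (pure standard library; the helper class and its cookie parsing are illustrative, not a Cisco API, and the SOAPAction format shown is what recent AXL versions expect):

    ```python
    class AXLSession:
        """Remembers the JSESSIONID from one AXL response and replays it on the
        next request, so Call Manager reuses the same Tomcat session."""

        def __init__(self):
            self.jsessionid = None

        def request_headers(self, soap_action):
            """Headers for the next AXL POST, including the session cookie if known."""
            headers = {
                "Content-Type": "text/xml; charset=utf-8",
                "SOAPAction": soap_action,
            }
            if self.jsessionid:
                headers["Cookie"] = "JSESSIONID=" + self.jsessionid
            return headers

        def remember(self, set_cookie_header):
            """Pull JSESSIONID out of a Set-Cookie response header value."""
            for part in set_cookie_header.split(";"):
                part = part.strip()
                if part.startswith("JSESSIONID="):
                    self.jsessionid = part.split("=", 1)[1]

    session = AXLSession()
    # Suppose the first response carried:
    #   Set-Cookie: JSESSIONID=ABC123DEF456; Path=/axl; Secure
    session.remember("JSESSIONID=ABC123DEF456; Path=/axl; Secure")
    headers = session.request_headers("CUCM:DB ver=12.5 listPhone")
    ```

    Every subsequent request built with `request_headers` rides the same Tomcat session instead of spawning a new one.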

    What to Avoid and Common Mistakes


    Many requests have a list of optional search parameter tags, commonly <name> and <uuid>. You will usually have to choose one and delete the others.

    As logical as it may seem, you can’t perform a getPhone, change some values, and then copy and paste the modified XML into an updatePhone request. getPhone and updatePhone XML tags are not mirror images.

    Be careful when using APIs that give you direct access to the Call Manager database, like executeSQLQuery. Complicated joins may be clever, but they can also suck up CPU and memory the size of a spy balloon, and that eats into the performance of every other operation.
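    For a sense of the distinction: a narrowly scoped query is usually cheap, while multi-table joins are where the cost creeps in. A hypothetical minimal request body (the table and column names are illustrative examples from the CUCM database schema, not a recommendation):

    ```python
    # A lightweight executeSQLQuery body: one table, one column, no joins.
    SQL_QUERY_BODY = "<sql>select name from device</sql>"

    # By contrast, multi-way joins across large tables (device, enduser,
    # numplan, ...) are the queries that can starve other CUCM operations.
    assert "join" not in SQL_QUERY_BODY
    ```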

    Source: cisco.com

    Saturday, 11 March 2023

    Gain Deeper Insights into your Cisco SD-WAN Deployments with NWPI

    Imagine that you’ve built a house and invested time, money, and effort into it for a long time. You are happy that the house was completed to your satisfaction on time and that you and your family have moved in as planned. Living in your own home has never been so satisfying, and things are going great. After a few months, you find a water leak in your basement, and a tense scramble ensues among family members to get it fixed. You are not a plumber or a contractor, and you don’t know the internal details of the piping, the layout of the walls, or how your architect designed the house. What do you do? You need to hire an expert to first identify the source of the leak, and then spend time and money to fix it, and while waiting for the repairs you have to live with the water continuing to leak.

    But what if you could have a centralized dashboard where you input the location of the seepage, it gives you all the information on the source of the leak, why and how it was caused, whether there were any issues in the architectural design, construction, etc. and a possible solution on how to fix it? Just like any professional architect helping you locate the root cause and faults, IT organizations can derive tremendous value from identifying network issues in their SD-WAN network before there is any impact on users.

    Introducing Network-Wide Path Insights


    Network-Wide Path Insights (NWPI) is a tool natively built into Cisco vManage that helps you find the source of the network issues users face while accessing their applications, whether those applications reside on-premises or in the cloud. NWPI provides greater visibility and deeper insights into your SD-WAN deployment, helping enterprises and managed service providers (MSPs) ensure their networks operate optimally at all times.

    NWPI provides comprehensive analyses of traffic flows in the network with information on applications accessed by users, classification of business-critical flows, monitoring and reporting of network delays, troubleshooting tips, and graphical deep insights into flow analyses.

    Figure 1: Network-Wide Path Insights dashboard

    NWPI gives visual representations of how a packet traverses the network, along with the routing decisions applied as the packet enters and exits the router. It provides visibility and insights at the packet, application, flow, and network levels, with detailed metrics such as jitter, loss, and latency. It can assist your IT teams with performance analysis, network planning, and troubleshooting. For example, NWPI can recommend the best path for an application: Webex voice traffic may be better off taking the internet as a transport route to reach its destination than taking a private MPLS link.

    NWPI monitoring and analyses can be accomplished by triggering a trace for a given set of IP addresses and site IDs in the NWPI UI screen in vManage as shown below in Figure 2:

    Figure 2: NWPI trace creation within Cisco vManage

    When a trace is started, NWPI programs the router at each site to start collecting flow insight data with the filters specified. Your NetOps team can monitor the flow for a particular site ID, a particular VPN, or a particular source and destination IP address. To tune and deploy policies for the applications and domains of interest, the DNS domain discovery knob can be turned on, helping you make effective design decisions before deploying new sites.

    For the duration of the trace, NWPI constantly monitors the traffic entering and exiting the router based on the filters specified. The device sends the collected trace metadata to the vManage console at regular intervals, and vManage correlates data received from multiple routers and data sources for further analysis and reporting. There is little impact on the routers when a trace is started, as all the operations are performed in hardware. The trace thus collected helps you get deeper insights into the flows traversing the device or network.

    NWPI Integration with Cisco ThousandEyes 


    NWPI can integrate with Cisco ThousandEyes (TE) to gain visibility and insight across geographically separated WAN networks and ISPs. The tool can drill down from TE tests to synthetic flows and display a readout of the packets dropped because of congestion in the network.

    Figure 3: Network-Wide Path Insights with Cisco ThousandEyes

    In summary, NWPI is an extremely valuable tool built into the vManage GUI to help your IT organization gain deeper insights and more proactively manage your SD-WAN deployment.

    Source: cisco.com

    Thursday, 9 March 2023

    Cisco Demonstrates Co-packaged Optics (CPO) System at OFC 2023

    THE CASE FOR CO-PACKAGED OPTICS:  LOWER POWER


    As network traffic continues to grow due to higher-bandwidth applications, such as AI/ML (Artificial Intelligence/Machine Learning), high-resolution video streaming, and virtual reality, the strain placed upon data center networks continues to increase. These insatiable demands for more bandwidth are resulting in higher-speed and higher-density optics and ASICs. The aggregate throughput of switch and optics devices is doubling every two to three years, which ultimately doubles the speed, and increases the power, of the host-to-pluggable-optics electrical interface. Unfortunately, while Moore’s Law continues to allow us to scale our chips and equipment (despite non-trivial technical challenges), its corollary, Dennard Scaling, has been dead for a while. This means the power required for each new generation of switch and optics semiconductors is higher than the previous one.

    For Cisco’s Webscale data center customers, this has many implications.  To continue scaling a typical data center built around a fixed electrical power budget, both compute and networking must keep up with these new bandwidth demands, but within the same power envelope as before or face an expensive upgrade.

    In the compute space, the requirement to remain within a fixed power budget has forced changes:

    ◉ Movement from single core to lower frequency multicore CPUs (central processing units)
    ◉ Movement away from general-purpose CPUs to focused accelerators such as GPUs (graphics processing units) for applications like AI/ML inference and training.

    In the networking space, data center topology compromises must occur to remain within the required power envelope, and we must reconsider how we design our equipment.

    ◉ Take a “clean sheet” architectural approach. Cisco’s Silicon One achieves significant power efficiency improvements by rethinking how networking silicon is built.
    ◉ Use the latest silicon process technology to optimize the design
    ◉ Innovate thermal design to reduce the power needed to cool the system
    ◉ Holistically design our silicon, optics, and systems to optimize power consumption and thermal cooling

    However, even with these measures, we needed to keep innovating. This is where co-packaged optics (CPO) come in.

    THE THREE PILLARS OF CO-PACKAGING OPTICS


    Pillar #1 – Removal of a Level of DSPs to Save Power

    As switch system speeds and densities have increased, so has the percentage of the system power consumed by front panel pluggable optics. At 25G/lane and faster speeds, the necessity of active DSP-based retimers has driven up system power.

    One of the key innovations of co-packaged optics is to move the optics close enough to the Switch ASIC die to allow removal of this additional DSP (see Figure 1).


    Pillar #2 –The Remote Light Source

    In traditional pluggable optics, all sub-components of the optics reside in the pluggable modules. But as optics move closer to the ASIC, the partitioning of the optical components is a critical decision.  Lasers are highly sensitive to high temperature and experience increased failure rates when placed in hotter environments (e.g. adjacent to a very hot switch ASIC).  Moving the lasers away from the high power ASIC to cooler locations within the system chassis results in several improvements:

    1. Lasers can be passively cooled to a lower temperature, enabling them to generate optical power per watt more efficiently, lowering system power without active components like a TEC (thermo-electric cooler).

    2. Lasers can be replaced from the chassis faceplate. Since the lasers are the least reliable components of the optics subsystem, making the light source accessible from the system front panel to allow easy insertion and removal is important to ensuring CPO systems have similar MTBF (mean time between failure) to legacy systems.

    3. The industry can standardize on the form factor and design of the remote light source, which allows for multi-vendor sourcing (per the industry-standard MSA for ELSFP, the External Laser Small Form Factor Pluggable). Cisco’s demo at OFC is the first system demo to use this industry-standard form factor.

    Pillar #3 – Production-Proven Silicon Photonics Platform

    To place optical components very close to the Switch ASIC silicon die, two orders of magnitude (over 100x) of miniaturization is required over existing pluggable modules.  To do this, many previously separate ICs (TIA, driver, modulator, mux/demux) must be combined together on a single IC.  Cisco’s Silicon Photonics technology enables this.

    Figure 2 – 4x OSFP800 vs. Cisco 3.2T CPO module (>100x volume reduction)

    In this era of supply chain challenges, it is important to choose a partner with proven, reliable technology. One of Cisco’s advantages in the CPO space is the experience developing, optimizing, and shipping millions of Silicon Photonics-based optical modules.

    Figure 3 – Cisco 106.25Gbps/lane Silicon Photonics SISCAP modulator: (a) first-generation SISCAP configuration; (b) driving SISCAP; (c) measured PAM4 53Gbaud transmit waveform (106.25Gbps) from second-generation SISCAP and driver

    CPO SYSTEM BENEFITS SUMMARY


    As a result of these innovations, the power required for connecting the Switch ASIC to front panel pluggable optics can be reduced by up to 50%, resulting in a total fixed system power reduction of up to 25-30%.

    Figure 4 – 51.2T system power reduction from pluggable to CPO
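    As a back-of-envelope check on those figures (the 55% share below is an assumed illustrative value, not a Cisco measurement): if the electrical interface plus pluggable optics account for roughly half to 60% of fixed system power, a 50% cut in that path yields the quoted 25-30% total reduction.

    ```python
    optics_path_share = 0.55   # assumed fraction of fixed system power (illustrative)
    path_reduction = 0.50      # "up to 50%" cut in ASIC-to-optics power, per the text
    total_reduction = optics_path_share * path_reduction
    # 0.55 * 0.50 = 0.275, i.e. ~27.5%, inside the quoted 25-30% band
    ```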

    CISCO’S OFC DEMONSTRATION OF CO-PACKAGED OPTICS (CPO)


    At OFC 2023, Cisco is proud to demonstrate these next steps – a side-by-side comparison of the real power reduction between:

    ◉ Cisco 8111-32EH, a conventional 32-port 2x400G 1RU router fully populated with 2x400G-FR4 pluggable optics modules (64x400G FR4) based on the Cisco Silicon One G100 ASIC

    ◉ Cisco CPO router populated with a full complement of co-packaged Silicon Photonics-based optical tiles driving 64x400G FR4 also based on the Cisco Silicon One G100 ASIC with CPO substrate

    Figure 5- Cisco’s OFC 2023 CPO Demo System (in 128x400G FR4 chassis configuration)

    KEY ADVANTAGES


    Cisco’s CPO demonstration at OFC 2023 highlights some of the key advantages of Cisco’s technology development.

    Integrated Silicon Photonics Mux/Demux for 400G FR4

    One of the challenges of co-packaging optics is the requirement to miniaturize the optical components to fit on an ASIC package (over 100x lower volume than a conventional QSFP-DD or OSFP module). This requires optics and packaging innovation.

    Any CPO architecture must provide the flexibility to support all data center optics types, including those using parallel single mode fiber, e.g. 4x100G DR4, and CWDM (coarse wave division multiplexing) e.g. 400G FR4.

    400G FR4 uses 4 different wavelengths of light on the same fiber, each carrying 100Gbps. This means 4 different wavelengths need to be combined together.  This is often done using an external lens, which takes up significant volume.

    Cisco has invented an innovative way to do this mux/demux on the Silicon Photonics IC, which we are demonstrating as part of the OFC demo.

    Figure 6 – 400G FR4 module block diagram (highlighting mux/demux)

    Multiple modules running at once

    Integration of optical tiles on the Switch ASIC package requires innovation in package mechanical design (to ensure mechanical reliability), power delivery (to deliver current to both the switch ASIC and the optical tiles in a small area), and thermal cooling (to remove the higher power density).

    Cisco’s demo has a full complement of optical tiles drawing their full power.

    Enhanced thermal design to permit conventional air cooling

    Another challenge of integrating optics with switch ASICs is that while total system power decreases, the thermal density in the center of the system grows, because the optics move from the front panel to the ASIC package.

    Other vendors use liquid cooling to manage this higher thermal density.

    Cisco has worked with key partners to develop advanced heat sink technologies, which allow conventional, reliable air cooling to continue to be used instead of forcing customers to change their infrastructure to support liquid cooling before they want to.

    CISCO IS THE RIGHT CHOICE FOR CO-PACKAGED OPTICS (CPO)


    Cisco recognizes the potential benefits of CPO technology and is investing to ensure we are ready for this inevitable transition.

    However, CPO poses extremely complex problems that system vendors and network operators must solve before significant deployments can begin: it must be reliable, serviceable, deployable, offer significant power savings, and be cost-effective. This is the reason for our demo at OFC. Cisco expects trial deployments coincident with the 51.2Tb switch cycle, followed by larger-scale adoption during the 102.4Tb switch cycle.

    We believe it’s not a matter of if co-packaged optics will occur, but when. Cisco’s expertise in systems, ASICs, and optics makes it one of the few companies that can successfully implement and deploy co-packaged optics in volume. We remain dedicated to investing for this inevitable transition, but realistic that it may still be some ways away.

    Source: cisco.com