Thursday 30 March 2023

Failing Forward – What We Learned at Cisco from a “Failed” Digital Orchestration Pilot


You speak to a customer representative, and they tell you one thing.

You log into your digital account and see another.

You receive an email from the same company that tells an entirely different story.

At Cisco, we have been working to identify these friction points and evaluate how we can orchestrate a more seamless experience—transforming the customer, partner, and seller experience to be prescriptive, helpful – and, most importantly, simple. This is no easy task given the complexity of the environments, technologies, and client spaces in which Cisco does business, but it is not insurmountable.

We just closed out a year-long pilot of an industry-leading orchestration vendor, and by all measures – it failed. In The Lean Startup, Eric Ries writes, “if you cannot fail, you cannot learn.” I fully subscribe to this perspective. If you are not willing to experiment, to try, to fail, and to evaluate your learnings, you only repeat what you know. You do not grow. You do not innovate. You need to be willing to dare to fail, and if you do, to try to fail forward.

So, while we did not renew the contract, we did continue down our orchestration journey equipped with a year’s worth of learnings and newly refined direction on how to tackle our initiatives.

Our Digital Orchestration Goals


We started our pilot with four key orchestration use cases:

1. Seamlessly connect prescriptive actions across channels to our sellers, partners, and customers.
2. Pause and resume a digital email journey based on triggers from other channels.
3. Connect analytics across the multichannel customer journey.
4. Easily integrate data science to branch and personalize the customer journey.

Let’s dive a bit deeper into each. We’ll look at the use case, the challenges we encountered, and the steps forward we are taking.

Use Case #1: Seamlessly connect prescriptive actions across channels to our sellers, partners, and customers.


Today we process and deliver business-defined prescriptive actions to our customer success representatives and partners when we have digitally identified adoption barriers in our customers’ deployment and usage of our SaaS products.

In our legacy state, we were executing a series of complex SQL queries in Salesforce Marketing Cloud’s Automation Studio to join multiple data sets and output the specific actions a customer needs. Then, using Marketing Cloud Connect, we wrote the output to the task object in Salesforce CRM to generate actions in a customer success agent’s queue. After this action is written to the task object, we picked up the log in Snowflake, applied additional filtering logic and wrote actions to our Cisco partner portal – Lifecycle Advantage, which is hosted on AWS.

There are several key issues with this workflow:

◉ Salesforce Marketing Cloud is not meant to be used as an ETL platform; we were already encountering timeout issues.
◉ The partner actions were dependent on the seller processing, which introduced complexity if we ever wanted to pause one workflow while maintaining the other.
◉ The development process was complex, and it was difficult to introduce new recommended actions or to layer on additional channels.
◉ There was no feedback loop between channels, so it was not possible for a customer success representative to see if a partner had taken action or not, and vice versa.

Thus, we brought in an orchestration platform – a place where we can connect multiple data sources through APIs, centralize processing logic, and write the output to activation channels. Pretty quickly in our implementation, though, we encountered challenges with the orchestration platform.

The Challenges

◉ The orchestration platform could not support the complexity of the joins in our queries, so we had to preprocess the actions before they entered the platform and could be routed to their respective activation channels. This was our first pivot. In our technical analysis of the platform, the vendor assured us that our queries could be supported, but in actual practice that proved woefully inaccurate. So we migrated the most complex processing to Google Cloud Platform (GCP) and left only simple logic in the orchestration platform to identify which action a customer required and write it to the correct activation channel.
◉ The user interface abstracted parts of the code, creating dependencies on an external vendor. We spent considerable time trying to decipher what went wrong via trial and error, without access to proper logs.
◉ The connectors were highly specific and required vendor support to set up, modify, and troubleshoot.

Our Next Step Forward

These three challenges forced us to think differently. Our goal was to centralize processing logic and connect to data sources as well as activation channels. We were already leveraging GCP for preprocessing, so we migrated the remainder of the queries to GCP. To manage the APIs that enable data consumption and channel activation, we turned to Mulesoft. The combination of GCP and Mulesoft helped us achieve our first orchestration goal while giving us full visibility into the end-to-end process for implementation and support.
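To make the pattern concrete, here is a minimal sketch of what that flow can look like: the centralized action-selection query runs in BigQuery on GCP, and each resulting action is posted to an activation channel through a Mulesoft-managed API. The dataset, table, field names, endpoint URL, and token handling below are illustrative assumptions, not our production implementation.

# Minimal sketch of the GCP + Mulesoft pattern described above.
# The BigQuery table and the Mulesoft-managed endpoint are hypothetical placeholders.
from google.cloud import bigquery
import requests

ACTIVATION_API = "https://api.example.com/partner-actions"  # hypothetical Mulesoft-managed API
API_TOKEN = "..."  # supplied by your API gateway / secrets manager

def publish_recommended_actions():
    client = bigquery.Client()
    # Centralized processing logic lives in GCP; only simple routing remains downstream.
    query = """
        SELECT customer_id, recommended_action, target_channel
        FROM `my_project.adoption.recommended_actions`
        WHERE action_date = CURRENT_DATE()
    """
    for row in client.query(query).result():
        payload = {
            "customerId": row["customer_id"],
            "action": row["recommended_action"],
            "channel": row["target_channel"],
        }
        resp = requests.post(
            ACTIVATION_API,
            json=payload,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
            timeout=30,
        )
        resp.raise_for_status()

if __name__ == "__main__":
    publish_recommended_actions()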

Orchestration Architecture

Use Case #2: Pause and resume a digital email journey based on triggers from other channels.


We focused on attempting to pause an email journey in a Marketing Automation Platform (Salesforce Marketing Cloud or Eloqua) if a customer had a mid-to-high severity Technical Assistance Center (TAC) Case open for that product.

Again, we set out to do this using the orchestration platform. In this scenario, we needed to pause multiple digital journeys from a single set of processing logic in the platform.

The Challenge

We did determine that we could send the pause/resume trigger from the orchestration platform, but it required setting up a one-to-one match of journey canvases in the orchestration platform to journeys that we might want to pause in the marketing automation platform. The use of the orchestration platform actually introduced more complexity to the workflow than managing it ourselves.

Our Next Step Forward

Again, we looked at the known challenge and the tools in our toolbox. We determined that if we set up the processing logic in GCP, we could evaluate all journeys from a single query and send the pause trigger to all relevant canvases in the marketing automation platform – a much more scalable structure to support.
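As a rough illustration, the sketch below fires a “pause” API event into Journey Builder for each affected journey, assuming the standard Salesforce Marketing Cloud REST authentication and Journey Builder event endpoints. The subdomain, credentials, event definition keys, and the list of open TAC cases are hypothetical placeholders.

# Minimal sketch: fire a "pause" API event into Journey Builder for every journey
# affected by an open mid/high-severity TAC case.
# Assumes standard SFMC REST auth and /interaction/v1/events endpoints; the
# subdomain, client credentials, event keys, and query results are placeholders.
import requests

SFMC_SUBDOMAIN = "mc_example"
CLIENT_ID, CLIENT_SECRET = "...", "..."

def get_sfmc_token():
    resp = requests.post(
        f"https://{SFMC_SUBDOMAIN}.auth.marketingcloudapis.com/v2/token",
        json={
            "grant_type": "client_credentials",
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def send_pause_event(contact_key, pause_event_key, token):
    # The EventDefinitionKey maps to the "Wait Until API Event" configured
    # in the journey canvas (see the Wait Until screenshots below).
    resp = requests.post(
        f"https://{SFMC_SUBDOMAIN}.rest.marketingcloudapis.com/interaction/v1/events",
        json={"ContactKey": contact_key, "EventDefinitionKey": pause_event_key},
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()

# Example: rows produced by the single evaluation query in GCP
open_tac_cases = [
    {"contact_key": "C-1001", "pause_event_key": "APIEvent-renewal-pause"},
]
token = get_sfmc_token()
for row in open_tac_cases:
    send_pause_event(row["contact_key"], row["pause_event_key"], token)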

Sample of Wait Until Event used in Journey Builder

Wait Until API Configuration

Another strike against the platform, but another victory in forcing a new way of thinking about a problem and finding a solution we could support with our existing tech stack. We also expect the methodology we established to be leveraged for other types of decisioning such as journey prioritization, journey acceleration, or pausing a journey when an adoption barrier is identified and a recommended action intervention is initiated.

Use Case #3: Connect analytics across the multichannel customer journey.


We execute journeys across multiple channels. For instance, we may send a renewal notification email series, show a personalized renewal banner on Cisco.com for users of that company with an upcoming renewal, and enable a self-service renewal process on renew.cisco.com. We collect and analyze metrics for each channel, but it is difficult to show how a customer or account interacted with each digital entity across their entire experience.

Orchestration platforms offer analytics views that display Sankey diagrams, so journey strategists can visually review how customers engage across channels and evaluate drop-off points or particularly critical engagements for optimization opportunities.

Sample of a Sankey Diagram

The Challenge

◉ As we set out to do this, we learned that the largest blocker to unifying this data is not one an orchestration platform innately solves simply by executing campaigns through its platform. The largest blocker is that each channel uses a different identifier for the customer: email journeys use email address, web personalization uses cookies associated at an account level, and the e-commerce experience uses the user’s login ID. The root of this issue is the lack of a unique identifier that can be threaded across channels.
◉ Additionally, we discovered that our analytics and metrics team had existing gaps in attribution reporting for sites behind SSO login, such as renew.cisco.com.
◉ Finally, since many teams at Cisco are driving web traffic to Cisco.com, we saw large inconsistencies in how different teams were tagging (and not tagging) their respective web campaigns. To achieve a true end-to-end view of the customer journey, we would need to adopt a common language for tagging and tracking our campaigns across business units at Cisco.

Our Next Step Forward

Our team began the process to adopt the same tagging and tracking hierarchy and system that our marketing organization uses for their campaigns. This will allow our teams to bridge the gap between a customer’s pre-purchase and post-purchase journeys at Cisco—enabling a more cohesive customer experience.

Next, we needed to tackle the data threading. Here we identified which mapping tables existed (and where) so we could map different campaign data to a single data hierarchy. For this renewals example, we needed to reconcile three different data hierarchies (a minimal mapping sketch follows the list):

1. Party ID associated with a unique physical location for a customer who has purchased from Cisco
2. Web cookie ID
3. Cisco login ID
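Here is a minimal pandas sketch of that mapping exercise: hypothetical mapping tables are joined so that email, web, and e-commerce touchpoints all resolve to a single party ID. The file names and column names are illustrative assumptions, not our actual schema.

# Minimal identity-threading sketch: join hypothetical mapping tables so that
# email, web, and e-commerce touchpoints all resolve to a single party_id.
# File names and column names are illustrative assumptions only.
import pandas as pd

email_touches = pd.read_csv("email_touches.csv")      # email_address, campaign_id, ts
web_touches = pd.read_csv("web_touches.csv")          # cookie_id, page, ts
commerce_events = pd.read_csv("commerce_events.csv")  # login_id, order_id, ts

email_to_party = pd.read_csv("email_to_party.csv")    # email_address -> party_id
cookie_to_login = pd.read_csv("cookie_to_login.csv")  # cookie_id -> login_id (captured post-SSO)
login_to_party = pd.read_csv("login_to_party.csv")    # login_id -> party_id

journey = pd.concat(
    [
        email_touches.merge(email_to_party, on="email_address").assign(channel="email"),
        web_touches.merge(cookie_to_login, on="cookie_id")
                   .merge(login_to_party, on="login_id")
                   .assign(channel="web"),
        commerce_events.merge(login_to_party, on="login_id").assign(channel="ecommerce"),
    ],
    ignore_index=True,
)

# One customer's cross-channel journey, ordered in time, ready for Sankey-style analysis.
print(journey.sort_values("ts")[["party_id", "channel", "ts"]].head())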

Data mapping exercise for Customer Journey Analytics

With the introduction of consistent tracking IDs across Cisco business units in our Cisco.com web data, we will map a Cisco login ID back to a web cookie ID to fill in some of the web attribution gaps we see on sites like renew.cisco.com after a user logs in with SSO.

Once we had established that level of data threading, we could develop our own Sankey diagrams using our existing Tableau platform for Customer Journey Analytics. Additionally, leveraging our existing tech stack helps limit the number of reporting platforms used to ensure better metrics consistency and easier maintenance.

Use Case #4: Easily integrate data science to branch and personalize the customer journey.


We wanted to explore how we could take the output of a data science model and pivot a journey to provide a more personalized, guided experience for the customer. For instance, let’s look at our customers’ renewal journey. Today, they receive a four-touchpoint journey reminding them to renew. Customers can also open a chat or have a representative call or email them for additional support. Ultimately, the journey is the same for every customer regardless of their likelihood to renew. We do, however, have a churn risk model that could be leveraged to modify the experience based on a high, medium, or low risk of churn.

So, if a customer with an upcoming renewal had a high risk of churn, we could trigger a prescriptive action to escalate to a human for engagement, and we could also personalize the email with a more urgent message for that user. A customer with a low risk of churn, on the other hand, could have an upsell opportunity woven into their notification, or we could route them into advocacy campaigns.

The goals of this use case were primarily:

1. Leverage the output of a data science model to personalize the customer’s experience
2. Pivot experiences from digital to human escalation based on data triggers.
3. Provide context to help customer agents understand the opportunity and better engage the customer to drive the renewal.

The Challenge

This was actually a rather natural fit for an orchestration platform. The challenge we encountered here was data refresh timing. We needed to refresh the renewals data to be processed by the churn risk model and align that with the timing of the triggered email journeys. Our renewals data was refreshed at the beginning of every month, but we held our sends until the end of the month to give our partners time to review and modify their customers’ data prior to sending. Our orchestration platform would only process new, incremental data and overwrite records based on a pre-identified primary key (this allowed for better system processing by not overwriting all data with every refresh).

To get around this issue, our vendor would create a brand-new view of the table prior to each triggered send so that all data was newly processed (not just new or updated records). Not only did this create a vendor dependency for our journeys, but it also introduced potential quality assurance issues by requiring a pre-launch update of the data table sources for our production journeys.

Our Next Step Forward

One question we kept asking ourselves as we struggled to make this use case work with the orchestration platform: were we overcomplicating things? The two orchestration platform outputs of our attrition model use case were to:

1. Customize the journey content for a user depending on their risk of attrition.
2. Create a human touchpoint in our digital renewal journey for those with a high attrition risk.

For number one, we could achieve that with dynamic content modules in Salesforce Marketing Cloud by simply adding a “risk of attrition” field to our renewals data extension and creating dynamic content modules for the low, medium, and high risk-of-attrition values. Done!

For number two, doesn’t that sound familiar? It should! It’s the same problem we wanted to solve in our first use case for prescriptive calls to action. Because we had already created a new architecture for scaling our recommended actions across multiple channels and audiences, we could add a branch for an “attrition risk” alert to be sent to our Cisco Renewals Managers and partners based on our data science model. A feedback loop could even be added to collect data on why a customer may choose not to renew after this human connection is made.
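A minimal sketch of that branching logic might look like the following, assuming a hypothetical “risk_of_attrition” field on each renewals record; the action names are placeholders for whatever your activation channels actually expose.

# Minimal sketch of churn-risk branching for the renewal journey.
# Assumes a hypothetical "risk_of_attrition" field on each renewals record;
# the action names are placeholders for real activation-channel calls.
def route_renewal(record: dict) -> list[str]:
    risk = record.get("risk_of_attrition", "low")
    actions = ["send_renewal_reminder"]  # baseline four-touchpoint journey
    if risk == "high":
        # Escalate to a human and make the email content more urgent.
        actions += ["create_attrition_alert_for_renewals_manager",
                    "use_urgent_email_content"]
    elif risk == "medium":
        actions += ["use_standard_email_content"]
    else:
        # Low risk: weave in upsell/advocacy instead of escalation.
        actions += ["use_upsell_email_content", "enroll_in_advocacy_campaign"]
    return actions

print(route_renewal({"customer_id": "C-1001", "risk_of_attrition": "high"}))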

Finding Success


At the end of our one-year pilot, we had been forced to think about the tactics to achieve our goals very differently. Yes, we had deemed the pilot a failure – but how do we fail forward? As we encountered each challenge, we took a step back and evaluated what we learned and how we could use that to achieve our goals.

Ultimately, we figured out new ways to leverage our existing systems to not only achieve our core goals but also gain end-to-end visibility of our code, so we can set up the processing, refreshes, and connections exactly how our business requires.

Now – we’re applying each of these learnings. We are rolling out our core use cases as capabilities in our existing architecture, building an orchestration inventory that can be leveraged across the company – a giant step towards success for us and for our customers’ experience. The outcome was not what we expected, but each step of the process helped propel us toward the right solutions.

Source: cisco.com

Tuesday 28 March 2023

Cisco Modeling Labs 2.5: Now with Resource Limiting


Whether you’re using a large virtual machine or a beefy hardware server, running labs with many nodes or with resource-hungry nodes in Cisco Modeling Labs (CML) can require a lot of memory and CPUs. This can become especially problematic in a multi-user system—until now.

Cisco Modeling Labs offers a new feature called resource limiting, available now in CML 2.5 for Enterprise and Higher Education. Read on to learn more about resource limiting, how to set up resource limits in CML 2.5, and what you need to know as you configure the new feature on your CML server or cluster.

What is resource limiting in CML 2.5?


Resource limiting is one of the new features of the CML 2.5 release. The basic idea is to limit the resources an individual user or group of users can consume, using an administrative policy configured on the CML server or cluster. Since this feature only makes sense on a multi-user system, resource limiting is only available in CML Enterprise and CML for Higher Education; there is, after all, no reason for a single user to restrict themselves.

Resources on a CML deployment, defined

Prior to the introduction of resource limiting, a single user could grab all the resources on a CML deployment, leaving other users unable to launch their labs and nodes.

For context, resources in a CML deployment refer to: 

◉ Memory 
◉ CPU cores 
◉ Node licenses 
◉ External connectors 

The first three items on this list are resources with genuinely limited availability. External connectors, by contrast, are almost free in terms of memory and CPU cost, but it can still make sense to restrict their usage for different users or groups from a policy point of view.

How to configure CML resource limits


By default, no resource limits are in place. An administrator can put resource limits in place by creating resource pools, which are then assigned to a user or group of users.

Create and assign resource pools


You can manage resource pools by navigating to Tools → System Administration → Resource Pools.


From there, you can create and assign pools. The system differentiates between a template and an actual pool; a pool is always based on a template and has one or more users connected to it.

When assigning a template to a group of users, all users in the group fall into one of two categories:

◉ They are each assigned an individual pool cloned from the chosen template.
◉ They all share the same pool cloned from the chosen template.

The shared pool switch controls this assignment, as the following screenshot shows: 

[Screenshot: the Shared pool switch in the resource pool assignment workflow]

When adding CML users to the resource pool (via the Next step button in the Add workflow), the administrator can choose which users (or groups of users) are assigned to the pool, as shown in the following screenshot: 

[Screenshot: choosing users or groups for a resource pool]

Create and define your template(s)


Resource pools are always based on a template, which means that, at a minimum, you must configure one underlying template (a base template) first. Templates allow a new resource pool to be associated with a new user automatically, whether that user is created manually by an administrator or created on a first lightweight directory access protocol (LDAP) login.

Templates also allow you to quickly change a setting for all the pools that inherit from a template. In addition, you can override values for individual pools; the values set on an individual pool take precedence over the values defined in its template.

When a pool has multiple users assigned, all of them share the resources configured in that pool.
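The precedence rule is easy to picture in code: a pool starts as a clone of its template, and any value set on the pool itself overrides the template value. The sketch below is a conceptual illustration only, not the CML implementation.

# Conceptual illustration of template -> pool precedence (not CML's actual code).
base_template = {"cpus": 16, "memory_gb": 64, "node_licenses": 20, "external_connectors": "all"}

def effective_pool(template: dict, overrides: dict) -> dict:
    pool = dict(template)      # the pool starts as a clone of the template
    pool.update(overrides)     # per-pool values take precedence
    return pool

# A pool cloned from the template, with only the node-license limit overridden.
print(effective_pool(base_template, {"node_licenses": 50}))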

Limit access to external connectors 


External connectors provide outside connectivity. In shared environments with additional network interface cards (NICs) that connect to different outside networks, you might want to control which user or group has access to which outside networks. You can achieve this, too, by leveraging resource limiting.

A resource pool can define which external network configurations are allowed or denied. As shown in the following screenshot, the administrator can give users of a resource pool one of two options:

◉ They can use no external connector at all (see Block all).
◉ They can select which specific external connector configurations to use.

[Screenshot: external connector options within a resource pool]

In the absence of a specific external connector limit, users with this policy can select all existing external connectors. 

How to check resource usage 


Both the administrator and individual users can check the resource limit status. For administrators, the overall system state is shown (for example, all existing resource pools, including their current usage). Resource limit usage is available via the Tools → Resource limits menu entry, as the following graphic shows:

[Screenshot: administrator view of the Resource Limits page]

Here, the administrator sees that there are two pools and that node licenses are in use in the pool named Max50. The CPU and memory usage of that pool also appears; however, since that usage is not limited, the bars appear in gray. The external connector and user columns show the external connectors the pool is using and the users assigned to the pool, respectively.

As for the users, their view appears in the following graphic (also via Tools → Resource limits): 

[Screenshot: user view of the Resource Limits page]

In this example, node licenses are limited: 6 of 50 (12%) are in use, along with 13 CPU cores and 6.5 GB of memory. Neither CPU nor RAM is limited, as indicated by the infinity symbol in the gauge.

NOTE: Resource limiting does not check for over-subscription. In other words, if the CML system has, for example, 32 CPUs and the administrator puts a 64-CPU limit into a pool, the system will not prevent this. It is up to the administrator to put reasonable limits in place!
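Because CML does not enforce this, a quick sanity check outside the product can help. The sketch below simply sums hypothetical pool limits and compares them against the host capacity; the numbers are made up for illustration.

# Sanity-check sketch: warn when the sum of pool limits exceeds host capacity.
# CML itself does not enforce this; the pool values here are hypothetical.
host = {"cpus": 32, "memory_gb": 128}
pools = {
    "Max50": {"cpus": 24, "memory_gb": 96},
    "Students": {"cpus": 16, "memory_gb": 64},
}

for resource, capacity in host.items():
    allocated = sum(pool.get(resource, 0) for pool in pools.values())
    if allocated > capacity:
        print(f"WARNING: {resource} over-subscribed: {allocated} allocated vs {capacity} available")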

And even when no limit is put in place, the underlying resources are obviously not infinite.

Highlights and benefits of CML 2.5 resource limits


The new resource limiting feature of Cisco Modeling Labs 2.5 provides a granular way to ensure fair consumption of resources on a shared system. In addition, it allows better policy control and is also a useful way to quickly determine resource usage by users or groups of users via the Resource Limits status page.

Source: cisco.com

Monday 27 March 2023

Everything You Need to Know About Cisco 300-215: The Exam and Certification Guide

Are you planning to take the Cisco 300-215 exam to enhance your career prospects as a security engineer? If so, this article is for you. It covers all the essential information you need to know about the Cisco 300-215 exam, including the exam syllabus, preparation tips, and the certification process.


Know About Cisco 300-215 CBRFIR Exam

The Cisco 300-215 CBRFIR exam, also known as Conducting Forensic Analysis and Incident Response Using Cisco Technologies for CyberOps, is designed to test the knowledge and skills of security engineers in conducting forensic analysis and incident response using Cisco technologies. The exam lasts 90 minutes and consists of 55-65 questions.


300-215 CBRFIR Exam Syllabus

The Cisco 300-215 CBRFIR exam syllabus is divided into five domains, each covering different topics related to forensic analysis and incident response. The domains are:

  1. Fundamentals - 20%

  2. Forensics Techniques - 20%

  3. Incident Response Techniques - 30%

  4. Forensics Processes - 15%

  5. Incident Response Processes - 15%

Target Audience

The target audience for the Cisco 300-215 certification exam is security engineers who want to specialize in conducting forensic analysis and incident response using Cisco technologies. This certification is ideal for professionals protecting and securing organizational assets, including networks, endpoints, and data.


The certification is also suitable for professionals who want to enhance their knowledge and skills in forensic analysis and incident response, regardless of their current job title or industry. It can benefit professionals in various fields, including:

  • Cybersecurity: Cybersecurity professionals who want to specialize in conducting forensic analysis and incident response using Cisco technologies can benefit from earning the Cisco 300-215 certification. It demonstrates their expertise and enhances their credibility in the field.
  • IT Operations: IT operations professionals responsible for managing and securing IT infrastructure can benefit from earning the Cisco 300-215 certification. It gives them the necessary knowledge and skills to effectively detect and respond to security incidents.
  • Law Enforcement: Law enforcement professionals who are involved in investigating cybercrime can benefit from earning the Cisco 300-215 certification. It gives them the necessary knowledge and skills to conduct forensic analysis and incident response using Cisco technologies.
  • Compliance: Compliance professionals responsible for ensuring that their organizations comply with regulatory requirements can benefit from earning the Cisco 300-215 certification. It provides them with the necessary knowledge and skills to conduct forensic analysis and incident response that meet regulatory requirements.

300-215 Certification Process

Passing the Cisco 300-215 exam completes the concentration requirement for the Cisco Certified CyberOps Professional certification; together with the 350-201 CBRCOR core exam, it earns you that certification. The certification validates your knowledge and skills in conducting forensic analysis and incident response using Cisco technologies, and you can use it to enhance your career prospects in cybersecurity and related fields.

Top 5 Cisco 300-215 CBRFIR Preparation Tips

Preparing for the Cisco 300-215 CBRFIR exam requires a comprehensive study plan and a structured approach. Here are some tips to help you prepare for the exam:


1. Study the Exam Syllabus:

The exam syllabus is your roadmap to success. Make sure to study each domain thoroughly and understand the topics covered.


2. Use Study Materials:

Cisco provides official study materials, including books, videos, and practice tests, to help you prepare for the exam. You can also use third-party study materials from reputable sources.


3. Practice, Practice, Practice:

Practice is essential to passing the exam. Use practice tests to assess your knowledge and identify areas of improvement.


4. Join Study Groups:

Study groups can help you learn from other candidates and share your knowledge and experiences.


5. Latest 300-215 Questions:

Actual 300-215 exam questions are confidential and not publicly available. Candidates should instead study the exam objectives and topics thoroughly and keep up with the latest trends and technologies in forensic analysis and incident response to prepare effectively for the exam.

Cisco 300-215 CBRFIR Benefits

Cisco 300-215 certification is a valuable asset for security engineers who want to specialize in conducting forensic analysis and incident response using Cisco technologies. Here are some of the benefits of earning this certification:

  • Career Advancement: Cisco 300-215 certification is recognized by industry leaders and can help you advance your career in cybersecurity and related fields. It demonstrates your knowledge and skills in conducting forensic analysis and incident response using Cisco technologies, making you a valuable asset to any organization.
  • Competitive Edge: The cybersecurity industry is highly competitive, and earning Cisco 300-215 certification can give you a competitive edge over other candidates. It shows you have the necessary knowledge and skills to perform the job at a high level.
  • Enhanced Skills and Knowledge: Preparing for the Cisco 300-215 exam requires a comprehensive study plan and a structured approach. Studying for the exam will enhance your skills and knowledge in conducting forensic analysis and incident response using Cisco technologies.
  • Increased Earning Potential: According to PayScale, the average salary for a security engineer with Cisco Certified CyberOps Professional certification is around $106k annually. Earning this certification can increase your earning potential and lead to higher-paying job opportunities.
  • Professional Development: Cisco 300-215 certification is valuable to your professional portfolio and can help you stand out in the job market. It demonstrates your commitment to professional development and lifelong learning.

Cisco 300-215 Scope


The Cisco 300-215 CBRFIR exam covers various topics related to conducting forensic analysis and incident response using Cisco technologies. Here is an overview of the areas of expertise within the scope of the Cisco 300-215 exam:

  • Fundamentals of Forensic Analysis and Incident Response: This domain covers the basic concepts and principles of forensic analysis and incident response. It includes forensic investigation, evidence collection, and legal considerations.
  • Network Forensics and Traffic Analysis: This domain covers network-based forensic analysis and incident response. It includes network traffic analysis, protocol analysis, and intrusion detection and prevention.
  • Endpoint Forensics and Analysis: This domain covers endpoint-based forensic analysis and incident response. It includes malware analysis, memory forensics, and disk forensics.
  • Incident Response: This domain covers incident response procedures and methodologies. It includes incident detection and analysis, classification and prioritization, and incident response planning.
  • Incident Handling: This domain covers the practical aspects of incident handling. It includes containment, eradication, recovery, and communication and coordination with stakeholders.
  • Incident Response Teams: This domain covers the organization and management of incident response teams. It includes team roles and responsibilities, incident response plan development and maintenance, and incident response team training and exercises.

The Cisco 300-215 exam therefore covers a broad range of topics in forensic analysis and incident response using Cisco technologies, spanning both the theoretical and practical aspects of the field and making it a comprehensive certification for security engineers.


Conclusion

The Cisco 300-215 CBRFIR exam is an essential certification exam for security engineers who want to specialize in conducting forensic analysis and incident response using Cisco technologies. By following the tips and guidelines in this article, you can prepare for the exam and pass it with flying colors.

Good luck!

Saturday 25 March 2023

Designing and Deploying Cisco AI Spoofing Detection – Part 2

AI Spoofing Detection Architecture and Deployment

Our previous blog post, Designing and Deploying Cisco AI Spoofing Detection, Part 1: From Device to Behavioral Model, introduced a hybrid cloud/on-premises service that detects spoofing attacks using behavioral traffic models of endpoints. In that post, we discussed the motivation and need for this service and the scope of its operation. We then provided an overview of our Machine Learning development and maintenance process. This post details the global architecture of Cisco AISD, its mode of operation, and how IT teams can incorporate the results into their security workflow.

Since Cisco AISD is a security product, minimizing detection delay is of significant importance. With that in mind, several infrastructure choices were designed into the service. Most Cisco AI Analytics services use Spark as a processing engine. In Cisco AISD, however, we use an AWS Lambda function instead of Spark because the warmup time of a Lambda function is typically shorter, enabling quicker generation of results and, therefore, a shorter detection delay. While this design choice reduces the computational capacity of the process, that has not been a problem thanks to a custom-made caching strategy that limits processing to only the new data on each Lambda execution.

Global AI Spoofing Detection Architecture Overview

Cisco AISD is deployed on a Cisco DNA Center network controller using a hybrid architecture of an on-premises controller tethered to a cloud service. The service consists of on-premises processes as well as cloud-based components.

The on-premises components on the Cisco DNA Center controller perform several vital functions. On the outbound data path, the service continually receives and processes raw data captured from network devices, anonymizes customer PII, and exports it to cloud processes over a secure channel. On the inbound data path, it receives any new endpoint spoofing alerts generated by the Machine Learning algorithms in the cloud, deanonymizes any relevant customer PII, and triggers any Changes of Authorization (CoA) via Cisco Identity Services Engine (ISE) on affected endpoints.

The cloud components perform several key functions, focused primarily on processing the high-volume data flowing from all on-premises deployments and running Machine Learning inference. In particular, the evaluation and detection mechanism has three steps:

1. Apache Airflow is the underlying orchestrator and scheduler to initiate compute functions. An Airflow DAG frequently enqueues computation requests for each active customer to a queuing service.

2. As each computation request is dequeued, a corresponding serverless compute function is invoked. Using serverless functions enables us to control compute costs at scale. This is a highly efficient, multi-step, compute-intensive, short-running function that performs an ETL step: it reads raw anonymized customer data from data buckets and transforms it into a set of input feature vectors to be used for inference by our Machine Learning models for spoof detection. This compute function leverages the common Function-as-a-Service architecture offered by cloud providers.

3. The function then performs the model inference step on the feature vectors produced in the previous step, ultimately leading to the detection of spoofing attempts if any are present. If a spoof attempt is detected, the details of the finding are pushed to a database that is queried by the on-premises components of Cisco DNA Center and finally presented to administrators for action. (A simplified sketch of steps 2 and 3 follows this list.)
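The sketch below is a heavily simplified, Lambda-style illustration of steps 2 and 3: it reads anonymized records from a data bucket, builds feature vectors, runs a spoofing model, and stores any detections. The bucket names, record fields, feature extraction, and model artifact are hypothetical placeholders, not Cisco’s production code.

# Heavily simplified sketch of the serverless ETL + inference step (not production code).
# Bucket names, key layout, feature extraction, and the model artifact are hypothetical.
import json
import boto3
import joblib

s3 = boto3.client("s3")
MODEL = joblib.load("/opt/models/spoofing_model.joblib")  # hypothetical artifact packaged with the function

def extract_features(records):
    # Placeholder: turn anonymized NetFlow-style records into per-endpoint feature vectors.
    return [[r["flows"], r["unique_ports"], r["bytes_out"]] for r in records]

def handler(event, context):
    detections = []
    for msg in event["Records"]:                      # one dequeued computation request (SQS-style event)
        request = json.loads(msg["body"])
        obj = s3.get_object(Bucket=request["bucket"], Key=request["key"])
        records = json.loads(obj["Body"].read())

        features = extract_features(records)
        scores = MODEL.predict_proba(features)[:, 1]  # probability that the endpoint is spoofed
        for record, score in zip(records, scores):
            if score > 0.9:                           # illustrative threshold
                detections.append({"endpoint": record["endpoint_id"], "score": float(score)})

    if detections:
        # Placeholder sink; in reality the detections are stored where the
        # on-premises Cisco DNA Center components can query them.
        s3.put_object(Bucket="aisd-detections", Key="latest.json",
                      Body=json.dumps(detections).encode())
    return {"detections": len(detections)}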

Figure 1: Schematic view of Cisco AISD cloud and on-premises components.

Figure 1 captures a high-level view of the Cisco AISD components. Two components, in particular, are central to the cloud inferencing functionality: the Scheduler and the serverless functions.

The Scheduler is an Airflow Directed Acyclic Graph (DAG) responsible for triggering the serverless function executions on active Cisco AISD customer data. The DAG runs at high-frequency intervals, pushing events into a queue and triggering the inference function executions. The DAG executions prepare all the metadata for the compute function, including determining which customers have active flows, grouping compute batches based on telemetry volume, and optimizing the compute process. The inferencing function performs the ETL operations, model inference, detection, and storage of spoofing alerts, if any. This compute-intensive process implements much of the intelligence for spoof detection. Because our ML models are retrained regularly, this architecture enables the quick rollout—or rollback if needed—of updated models without any change or impact on the service.
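For the scheduling side, a minimal Airflow sketch along these lines would enqueue one computation request per active customer at a fixed interval. The queue URL, the customer lookup, and the interval are assumptions made for illustration.

# Minimal Airflow sketch of the scheduler described above (illustrative only).
# The queue URL, customer lookup, and schedule interval are assumptions.
from datetime import datetime, timedelta
import json

import boto3
from airflow import DAG
from airflow.operators.python import PythonOperator

QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/aisd-compute-requests"  # placeholder

def enqueue_compute_requests():
    sqs = boto3.client("sqs")
    active_customers = ["customer-a", "customer-b"]   # placeholder for the real active-customer lookup
    for customer in active_customers:
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"customer": customer, "requested_at": datetime.utcnow().isoformat()}),
        )

with DAG(
    dag_id="aisd_scheduler",
    start_date=datetime(2023, 1, 1),
    schedule_interval=timedelta(minutes=15),          # illustrative interval
    catchup=False,
) as dag:
    PythonOperator(task_id="enqueue_compute_requests", python_callable=enqueue_compute_requests)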

The inference function executions have a stable average runtime of approximately 9 seconds, as shown in Figure 2, which, as stipulated in the design, does not introduce any significant delay in detecting spoofing attempts.

Figure 2: Average lambda execution time in milliseconds for all Cisco AISD active customers between Jan 23rd and Jan 30th

Cisco AI Spoofing Detection in Action


In this blog post series, we described the internal design principles and processes of the Cisco AI Spoofing Detection service. However, from a network operator’s point of view, all these internals are entirely transparent. To start using the hybrid on-premises/cloud-based spoofing detection system, Cisco DNA Center Admins need to enable the corresponding service and cloud data export in Cisco DNA Center System Settings for AI Analytics, as shown in Figure 3.

Figure 3: Enabling Cisco AI Spoofing Detection is very simple in Cisco DNA Center.

Once enabled, the on-premises component in Cisco DNA Center starts exporting relevant data to the cloud that hosts the spoof detection service. The cloud components automatically start scheduling the model inference runs, evaluating the ML spoofing detection models against incoming traffic, and raising alerts when spoofing attempts on a customer endpoint are detected. When the system detects spoofing, the Cisco DNA Center in the customer’s network receives an alert with the relevant information; an example of such a detection is shown in Figure 4. In the Cisco DNA Center console, the network operator can set options to execute pre-defined containment actions for endpoints marked as spoofed: shut down the port, flap the port, or re-authenticate the port.

Figure 4: Example of alert from an endpoint that was initially classified as a printer.

Protecting the Network from Spoofing Attacks with Cisco DNA Center


Cisco AI Spoofing Detection is one of the newest security benefits provided to Cisco DNA Center operators with a Cisco DNA Advantage license. To simplify managing complex networks, AI and ML capabilities are being woven throughout the Cisco network management ecosystem of controllers and network fabrics. Along with the new Cisco AISD, Cisco AI Network Analytics, Machine Reasoning Engine Workflows, Networking Chatbots, Group-Based Policy Analytics, and Trust Analytics are additional features that work together to simplify management and protect network endpoints.

Source: cisco.com

Tuesday 21 March 2023

Designing and Deploying Cisco AI Spoofing Detection – Part 1

The network faces new security threats every day. Adversaries are constantly evolving and using increasingly novel mechanisms to breach corporate networks and hold intellectual property hostage. Breaches and security incidents that make the headlines are usually preceded by considerable reconnaissance by the perpetrators. During this phase, one or several compromised endpoints in the network are typically used to observe traffic patterns, discover services, determine connectivity, and gather information for further exploits.

Compromised endpoints are legitimately part of the network but are typically devices that do not have a healthy cycle of security patches, such as IoT controllers, printers, or custom-built hardware running custom firmware or an off-the-shelf operating system that has been stripped down to run on minimal hardware resources. From a security perspective, the challenge is to detect when a compromise of these devices has taken place, even if no malicious activity is in progress.

In the first part of this two-part blog series, we discuss some of the methods by which compromised endpoints can gain access to restricted segments of the network and how Cisco AI Spoofing Detection is designed to detect such endpoints by modeling and monitoring their behavior.

Part 1: From Device to Behavioral Model

One of the ways modern network access control systems allow endpoints onto the network is by analyzing identity signatures generated by the endpoints. Unfortunately, a well-crafted identity signature generated from a compromised endpoint can effectively spoof the endpoint to elevate its privileges, allowing it access to previously unauthorized segments of the network and sensitive resources. This behavior can easily evade detection because it is within the normal operating parameters of Network Access Control (NAC) systems and endpoint behavior. Generally, these identity signatures are captured through declarative probes that contain endpoint-specific parameters (e.g., OUI, CDP, HTTP User-Agent). A combination of these probes is then used to associate an identity with an endpoint.

Any probe that can be controlled (i.e., declared) by an endpoint is subject to being spoofed. Since, in some environments, the endpoint type is used to assign access rights and privileges, this type of spoofing attempt can lead to critical security risks. For example, if a compromised endpoint can be made to look like a printer by crafting the probes it generates, it can gain access to the printer network/VLAN and its print servers, which in turn could open the network to lateral movement by the endpoint.

There are three common ways in which an endpoint on the network can get privileged access to restricted segments of the network:

1. MAC spoofing: an attacker impersonates a specific endpoint to obtain the same privileges.

2. Probe spoofing: an attacker forges specific packets to impersonate a given endpoint type.

3. Malware: a legitimate endpoint is infected with a virus, trojan, or other types of malware that allows an attacker to leverage the permissions of the endpoint to access restricted systems.

Cisco AI Spoofing Detection (AISD) focuses primarily on the detection of endpoints employing probe spoofing, most instances of MAC spoofing, and some cases of malware infection. Contrary to traditional rule-based systems for spoofing detection, Cisco AISD relies on behavioral models to detect endpoints that do not behave like the type of device they claim to be. These behavioral models are built and trained on anonymized data from hundreds of thousands of endpoints deployed in multiple customer networks. This Machine Learning-based, data-driven approach enables Cisco AISD to build models that capture the full gamut of behavior of many device types in various environments.

Figure 1: Types of spoofing. AISD focuses primarily on probe spoofing and some instances of MAC spoofing.

Creating Benchmark Datasets


As with any AI-based approach, Cisco AISD relies on large volumes of data for a benchmark dataset to train behavioral models. Of course, as networks add endpoints, the benchmark dataset changes over time. New models are built iteratively using the latest datasets. Cisco AISD datasets for models come from two sources.

◉ The Cisco AI Endpoint Analytics (AIEA) data lake. This data is sourced from Cisco DNA Center with Cisco AI Endpoint Analytics and Cisco Identity Services Engine (ISE) and stored in a cloud database. The AIEA data lake consists of a multitude of endpoint information from each customer network. Any personally identifiable information (PII) or other identifiers, such as IP and MAC addresses, is encrypted at the source before it is sent to the cloud. This is a novel mechanism used by Cisco in a hybrid cloud, tethered-controller architecture, where the encryption keys are stored at each customer’s controller.
◉ The Cisco AISD Attack data lake, which contains Cisco-generated data consisting of probe and MAC spoofing attack scenarios.

To create a benchmark dataset that captures endpoint behaviors under both normal and attack scenarios, data from both data lakes are mixed, combining NetFlow records and endpoint classifications (EPCL). We use the EPCL data lake to categorize the NetFlow records into flows per logical class. A logical class encompasses device types in terms of functionality, e.g., IP phones, printers, or IP cameras. Data for each logical class are split into train, validation, and test sets. We use the train split for model training and the validation split for parameter tuning and model selection. We use the test split to evaluate the trained models and estimate their generalization capabilities on previously unseen data.

Benchmark datasets are versioned, tagged, and logged using Comet, a Machine Learning Operations (MLOps) and experiment tracking platform that Cisco development leverages for several AI/ML solutions. Benchmark datasets are refreshed regularly to ensure that new models are trained and evaluated on the most recent variability in customers’ networks.
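As a rough sketch of what a benchmark refresh like this can look like, the code below groups records by logical class, creates train/validation/test splits, and logs the dataset version to Comet. The column names, split ratios, and Comet project are illustrative assumptions, not our actual pipeline.

# Rough sketch: per-logical-class splits plus Comet logging of the benchmark version.
# Column names, split ratios, and the Comet project are illustrative assumptions.
import pandas as pd
from comet_ml import Experiment
from sklearn.model_selection import train_test_split

benchmark = pd.read_parquet("benchmark_v12.parquet")  # hypothetical file of labeled, anonymized flows

splits = {}
for logical_class, group in benchmark.groupby("logical_class"):
    train, rest = train_test_split(group, test_size=0.3, random_state=42)
    val, test = train_test_split(rest, test_size=0.5, random_state=42)
    splits[logical_class] = {"train": train, "val": val, "test": test}

experiment = Experiment(project_name="aisd-benchmarks")   # API key read from the environment
experiment.add_tag("benchmark-v12")
experiment.log_dataset_hash(benchmark)                    # fingerprint of this dataset refresh
experiment.log_parameters({
    "n_records": len(benchmark),
    "n_logical_classes": benchmark["logical_class"].nunique(),
})
experiment.end()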

Figure 2: Benchmark Dataset and Data Split Creation

Model Development and Monitoring


In the model development phase, we use the latest benchmark dataset to build behavioral models for the logical classes; the trained models are then used at customer sites. All training and evaluation experiments are logged in Comet along with the hyper-parameters and produced models. This ensures experiment reproducibility and model traceability and enables audit and eventual governance of model creation. During the development phase, multiple Machine Learning scientists work on different model architectures, producing a set of results that are collectively compared in order to choose the best model. Then, for each logical class, the best models are versioned and added to a Model Registry. With all the experiments and models gathered in one location, we can easily compare the performance of the different models and monitor the evolution of released models across development phases.

The Model Registry is an integral part of our model deployment process. Inside the Model Registry, models are organized per logical class of devices and versioned, enabling us to keep track of the complete development cycle—from the benchmark dataset used and the hyper-parameters chosen to the trained parameters, the obtained results, and the code used for training. The models are deployed in AWS (Amazon Web Services), where the inferencing takes place. We will discuss this process in our next blog post, so stay tuned.
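A similarly hedged sketch of the training-and-registration loop: one model per logical class is trained on its train split, evaluated on its test split, logged to Comet, and added to the Model Registry. The model type, feature columns, metric, and registry names are placeholders, not the models Cisco actually uses.

# Hedged sketch: train, log, and register one model per logical class.
# Assumes the `splits` dict built in the previous sketch (train/val/test frames
# per logical class); the model type, feature columns, and metric are placeholders.
import joblib
from comet_ml import Experiment
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

FEATURES = ["flows", "unique_ports", "bytes_out"]   # placeholder feature columns
LABEL = "is_spoofed"                                # placeholder label column

def train_and_register(logical_class, train_df, test_df):
    experiment = Experiment(project_name="aisd-models")   # API key read from the environment
    experiment.add_tag(logical_class)

    model = RandomForestClassifier(n_estimators=200, random_state=42)
    model.fit(train_df[FEATURES], train_df[LABEL])

    auc = roc_auc_score(test_df[LABEL], model.predict_proba(test_df[FEATURES])[:, 1])
    experiment.log_metrics({"test_auc": auc})

    path = f"{logical_class}.joblib"
    joblib.dump(model, path)
    experiment.log_model(logical_class, path)   # attach the trained artifact to the experiment
    experiment.register_model(logical_class)    # version it in the Comet Model Registry
    experiment.end()

# Example: iterate over the per-class splits produced by the previous sketch.
# for logical_class, data in splits.items():
#     train_and_register(logical_class, data["train"], data["test"])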

Production models are closely monitored. If the performance of the models starts degrading—for example, they start generating too many false alerts—a new development phase is triggered. That means that we construct a new benchmark dataset with the latest customer data and re-train and test the models. In parallel, we also revisit the investigation of different model architectures.

Figure 3: Cisco AI Spoofing Detection Model Lifecycle

Next Up: Taking Behavioral Models to Production in Cisco AI Spoofing Detection


In this post, we’ve covered the initial design process for using AI to build device behavioral models using endpoint flow and classification data from customer networks. In Part 2, “Taking Behavioral Models to Production in Cisco AI Spoofing Detection,” we will describe the overall architecture and deployment of our models in the cloud for monitoring and detecting spoofing attempts.

Source: cisco.com

Monday 20 March 2023

Top 10 Tips to Pass CCNP Service Provider 350-501 SPCOR Exam

The CCNP Service Provider is one of the most sought-after certifications in the field. It demonstrates foundational knowledge while allowing you to tailor the certification to your preferred technical area. This post discusses the CCNP Service Provider 350-501 SPCOR exam, the certification exam that puts your proficiency and expertise with service provider solutions to the test.

Overview of the Cisco 350-501 SPCOR Exam

To achieve the CCNP Service Provider certification, you must pass two exams: a core exam and a concentration exam of your choice.

• The core exam, Implementing and Operating Cisco Service Provider Network Core Technologies v1.0 (350-501 SPCOR), highlights your knowledge of core architecture, service provider infrastructure, networking, automation, services, quality of service, security, and network assurance. This core exam is also a prerequisite for the CCIE Service Provider certification, so passing it moves you toward both certifications.

• The concentration exam focuses on emerging and industry-specific topics, such as VPN services, advanced routing, and automation.

The Implementing and Operating Cisco Service Provider Network Core Technologies v1.0 (SPCOR 350-501) exam is a 120-minute exam consisting of 90-110 questions. This exam is associated with the CCNP Service Provider, CCIE Service Provider, and Cisco Certified Specialist – Service Provider Core certifications.

The exam covers the following topics:

  • Core architecture
  • Services
  • Networking
  • Automation
  • Quality of service
  • Security
  • Network assurance

Proven Tips to Pass the CCNP Service Provider 350-501 SPCOR Exam

    1. Understand the Exam Objectives

    Understanding its objectives is the first and most crucial step toward passing any certification exam. The CCNP Service Provider 350-501 SPCOR exam tests your knowledge of implementing, operating, troubleshooting, and optimizing core service provider network technologies. Therefore, it is essential to have a comprehensive understanding of the exam objectives, which can be found on the official Cisco website.

    2. Get Familiar with the Exam Format

    The CCNP Service Provider 350-501 SPCOR Exam consists of 90-110 questions you must answer within 120 minutes. The exam format includes multiple-choice, drag-and-drop, simulation, and testlet questions. Familiarizing yourself with the exam format will help you manage your time efficiently during the exam.

    3. Study the Exam Topics Thoroughly

    Once you clearly understand the exam objectives and format, it’s time to start studying. The official Cisco website provides a comprehensive list of exam topics you must study to pass. Cover all the topics thoroughly and practice hands-on exercises to reinforce your knowledge.


    4. Use Official Cisco Study Materials

    The best way to prepare for the CCNP Service Provider 350-501 SPCOR Exam is to use official Cisco study materials. These materials are designed specifically for the exam and provide you with in-depth knowledge of the exam topics. You can also use third-party study materials, but ensure they cover all the exam topics.

    5. Join a Study Group

    Joining a study group is an excellent way to prepare for the CCNP Service Provider 350-501 SPCOR Exam. You can discuss exam topics with your peers, exchange ideas and insights, and get feedback on your progress. You can find study groups online or in your local community.

    6. Practice with Exam Simulators

    Exam simulators are an excellent way to prepare for the CCNP Service Provider 350-501 SPCOR Exam. These simulators simulate the exam environment, including the format and difficulty level of the questions. They also provide instant feedback on your performance, allowing you to identify your strengths and weaknesses.

    7. Take Practice Tests

    Taking practice tests is an essential part of exam preparation. Practice tests not only help you assess your knowledge of the exam topics but also help you get familiar with the exam format. You can find a wide range of practice tests online or in official Cisco study materials.

    8. Manage Your Time Effectively

    Managing your time effectively during the exam is crucial. Read each question carefully, understand what it asks, and allocate your time accordingly. Don’t spend too much time on difficult questions; move on to easier ones and return to the difficult ones later.

    9. Relax and Stay Focused

    It’s normal to feel nervous before and during the exam. However, it’s essential to stay calm and focused. Take deep breaths, clear your mind, and stay focused on the task at hand. Remember, you’ve prepared well, and you have the knowledge and skills required to pass the exam.

    10. Review Your Answers Carefully

    After you have answered all the 350-501 SPCOR exam questions, review your answers. Candidates often get so eager to be done with an exam that they forget to go back and check their work. It may seem redundant, but it is important to double-check your answers to ensure that each question has been answered completely and thoroughly and that you haven't made any simple mistakes.

    Conclusion

    If you study thoroughly for the CCNP Service Provider 350-501 SPCOR exam and follow the tips in this article, you will succeed. Earning the CCNP Service Provider certification is one of the best ways to advance your career as a network engineer, support engineer, or network technician. Prepare with the best resources and master the exam concepts, and you can pass the exam on your first attempt.