
Saturday 21 September 2024

Putting AI Into AIOps: A Future Beyond Dashboards


In today’s fast-paced IT environment, traditional dashboards and reactive alert systems are quickly becoming outdated. The digital landscape requires a more proactive and intelligent approach to IT operations. Enter Artificial Intelligence (AI) in IT Operations (AIOps), a transformative approach that leverages AI to turn data into actionable insights, automate responses, and enable self-healing systems. This shift isn’t just about integrating AI into existing frameworks; it has the potential to fundamentally transform IT operations.

The Evolution of IT Operations: From Reactive to Proactive


The traditional model of IT operations has long been centered around dashboards, manual interventions, and reactive processes. What once sufficed in simpler systems is now inadequate in today’s complex, interconnected environments. Today’s systems produce vast volumes of logs, metrics, events, and alerts, creating overwhelming noise that hides critical issues. It’s like searching for a whisper in a roaring crowd. The main challenge isn’t the lack of data, but the difficulty of extracting timely, actionable insights.

AIOps steps in to address this very challenge, offering a path to shift from reactive incident management to proactive operational intelligence. The introduction of a robust AIOps maturity model allows organizations to progress from basic automation and predictive analytics to advanced AI techniques, such as generative and multimodal AI. This evolution enables IT operations to become insight-driven, continuously improving, and ultimately self-sustaining. What if your car could not only drive itself and learn from every trip, but also alert you only when critical action was needed, cutting through the noise and allowing you to focus solely on the most important decisions?

Leveraging LLMs to Augment Operations


A key advancement in AIOps is the integration of Large Language Models (LLMs) to support IT teams. LLMs process and respond in natural language, enhancing decision-making by offering troubleshooting suggestions, identifying root causes, and proposing next steps while collaborating seamlessly with human operators.

When problems occur in IT operations, teams often lose crucial time manually sifting through logs, metrics, and alerts to diagnose the problem. It’s like searching for a needle in a haystack; we waste valuable time digging through endless data before we can even begin solving the real issue. With LLMs integrated into the AIOps platform, the system can instantly analyze large volumes of unstructured data, such as incident reports and historical logs, and suggest the most probable root causes. LLMs can quickly recommend the right service group for an issue using context and past incident data, speeding up ticket assignment and resulting in quicker user resolution.

LLMs can also offer recommended next steps for remediation based on best practices and past incidents, speeding up resolution and helping less experienced team members make informed decisions, boosting overall team competence. It’s like having a seasoned mentor by your side, guiding you with expert advice for every step. Even beginners can quickly solve problems with confidence, improving the whole team’s performance.
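To make this concrete, here is a minimal sketch of how incident context might be packaged into a prompt so an LLM can suggest probable root causes, an owning service group, and next steps. The `call_llm` helper, the field names, and the sample data are hypothetical placeholders for illustration, not part of any specific AIOps product.

```python
import json

def build_triage_prompt(incident: dict, recent_logs: list[str], past_incidents: list[dict]) -> str:
    """Assemble unstructured incident context into a single prompt for the LLM."""
    return (
        "You are assisting an IT operations team.\n"
        f"Incident summary: {incident['summary']}\n"
        "Recent log excerpts:\n" + "\n".join(recent_logs[-20:]) + "\n"
        f"Similar past incidents:\n{json.dumps(past_incidents[:5], indent=2)}\n"
        "Suggest: (1) the most probable root causes, "
        "(2) the service group that should own the ticket, "
        "(3) recommended next remediation steps."
    )

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for whichever LLM endpoint the platform integrates."""
    raise NotImplementedError("Wire this to your LLM provider of choice.")

if __name__ == "__main__":
    incident = {"summary": "Transaction latency spike on the payments gateway"}
    logs = [
        "2024-09-21T10:02:11 WARN db_pool exhausted",
        "2024-09-21T10:02:13 ERROR timeout on txn 4821",
    ]
    history = [{"id": "INC-1042", "root_cause": "connection pool misconfiguration", "owner": "DB Ops"}]
    print(build_triage_prompt(incident, logs, history))
```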

Use Case: Revolutionizing Incident Management in Global Finance


In the global finance industry, seamless IT operations are essential for ensuring reliable and secure financial transactions. System downtimes or failures can lead to major financial losses, regulatory fines, and damaged customer trust. Traditionally, IT teams used a mix of monitoring tools and manual analysis to address issues, but this often caused delays, missed alerts, and a backlog of unresolved incidents. It’s like managing a train network with outdated signals: everything slows down to avoid mistakes, yet delays still lead to costly problems. Similarly, traditional IT incident management in finance slows responses, risking system failures and eroding trust.

IT Operations Challenge

A major global financial institution is struggling with frequent system outages and transaction delays. Its traditional operations model relies on multiple monitoring tools and dashboards, causing slow response times, a high Mean Time to Repair (MTTR), and an overwhelming number of false alerts that burden the operations team. The institution urgently needs a solution that can detect and diagnose issues more quickly while also predicting and preventing problems before they disrupt financial transactions.

AIOps Implementation

The institution implements an AIOps platform that consolidates data from multiple sources, such as transaction logs, network metrics, events, and configuration management databases (CMDBs). Using machine learning, the platform establishes a baseline for normal system behavior and applies advanced techniques like temporal proximity filtering and collaborative filtering to detect anomalies. These anomalies, which would typically be lost in the overwhelming data noise, are then correlated through association models to accurately identify the root causes of issues, streamlining the detection and diagnosis process.
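As a rough illustration of the baselining step, the sketch below learns a per-metric mean and standard deviation from a historical window and flags samples whose z-score exceeds a threshold. It is a deliberately simplified stand-in for the temporal proximity and collaborative filtering techniques mentioned above, and the metric values are simulated for the example.

```python
import numpy as np

def fit_baseline(history: np.ndarray) -> tuple[float, float]:
    """Learn a simple baseline (mean, standard deviation) for one metric."""
    return float(history.mean()), float(history.std(ddof=1))

def detect_anomalies(samples: np.ndarray, mean: float, std: float, z_threshold: float = 3.0) -> np.ndarray:
    """Return the indices of samples whose z-score exceeds the threshold."""
    z_scores = np.abs(samples - mean) / (std + 1e-9)
    return np.where(z_scores > z_threshold)[0]

if __name__ == "__main__":
    rng = np.random.default_rng(seed=7)
    # One week of per-minute transaction latencies (ms) establishes the baseline.
    history = rng.normal(loc=120.0, scale=10.0, size=7 * 24 * 60)
    # The most recent hour, with one injected latency spike.
    recent = rng.normal(loc=120.0, scale=10.0, size=60)
    recent[45] = 320.0
    mean, std = fit_baseline(history)
    print("Anomalous minutes:", detect_anomalies(recent, mean, std))
```

In a production AIOps pipeline, anomalies flagged this way would then be correlated across metrics and topology before being surfaced, which is what keeps them from drowning in alert noise.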

To enhance incident management, the AIOps platform integrates a Large Language Model (LLM) to strengthen the operations team’s capabilities. When a transaction delay occurs, the LLM quickly analyzes unstructured data from historical logs and recent incident reports to identify likely causes, such as a recent network configuration change or a database performance issue. Based on patterns from similar incidents, it determines which service group should take ownership, streamlining ticket assignment and accelerating issue resolution, ultimately reducing Mean Time to Repair (MTTR).

Results

  • Reduced MTTR and MTTA: The financial institution experiences a significant reduction in Mean Time to Repair (MTTR) and Mean Time to Acknowledge (MTTA), as issues are identified and addressed much faster with AIOps. The LLM-driven insights allow the operations team to bypass initial diagnostic steps, leading directly to effective resolutions.
  • Proactive Issue Prevention: By leveraging predictive analytics, the platform can forecast potential issues, allowing the institution to take preventive measures. For example, if a trend suggests a potential future system bottleneck, the platform can automatically reroute transactions or notify the operations team to perform preemptive maintenance.
  • Enhanced Workforce Efficiency: The integration of LLMs into the AIOps platform enhances the efficiency and decision-making capabilities of the operations team. By providing dynamic suggestions and troubleshooting steps, LLMs empower even the less experienced team members to handle complex incidents with confidence, improving the user experience.
  • Reduced Alert Fatigue: LLMs help filter out false positives and irrelevant alerts, reducing the burden of noise that overwhelms the operations team. By focusing attention on critical issues, the team can work more effectively without being bogged down by unnecessary alerts.
  • Improved Decision-Making: With access to data-driven insights and recommendations, the operations team can make more informed decisions. LLMs analyze vast amounts of data, drawing on historical patterns to offer guidance that would be difficult to obtain manually.
  • Scalability: As the financial institution grows, AIOps and LLMs scale seamlessly, handling increasing data volumes and complexity without sacrificing performance. This ensures that the platform remains effective as operations expand.

Moving Past Incident Management


The use case shows how AIOps, enhanced by LLMs, can revolutionize incident management in finance, but its potential applies across industries. With a strong maturity model, organizations can achieve excellence in monitoring, security, and compliance. Supervised learning optimizes anomaly detection and reduces false positives, while generative AI and LLMs analyze unstructured data, offering deeper insights and advanced automation.

By focusing on high-impact areas such as reducing resolution times and automating tasks, businesses can rapidly gain value from AIOps. The aim is to build a fully autonomous IT environment that self-heals, evolves, and adapts to new challenges in real time, much like a car that not only drives itself but learns from each trip, optimizing performance and solving issues before they arise.

Conclusion

“Putting AI into AIOps” isn’t just a catchy phrase – it’s a call to action for the future of IT operations. In a world where the pace of change is relentless, merely keeping up or treading water isn’t enough; organizations must leap ahead to become proactive. AIOps is the key, transforming vast data into actionable insights and moving beyond traditional dashboards.

This isn’t about minor improvements; it’s a fundamental shift. Imagine a world where issues are predicted and resolved before they cause disruption, where AI helps your team make smarter, faster decisions, and operational excellence becomes standard. The global finance example shows real benefits: reduced risks, lower costs, and a seamless user experience.

Those who embrace AI-driven AIOps will lead the way, redefining success in the digital era. The era of intelligent, AI-powered operations is here. Are you ready to lead the charge?

Source: cisco.com

Thursday 18 April 2024

The Journey: Quantum’s Yellow Brick Road


The world of computing is undergoing a revolution with two powerful forces converging: Quantum Computing (QC) and Generative Artificial Intelligence (GenAI). While GenAI is generating excitement, it’s still finding its footing in real-world applications. Meanwhile, QC is rapidly maturing, offering solutions to complex problems in fields like drug discovery and material science.

This journey, however, isn’t without its challenges. Just like Dorothy and her companions in the Wizard of Oz, we face obstacles along the yellow brick road. This article aims to shed light on these challenges and illuminate a path forward.

From Bits to Qubits: A New Kind of Switch


Traditionally, computers relied on bits, simple switches that are either on (1) or off (0). Quantum computers, on the other hand, utilize qubits. These special switches can be 1, 0, or both at the same time (superposition). This unique property allows them to tackle problems that are impossible or incredibly difficult for traditional computers. Imagine simulating complex molecules for drug discovery or navigating intricate delivery routes – these are just a few examples of what QC excels at.
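As a toy numerical sketch of the idea, a single qubit can be written as a vector of two complex amplitudes, with measurement probabilities given by the squared magnitudes of those amplitudes. This uses plain NumPy rather than any quantum SDK, purely to illustrate superposition.

```python
import numpy as np

# Computational basis states |0> and |1> as amplitude vectors.
ket0 = np.array([1.0, 0.0], dtype=complex)
ket1 = np.array([0.0, 1.0], dtype=complex)

# An equal superposition of |0> and |1> (the state a Hadamard gate creates from |0>).
psi = (ket0 + ket1) / np.sqrt(2)

# Measuring collapses the state: outcome probabilities are squared amplitude magnitudes.
probabilities = np.abs(psi) ** 2
print("P(0) =", probabilities[0], " P(1) =", probabilities[1])  # 0.5 and 0.5

# Simulate repeated measurements of freshly prepared qubits.
rng = np.random.default_rng(seed=1)
outcomes = rng.choice([0, 1], size=1000, p=probabilities)
print("Observed frequencies:", np.bincount(outcomes) / 1000)
```

A classical simulation like this needs memory that grows exponentially with the number of qubits, which is precisely why real quantum hardware is attractive for the problem classes mentioned above.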

The Power and Peril of Quantum Supremacy


With great power comes great responsibility – and potential danger. In 1994, Peter Shor developed a quantum algorithm that could break widely used public-key cryptography like RSA, the security system protecting our data. The method leverages the unique properties of qubits, namely superposition, entanglement, and interference, to crack encryption codes. While the exact timeframe is uncertain (estimates range from 3 to 10 years), some experts believe a powerful enough quantum computer could eventually compromise this system.

This vulnerability highlights the “Steal Now, Decrypt Later” (SNDL) strategy employed by some nation-states. They can potentially intercept and store encrypted data now, decrypting it later with a powerful quantum computer. Experts believe SNDL operates like a Man in the Middle attack, where attackers secretly intercept communication and potentially alter data flowing between two parties.

The Intersection of GenAI and Quantum: A Security Challenge


The security concerns extend to GenAI, as well. GenAI models are trained on massive datasets, often containing sensitive information like code, images, or medical records. Currently, this data is secured with RSA-2048 encryption, which could be vulnerable to future quantum computers.

The Yellow Brick Road to Secure Innovation


Imagine a world where GenAI accelerates drug discovery by rapidly simulating millions of potential molecules and interactions. This could revolutionize healthcare, leading to faster cures for life-threatening illnesses. However, the sensitive nature of this data requires the highest level of security. GenAI is our powerful ally, churning out potential drug candidates at an unprecedented rate, but researchers cannot share this critical data with colleagues or partners for fear of intellectual property theft while it is in transit. Enter a revolutionary system that combines the power of GenAI with Post-Quantum Cryptography (PQC), encryption that is expected to withstand attacks from quantum computers. This “quantum-resistant” approach would allow researchers to collaborate globally, accelerating the path to groundbreaking discoveries.

Benefits

  • Faster Drug Discovery: GenAI acts as a powerful tool, rapidly analyzing vast chemical landscapes. It identifies potential drug candidates and minimizes potential side effects with unprecedented speed, leading to faster development of treatments.
  • Enhanced Collaboration: PQC encryption allows researchers to securely share sensitive data. This fosters global collaboration, accelerating innovation and bringing us closer to achieving medical breakthroughs.
  • Future-Proof Security: Dynamic encryption keys and PQC algorithms ensure the protection of valuable intellectual property from cyberattacks, even from future threats posed by quantum computers and advanced AI.
  • Foundational Cryptography: GenAI and Machine Learning (ML) will become the foundation of secure and adaptable communication systems, giving businesses and governments more control over their cryptography.
  • Zero-Trust Framework: The transition to the post-quantum world is creating a secure, adaptable, and identity-based communication network. This foundation paves the way for a more secure digital landscape.

Challenges

  • GenAI Maturity: While promising, GenAI models are still under development and can generate inaccurate or misleading results. Refining these models requires ongoing research and development to ensure accurate and reliable output.
  • PQC Integration: Integrating PQC algorithms into existing systems can be complex and requires careful planning and testing. This process demands expertise and a strategic approach. NIST is delivering standardized post-quantum algorithms (expected by summer 2024); a minimal key-encapsulation sketch follows this list.
  • Standardization: As PQC technology is still evolving, standardization of algorithms and protocols is crucial for seamless adoption. This would ensure that everyone is using compatible systems.
  • Next-Generation Attacks: Previous cryptography standards didn’t have to contend with AI-powered attacks. Defending against these new attacks will necessitate the use of AI in encryption and key management, creating an evolving landscape.
  • Orchestration: Cryptography is embedded in almost every electronic device. Managing this requires an orchestration platform that can efficiently manage, monitor, and update encryption across all endpoints.
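To ground the integration point above, here is a minimal key-encapsulation sketch using the open-source liboqs-python bindings. Treat it as an assumption-laden illustration: the package, the `Kyber512` algorithm name, and the exact API reflect liboqs releases available around the time of writing; newer releases expose the NIST-standardized ML-KEM names instead.

```python
# pip install liboqs-python  (Python bindings for the Open Quantum Safe liboqs library)
import oqs

KEM_ALG = "Kyber512"  # older liboqs naming; newer releases use "ML-KEM-512"

with oqs.KeyEncapsulation(KEM_ALG) as receiver:
    public_key = receiver.generate_keypair()  # receiver publishes this key

    with oqs.KeyEncapsulation(KEM_ALG) as sender:
        # Sender derives a shared secret and a ciphertext from the public key.
        ciphertext, secret_sender = sender.encap_secret(public_key)

    # Receiver recovers the same shared secret from the ciphertext.
    secret_receiver = receiver.decap_secret(ciphertext)
    assert secret_sender == secret_receiver
    print("Shared secret established:", secret_receiver.hex()[:32], "...")
```

The shared secret would then seed a symmetric cipher for the actual data exchange, which is how PQC key establishment typically slots into an existing encryption pipeline.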

The Journey Continues: Embrace the Opportunities

The path forward isn’t paved with yellow bricks, but with lines of code, cutting-edge algorithms, and unwavering collaboration. While the challenges may seem daunting, the potential rewards are truly transformative. Here’s how we can embrace the opportunities:

  • Investing in the Future: Continued research and development are crucial. Funding for GenAI development and PQC integration is essential to ensure the accuracy and efficiency of these technologies.
  • Building a Collaborative Ecosystem: Fostering collaboration between researchers, developers, and policymakers is vital. Open-source platforms and knowledge-sharing initiatives will accelerate progress and innovation.
  • Equipping the Workforce: Education and training programs are necessary to equip the workforce with the skills needed to harness the power of GenAI and PQC. This will ensure a smooth transition and maximize the potential of these technologies.
  • A Proactive Approach to Security: Implementing PQC algorithms before quantum supremacy arrives is vital. A proactive approach minimizes the risk of the “Steal Now, Decrypt Later” strategy and safeguards sensitive data.

The convergence of GenAI and QC is not just a technological revolution; it’s a human one. It’s about harnessing our collective ingenuity to solve some of humanity’s most pressing challenges. By embracing the journey, with all its complexities and possibilities, we can pave the way for a golden future that is healthier, more secure, and brimming with innovation.

Source: cisco.com

Tuesday 10 October 2023

Building a Transparent Notification Center to Enable Customer Control

Personalization is critical to a guided customer experience. It helps build trust, foster relationships, and enables a deeper connection with customers.

At Cisco, we have been trying to help our customers along each step of their post-sale experience for nearly a decade. And as a key part of that experience, we want our customers to have more control over what communications they receive – a more intentional step towards the right message, right person, right time goal that we are all striving to achieve.

Before we could begin, we took a thorough inventory of what exactly the post-sale experience for customers today looked like.

Evaluating a disconnected customer experience


Over the years, we’ve built several programs where customers could sign up for various post-sale notifications to help guide them on their path to success – but they were fragmented and lacked transparency.

One of the customer pages from the legacy experience

For instance, a customer could access a link via an email where they could enroll or unenroll from a specific Cisco product architecture. There was no way to access the link again if the customer changed their mind after unenrolling. It also was not totally clear to the customer exactly what they were unenrolling from.

Similarly, a customer could enroll in a digital journey from a form on the main website, Cisco.com, but they could not see what else they were subscribed to. There were 6+ programs of this nature that evolved over the years – each designed to help provide the customer more control over their experience, but lacking a critical ingredient – transparency.

Thus began an initiative to build a Notification Center that was flexible, centralized, and personalized just for what a customer was eligible to receive. One tool for a customer to rule their post-sale experience.

Rooted in research


We built the Notification Center collaboratively with our customer research and design team. After evaluating all the different existing programs we had, we defined MVP parameters that would enable us to evolve the data model to support a more cohesive experience. We experimented with design, naming conventions, login experiences, and more. Each piece of feedback helped our design team iterate and ultimately finalize the MVP requirements so our Orchestration & Notification team could build out the digital experience.

The research as well as consultation with Forrester served as the foundation and guiding principles as we went through the development process. These principles included:

  • Build an experience that fosters trust and respects customer privacy and choices
  • Collect only data we can act on – do not collect unnecessary data
  • Design for scalability and flexibility, from MVP to future platforms
  • Design for consistency
  • A configurable UI that can be personalized based on customer eligibility for products and services
  • A flexible data model that can handle changing products and services
  • Strict adherence to Cisco data security and privacy standards

The new interface replaces two of our previous data collection customer experiences that were linked in our emails. Now customers have full access to:

  • View all subscriptions associated with their email
  • Activate or deactivate subscriptions for Renewals, Services, and Adopt Emails at the Use Case or Solution level
  • Continue to nominate contacts for respective subscriptions
  • Provide feedback on the experience directly to the experience design team

This new system supports all of our critical integrations with Snowflake, Salesforce Marketing Cloud (SFMC), Cisco Single Sign On, and it can be integrated across other channels as well.

Implementation Changes


This new approach to subscription management not only transformed the front-end customer experience, but it also changed the granularity of data we were collecting. To enable it, we designed an entirely new back-end process to support the front-end application. We also had to make some significant changes to the data model and our custom activities in SFMC.

The new experience design

  • The Notification Center UI, built on an SFMC Cloud Page, is supported by a Python-based Flask API that acts as an intermediary connecting the front end with the backend database (a minimal sketch of this pattern follows this list).
  • We made the strategic decision to use PostgreSQL as our backend database, hosted on Google Cloud Platform’s Cloud SQL instance, to replace SFMC’s native Data Extension for storing customer choices and Custom Activity log data. We chose it for its advanced data capabilities, indexing options, ACID compliance for data integrity, trigger support, and scalability.
  • The database shift significantly reduced our reliance on SFMC as a database. This change decreased the overall number of SFMC API calls from 18 to 13 and increased the Custom Activity processing efficiency from 52 to 70 requests per second while concurrently reducing latency from 60 seconds to approximately 13 seconds.
  • Digital journeys executed through SFMC previously had Cisco product architecture level entry criteria, meaning customers qualified for journeys if they bought a particular product. With the introduction of Notification Center data, we are mapping at the use case level, so we can build our journey segments based on the particular reason a customer bought a product. This transition has increased the granularity of our data while enabling a more personalized customer experience.
  • Additionally, we enabled a daily sync between the Notification Center customer database and Enterprise Use Case Eligibility data to ensure Notification Center UI displays content in accordance with each customer’s eligibility criteria for a specific use case.
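The sketch below illustrates the Flask-as-intermediary pattern referenced in the first bullet: the Cloud Page front end calls a small API that reads and writes subscription choices in PostgreSQL. Endpoint paths, table names, and columns are illustrative placeholders, not Cisco's actual schema.

```python
from flask import Flask, jsonify, request
import psycopg2

app = Flask(__name__)

def get_connection():
    # Connection details would point at the Cloud SQL (PostgreSQL) instance.
    return psycopg2.connect(host="localhost", dbname="notification_center",
                            user="app_user", password="change-me")

@app.route("/subscriptions/<email>", methods=["GET"])
def list_subscriptions(email):
    """Return all subscriptions associated with an email address."""
    with get_connection() as conn, conn.cursor() as cur:
        cur.execute("SELECT subscription_id, status FROM subscriptions WHERE email = %s", (email,))
        rows = cur.fetchall()
    return jsonify([{"subscription_id": r[0], "status": r[1]} for r in rows])

@app.route("/subscriptions/<email>", methods=["POST"])
def update_subscription(email):
    """Activate or deactivate a single subscription for this email address."""
    payload = request.get_json()
    with get_connection() as conn, conn.cursor() as cur:
        cur.execute(
            "UPDATE subscriptions SET status = %s WHERE email = %s AND subscription_id = %s",
            (payload["status"], email, payload["subscription_id"]),
        )
        updated = cur.rowcount
    return jsonify({"updated": updated})

if __name__ == "__main__":
    app.run(port=8080)
```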

Source: cisco.com

Thursday 30 March 2023

Failing Forward – What We Learned at Cisco from a “Failed” Digital Orchestration Pilot


You speak to a customer representative, and they tell you one thing.

You log into your digital account and see another.

You receive an email from the same company that tells an entirely different story.

At Cisco, we have been working to identify these friction points and evaluating how we can orchestrate a more seamless experience—transforming the customer, partner, and seller experience to be prescriptive, helpful – and, most importantly, simple. This is not an easy task when working in the complexity of environments, technologies, and client spaces that Cisco does business in, but it is not insurmountable.

We just closed out a year-long pilot of an industry-leading orchestration vendor, and by all measures – it failed. In The Lean Startup, Eric Ries writes, “if you cannot fail, you cannot learn.” I fully subscribe to this perspective. If you are not willing to experiment, to try, to fail, and to evaluate your learnings, you only repeat what you know. You do not grow. You do not innovate. You need to be willing to dare to fail, and if you do, to try to fail forward.

So, while we did not renew the contract, we did continue down our orchestration journey equipped with a year’s worth of learnings and newly refined direction on how to tackle our initiatives.

Our Digital Orchestration Goals


We started our pilot with four key orchestration use cases:

1. Seamlessly connect prescriptive actions across channels to our sellers, partners, and customers.
2. Pause and resume a digital email journey based on triggers from other channels.
3. Connect analytics across the multichannel customer journey.
4. Easily integrate data science to branch and personalize the customer journey.

Let’s dive a bit deeper into each. We’ll look at the use case, the challenges we encountered, and the steps forward we are taking.

Use Case #1: Seamlessly connect prescriptive actions across channels to our sellers, partners, and customers.


Today we process and deliver business-defined prescriptive actions to our customer success representatives and partners when we have digitally identified adoption barriers in our customer’s deployment and usage of our SaaS products.

In our legacy state, we were executing a series of complex SQL queries in Salesforce Marketing Cloud’s Automation Studio to join multiple data sets and output the specific actions a customer needs. Then, using Marketing Cloud Connect, we wrote the output to the task object in Salesforce CRM to generate actions in a customer success agent’s queue. After this action is written to the task object, we picked up the log in Snowflake, applied additional filtering logic and wrote actions to our Cisco partner portal – Lifecycle Advantage, which is hosted on AWS.

There are several key issues with this workflow:

◉ Salesforce Marketing Cloud is not meant to be used as an ETL platform; we were already encountering timeout issues.
◉ The partner actions were dependent on the seller processing, so it introduced complexity if we ever wanted to pause one workflow while maintaining the other.
◉ The development process was complex, and it was difficult to introduce new recommended actions or to layer on additional channels.
◉ There was no feedback loop between channels, so it was not possible for a customer success representative to see if a partner had taken action or not, and vice versa.

Thus, we brought in an orchestration platform – a place where we can connect multiple data sources through APIs, centralize processing logic, and write the output to activation channels. Pretty quickly in our implementation, though, we encountered challenges with the orchestration platform.

The Challenges

◉ The complexity of the joins in our queries could not be supported by the orchestration platform, so we had to preprocess the actions before they entered the platform and then they could be routed to their respective activation channels. This was our first pivot. In our technical analysis of the platform, the vendor assured us that our queries could be supported in the platform, but in actual practice, that proved woefully inaccurate. So, we migrated the most complex processing to Google Cloud Platform (GCP) and only left simple logic in the orchestration platform to identify which action a customer required and write that to the correct activation channel.
◉ The user interface abstracted parts of the code, creating dependencies on an external vendor. We spent considerable time trying to decipher what went wrong via trial and error without access to proper logs.
◉ The connectors were highly specific and required vendor support to set up, modify, and troubleshoot.

Our Next Step Forward

These three challenges forced us to think differently. Our goal was to centralize processing logic and connect to data sources as well as activation channels. We were already leveraging GCP for preprocessing, so we migrated the remainder of the queries to GCP. In order to solve for our need to manage APIs to enable data consumption and channel activation, we turned to Mulesoft. The combination of GCP and Mulesoft helped us achieve our first orchestration goal while giving us full visibility to the end-to-end process for implementation and support.

Orchestration Architecture

Use Case #2: Pause and resume a digital email journey based on triggers from other channels.


We focused on attempting to pause an email journey in a Marketing Automation Platform (Salesforce Marketing Cloud or Eloqua) if a customer had a mid-to-high severity Technical Assistance Center (TAC) Case open for that product.

Again, we set out to do this using the orchestration platform. In this scenario, we needed to pause multiple digital journeys from a single set of processing logic in the platform.

The Challenge

We did determine that we could send the pause/resume trigger from the orchestration platform, but it required setting up a one-to-one match of journey canvases in the orchestration platform to journeys that we might want to pause in the marketing automation platform. The use of the orchestration platform actually introduced more complexity to the workflow than managing it ourselves.

Our Next Step Forward

Again, we looked at the known challenge and the tools in our toolbox. We determined that if we set up the processing logic in GCP, we could evaluate all journeys from a single query and send the pause trigger to all relevant canvases in the marketing automation platform – a much more scalable structure to support.
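For illustration, here is a rough sketch of what firing that resume trigger could look like from the GCP side, assuming the standard Salesforce Marketing Cloud REST pattern for Journey Builder API events (an OAuth token request followed by a POST to /interaction/v1/events). The subdomain, keys, and contact values are placeholders.

```python
import requests

AUTH_BASE = "https://YOUR_SUBDOMAIN.auth.marketingcloudapis.com"
REST_BASE = "https://YOUR_SUBDOMAIN.rest.marketingcloudapis.com"

def get_access_token(client_id: str, client_secret: str) -> str:
    """OAuth2 client-credentials flow against the SFMC auth endpoint."""
    resp = requests.post(f"{AUTH_BASE}/v2/token", json={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
    })
    resp.raise_for_status()
    return resp.json()["access_token"]

def fire_resume_event(token: str, contact_key: str, event_definition_key: str) -> dict:
    """Fire the API event that a 'Wait Until Event' activity is listening for."""
    resp = requests.post(
        f"{REST_BASE}/interaction/v1/events",
        headers={"Authorization": f"Bearer {token}"},
        json={
            "ContactKey": contact_key,
            "EventDefinitionKey": event_definition_key,
            "Data": {"resumeReason": "TAC case closed"},
        },
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    token = get_access_token("MY_CLIENT_ID", "MY_CLIENT_SECRET")
    # In practice the contact keys would come from the single GCP query described above.
    for contact_key in ["0031234567890ABC"]:
        print(fire_resume_event(token, contact_key, "APIEvent-resume-renewals-journey"))
```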

Sample of Wait Until Event used in Journey Builder

Wait Until API Configuration

Another strike against the platform, but another victory in forcing a new way of thinking about a problem and finding a solution we could support with our existing tech stack. We also expect the methodology we established to be leveraged for other types of decisioning such as journey prioritization, journey acceleration, or pausing a journey when an adoption barrier is identified and a recommended action intervention is initiated.

Use Case #3: Connect analytics across the multichannel customer journey.


We execute journeys across multiple channels. For instance, we may send a renewal notification email series, show a personalized renewal banner on Cisco.com for users of that company with an upcoming renewal, and enable a self-service renewal process on renew.cisco.com. We collect and analyze metrics for each channel, but it is difficult to show how a customer or account interacted with each digital entity across their entire experience.

Orchestration platforms offer analytics views that display Sankey diagrams so journey strategists can visually review how customers engage across channels to evaluate drop off points or particularly critical engagements for optimization opportunities.

Sample of a Sankey Diagram

The Challenge

◉ As we set out to do this, we learned that the largest blocker to unifying this data is not a challenge an orchestration platform innately solves simply by executing the campaigns through its platform. The largest blocker is that each channel uses different identifiers for the customer. Email journeys use email address, web personalization uses cookies associated at an account level, and the e-commerce experience uses user ID login. The root of this issue is the lack of a unique identifier that can be threaded across channels.
◉ Additionally, we discovered that our analytics and metrics team had existing gaps in attribution reporting for sites behind SSO login, such as renew.cisco.com.
◉ Finally, since many teams at Cisco are driving web traffic to Cisco.com, we saw a large inconsistency with how different teams were tagging (and not tagging) their respective web campaigns. To be able to achieve a true view of the customer journey end to end, we would need to adopt a common language for tagging and tracking our campaigns across business units at Cisco.

Our Next Step Forward

Our team began the process to adopt the same tagging and tracking hierarchy and system that our marketing organization uses for their campaigns. This will allow our teams to bridge the gap between a customer’s pre-purchase and post-purchase journeys at Cisco—enabling a more cohesive customer experience.

Next, we needed to tackle the data threading. Here we identified what mapping tables existed (and where) to be able to map different campaign data to a single data hierarchy. For this particular renewals example, we needed to tackle three different data hierarchies:

1. Party ID associated with a unique physical location for a customer who has purchased from Cisco
2. Web cookie ID
3. Cisco login ID

Data mapping exercise for Customer Journey Analytics

With the introduction of consistent, cross Cisco-BU tracking IDs in our Cisco.com web data, we will map a Cisco login ID back to a web cookie ID to fill in some of the web attribution gaps we see on sites like renew.cisco.com after a user logs in with SSO.
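A minimal sketch of that threading exercise, using pandas and invented mapping tables purely for illustration (the real tables, keys, and column names differ):

```python
import pandas as pd

# Illustrative mapping tables; real sources would be the party, web analytics, and SSO systems.
email_to_party = pd.DataFrame({"email": ["ops@example.com"], "party_id": ["PARTY-001"]})
cookie_to_party = pd.DataFrame({"web_cookie_id": ["ck_98f2"], "party_id": ["PARTY-001"]})
login_to_cookie = pd.DataFrame({"cisco_login_id": ["jdoe"], "web_cookie_id": ["ck_98f2"]})

# Thread the identifiers: email journey -> party -> web cookie -> SSO login.
threaded = (
    email_to_party
    .merge(cookie_to_party, on="party_id", how="left")
    .merge(login_to_cookie, on="web_cookie_id", how="left")
)
print(threaded)  # one row per customer linking email, party_id, web_cookie_id, cisco_login_id
```

Once a table like this exists, every channel's engagement data can be keyed back to a single customer record, which is what makes the cross-channel journey views described below possible.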

Once we had established that level of data threading, we could develop our own Sankey diagrams using our existing Tableau platform for Customer Journey Analytics. Additionally, leveraging our existing tech stack helps limit the number of reporting platforms used to ensure better metrics consistency and easier maintenance.

Use Case #4: Easily integrate data science to branch and personalize the customer journey.


We wanted to explore how we can take the output of a data science model and pivot a journey to provide a more personalized, guided experience for that customer. For instance, let’s look at our customer’s renewal journey. Today, they receive a four-touchpoint journey reminding them to renew. Customers can also open a chat or have a representative call or email them for additional support. Ultimately, the journey is the same for a customer regardless of their likelihood to renew. We have, however, a churn risk model that could be leveraged to modify the experience based on high, medium, or low risk of churn.

So, if a customer with an upcoming renewal had a high risk of churn, we could trigger a prescriptive action to escalate to a human for engagement, and we could also personalize the email with a more urgent message for that user. A customer with a low risk of churn, on the other hand, could have an upsell opportunity woven into their notification, or we could route the low-risk customers into advocacy campaigns.

The goals of this use case were primarily:

1. Leverage the output of a data science model to personalize the customer’s experience.
2. Pivot experiences from digital to human escalation based on data triggers.
3. Provide context to help customer agents understand the opportunity and better engage the customer to drive the renewal.

The Challenge

This was actually a rather natural fit for an orchestration platform. The challenge we encountered here was the data refresh timing. We needed to refresh the renewals data to be processed by the churn risk model and align that with the timing of the triggered email journeys. Our renewals data was refreshed at the beginning of every month, but we held our sends until the end of the month to allow our partners some time to review and modify their customers’ data prior to sending. Our orchestration platform would only process new, incremental data and overwrite based on a pre-identified primary key (this improved system processing by not overwriting all data with every refresh).

To get around this issue, our vendor would create a brand new view of the table prior to our triggered send so that all data was newly processed (not just any new or updated records). Not only did this create a vendor dependency for our journeys, but it also introduced potential quality assurance issues by requiring a pre-launch update of our data table sources for our production journeys.

Our Next Step Forward

One question we kept asking ourselves as we struggled to make this use case work with the orchestration platform—were we overcomplicating things? The two orchestration platform outputs of our attrition model use case were to:

1. Customize the journey content for a user depending on their risk of attrition.
2. Create a human touchpoint in our digital renewal journey for those with a high attrition risk.

For number one, we could actually achieve that using dynamic content modules within Salesforce Marketing Cloud if we simply added a “risk of attrition” field to our renewals data extension and created dynamic content modules for low, medium, and high risk of attrition values. Done!

For number two, doesn’t that sound sort of familiar? It should! It’s the same problem we wanted to solve in our first use case for prescriptive calls to action. Because we already worked to create a new architecture for scaling our recommended actions across multiple channels and audiences, we could work to add a branch for an “attrition risk” alert to be sent to our Cisco Renewals Managers and partners based on our data science model. A feedback loop could even be added to collect data on why a customer may not choose to renew after this human connection is made.

Finding Success


At the end of our one-year pilot, we had been forced to think about the tactics to achieve our goals very differently. Yes, we had deemed the pilot a failure – but how do we fail forward? As we encountered each challenge, we took a step back and evaluated what we learned and how we could use that to achieve our goals.

Ultimately, we figured out new ways to leverage our existing systems to not only achieve our core goals but also enable us to have end-to-end visibility of our code so we can set up the processing, refreshes, and connections exactly how our business requires.

Now – we’re applying each of these learnings. We are rolling out our core use cases as capabilities in our existing architecture, building an orchestration inventory that can be leveraged across the company – a giant step towards success for us and for our customers’ experience. The outcome was not what we expected, but each step of the process helped propel us toward the right solutions.

Source: cisco.com

Monday 8 August 2022

Operationalizing Objectives to Outcomes


As part of our digital transformation, my Cisco colleagues and I were getting trained on business agility in our ONEx organization. Any transformation needs an effective way to measure the success at the end and throughout, and as part of our initiative, I could see there was enough awareness and emphasis given to metrics and measurements.

The training also addressed some points from the book “Measure What Matters,” which piqued my curiosity and inspired me to start reading it. It is a fantastic book about the origin of the Objectives and Key Results (OKR) concept and how companies have leveraged the framework. I wanted to share a bit here about how Cisco also embraces this framework – and more – in our organization, in a slightly customized and enhanced way, and how it can be extended further.

Finding Middle Ground between Vision, Strategy, and Execution

Although the OKR framework has generated more interest in recent decades, goals and metrics themselves have long been the foundation any company uses to identify, set, and achieve its targets. As with technology, our approach to goals and metrics has also evolved over time, namely to include a couple of key concepts: MBO (Management by Objectives) and VSE (Vision, Strategy & Execution), along with its extension VSEM, which adds Metrics.

Vision

The Vision represents the true north star of what the company wants to achieve. If we time-box it, perhaps 3 to 5 years or beyond, the Vision does not change often unless the company goes through a major transformation or change of business. At an organization or function level, however, it could change a bit while still aligning to the overall company vision. And, as you can imagine, there is still a healthy internal debate about whether one should have ONE single vision for all or a vision at each of the lower functional levels – and different companies handle it in different ways.

Strategy

While Vision is a starting point, we need other elements to take it further. Strategy is the next level down from Vision – how you plan to accomplish the vision. It could involve multiple levers (initiatives, methods, or ways) to achieve the vision: a strategy, approach, or means to plan for its execution and, finally, deliver the desired outcome or results.

Execution

If Vision is the desired outcome, and Strategy is the big plan, then Execution is the detailed plan. The key to Execution is measurement, and thus it is often broken into smaller chunks – goals or objectives – which are easier to accomplish and show progress.

Finding Meaningful Measurements

In the process of transforming our operations I’ve found several things to be true, and helpful, during this endeavor:

1. As Peter Drucker said, “What cannot be measured, cannot be improved,” but even before improving upon a thing, identifying and establishing the right set of metrics is key for any goal. Drucker also observed, “A manager should be able to measure the performance and results against a goal.” However, truly effective organizations must not limit measurements to the management level, but instead, equip employees at every level to identify and track meaningful metrics. These metrics could be milestones or KPIs and can be annual, quarterly, or even monthly. Some of these metrics could live in multiple systems (say ERP, CRM, or ITSM) or Project Portfolio Management tools. The goals and objectives can be (and in some cases should be) inherited either vertically or across the organization, or cross-functionally beyond the organization for shared goals.

2. When employing new measurement metrics within a company, the ideal scenario would be to integrate, automate, and bring all of these metrics into one single dashboard. A one-stop shop for metrics viewing simplifies the process, ensuring that there is minimal manual work involved in updating these metrics periodically. Several SaaS solutions provide APIs that can be used to easily integrate and retrieve the needed metrics and, based on a set threshold, can even provide indicators about whether metrics have been achieved and communicate that critical information in real time to impacted teams.

3. Although Goals & Results could be reviewed separately from employee performance review discussions, the ideal would be to review them together.

4. WHAT was achieved should be equally evaluated with HOW it was achieved. Equally important to the Vision are the types of behaviors that were exhibited to accomplish these results, and they should be reviewed to ensure that we understand and agree with the methods and the values represented in the achievement.

5. It’s critical that metrics and measurements are looked at holistically and together. Operationalization of the entire framework, process, or activity makes it efficient for the organization, but defining and setting meaningful metrics cannot be a one-time activity. Putting a structure in place and defining these annually is a good start, but this is just the beginning – goals need to be measured, reviewed, revisited, and adjusted as needed.


Operationalization of the OKR framework can include various elements:

1. Conducting reviews at Initiative, Program, and Project level – leveraging metrics from the Portfolio Management and other IT Systems/Tools

2. Organizational health metrics from various sources

3. Ongoing operational reviews (RtB or Run your Business) – both IT (ideal to do weekly, monthly, and quarterly) and Business Reviews (ideal is Quarterly)

Among all of the observations I’ve made through this process, one of the most critical is that information about meaningful metrics cannot be created and then kept hidden away. Instead, it needs to be published centrally, so that anyone can check on the goals of their colleagues and leaders at any point in time. This not only builds transparency and trust but also avoids duplication of effort when overlapping goals are found.

We are still in the process of creating a more mature, sophisticated practice around our internal OKRs, and in parallel, my colleagues across Cisco are also applying metrics to inform smarter, more efficient operations within our customer organizations.

For those who want to dig into the topic even more deeply, click here to learn more about how Cisco’s IoT practice is using metrics as a powerful tool in our customers’ digital transformation.

On that note, how is your team doing it? What can you share about what it takes to set and achieve measurable goals in your organization’s digital transformation? 

Source: cisco.com

Thursday 24 March 2022

Why Automation Will Unlock The Power of AI in Networking (Part 1)


You have probably heard the old adage “Correlation does not imply causation.” This idea that one cannot deduce a causal relationship between two events merely because they occur in association has a cool Latin name: cum hoc ergo propter hoc (“with this, therefore because of this”), which hints at the fact that this adage is even older than you might think.

What most people don’t know is that all the cool deep learning algorithms out there actually fall prey to this fallacy. No matter how fancy they are, these algorithms merely rely on association, but they have no common sense (which can be thought of as some kind of causal model of the world).

In this article, we will explore a few key ideas around the topics of correlation and causality, and more importantly, why you should care about this and how automation can help us in this regard!

Correlation by chance

If you have an interest in data analytics or statistics, you have probably come across the concept of spurious correlations. This term was coined by the famous statistician Karl Pearson in the late 19th century, but it has recently been popularized by the Spurious Correlations website (and book) by Tyler Vigen, which offers many examples such as this one:

Here we observe that the number of non-commercial space launches in the world happens to match almost perfectly the number of sociology doctorates awarded in the US every year (in terms of relative variation, not in absolute value). These examples are of course meant as jokes, and this makes us laugh because it goes against common sense. There isn’t any connection between space launches and sociology doctorates, so it is pretty clear that something is wrong here.

Figure: Worldwide non-commercial space launches vs. sociology doctorates awarded in the US

Now, examples such as this one are not exactly what Karl Pearson had in mind when he coined the term, because they are the result of chance rather than a common cause. Instead, we are dealing with a problem of statistical significance: although the correlation coefficient is nearly 79%, this is based only on 13 data points for each series, which makes the possibility of correlation by chance very real. Actually, statisticians have designed tools to compute the probability that two completely independent processes (such as space launches and sociology doctorates) produce data that have a correlation at least as extreme as a given value: statistical testing (in which case this probability is called a p-value). 

I applied a statistical test to the above example (see this notebook if you want to test it yourself and see other examples), and I obtained a p-value of 0.13%. I also tested this result empirically by generating one million random time-series and counting how many of them had a correlation with the number of worldwide non-commercial space launches higher than 78.9%. No surprises here: I get roughly 0.13% of my trials falling in that category. This is summarized in the figure below:

Figure: Share of one million random time-series whose correlation with the space-launch series exceeds 78.9%
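The mechanics of both checks are easy to reproduce. The sketch below uses scipy's Pearson test for the analytic p-value and a small Monte Carlo loop for the empirical one; since the actual launch and doctorate series are not reproduced here, two synthetic series stand in for them.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(seed=0)
n_points = 13  # same series length as the launches/doctorates example

# Stand-in series (the real launch and doctorate counts are not reproduced here).
series_a = rng.normal(size=n_points)
series_b = 0.8 * series_a + rng.normal(scale=0.5, size=n_points)

r, p_value = pearsonr(series_a, series_b)
print(f"correlation = {r:.3f}, analytic p-value = {p_value:.4f}")

# Empirical check: how often does an independent random series correlate at least this strongly?
n_trials = 10_000
hits = sum(
    abs(pearsonr(series_a, rng.normal(size=n_points))[0]) >= abs(r)
    for _ in range(n_trials)
)
print(f"empirical p-value ≈ {hits / n_trials:.4f}")
```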

One important lesson here is: by searching long enough in a large dataset, you will always find some nicely correlated examples. By no means should you conclude that there is some actual relation between them, let alone some causation!

Correlation due to common causes


Now, you can be in a situation where not only is the correlation high, but the sample count is also high, and statistical testing will be of no help (that is, in the above example, you would never be able to generate a random time-series more correlated than your real data). Yet, you still cannot conclude that you are in the presence of real causation!

To illustrate this fact vividly, consider the following (made up) example featuring two processes: process A generates a time-series and process B generates discrete events. A realization of these processes is shown below:

Figure: A realization of the two processes – time-series A building up over time, punctuated by discrete events B

We observe a systematic build-up of time-series A, followed by an event B. For the sake of the illustration, let us assume that we have a very large dataset of such time-series and event data, and they all look pretty much like my diagram. The above example has a correlation of 27.62% and an infinitesimal p-value, which rules out correlation by chance. The build-up of A happens prior to the event B, so it seems clear that it is a cause of B, right?

But what if I told you that A represents the number of people observed on a platform in a train station and that B corresponds to the arrival of a train on this platform? Then it all makes sense of course. Passengers accumulate on the platform, the train arrives, and most passengers hop on the train. Does that mean that the passengers cause the train to arrive? Of course not! These processes do not cause each other, but they share a common cause: the timetable!
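A quick simulation of this made-up scenario shows how a shared timetable produces a real, statistically significant correlation without any causal link between passengers and trains; all numbers below are invented for illustration.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(seed=42)
minutes = 8 * 60          # an 8-hour window, one sample per minute
headway = 15              # the common cause: a train every 15 minutes

# Process B: scheduled train arrivals as 0/1 events.
train_arrivals = np.zeros(minutes)
train_arrivals[headway - 1::headway] = 1

# Process A: passengers accumulate on the platform and board when a train arrives.
passengers = np.zeros(minutes)
count = 0
for t in range(minutes):
    count += rng.poisson(2.0)           # new passengers walk onto the platform
    passengers[t] = count
    if train_arrivals[t]:
        count = int(count * 0.05)       # almost everyone boards

r, p = pearsonr(passengers, train_arrivals)
print(f"correlation = {r:.3f}, p-value = {p:.2e}")
# The correlation is real and significant, yet passengers do not cause trains to arrive:
# both series are driven by the shared timetable.
```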

Source: cisco.com