Tuesday 31 October 2023

How to Begin Observability at the Data Source

More data does not mean better observability


If you’re familiar with observability, you know most teams have a “data problem.” That is, observability data has exploded as teams have modernized their application stacks and embraced microservices architectures.

If you had unlimited storage, it’d be feasible to ingest all your metrics, events, logs, and traces (MELT data) in a centralized observability platform. However, that is simply not the case. Instead, teams index large volumes of data – some portions being regularly used and others not. Then, teams have to decide whether datasets are worth keeping or should be discarded altogether.

For the past few months I’ve been playing with a tool called Edge Delta to see how it might help IT and DevOps teams to solve this problem by providing a new way to collect, transform, and route your data before it is indexed in a downstream platform, like AppDynamics or Cisco Full-Stack Observability.

What is Edge Delta?


You can use Edge Delta to create observability pipelines or analyze your data from their backend. Typically, observability starts by shipping all your raw data to a central service before you begin analysis. In essence, Edge Delta helps you flip this model on its head. Said another way, Edge Delta analyzes your data as it’s created at the source. From there, you can create observability pipelines that route processed data and lightweight analytics to your observability platform.

Why might this approach be advantageous? Today, teams don’t have a ton of clarity into their data before it’s ingested in an observability platform. Nor do they have control over how that data is treated or flexibility over where the data lives.

By pushing data processing upstream, Edge Delta enables a new kind of architecture where teams can have…

◉ Transparency into their data: “How valuable is this dataset, and how do we use it?”
◉ Controls to drive usability: “What is the ideal shape of that data?”
◉ Flexibility to route processed data anywhere: “Do we need this data in our observability platform for real-time analysis, or archive storage for compliance?”

The net benefit here is that you’re allocating your resources towards the right data in its optimal shape and location based on your use case.

How I used Edge Delta


Over the past few weeks, I’ve explored a couple different use cases with Edge Delta.

Analyzing NGINX log data from the Edge Delta interface

First, I wanted to use the Edge Delta console to analyze my log data. To do so, I deployed the Edge Delta agent on a Kubernetes cluster running NGINX. From here, I sent both valid and invalid HTTP requests to generate log data and observed the output via Edge Delta’s pre-built dashboards.

Among the most useful screens was “Patterns.” This feature clusters repetitive log lines so I can easily interpret each unique log message, understand how frequently it occurs, and decide whether I should investigate it further.
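Edge Delta does not document its clustering algorithm here, but the core idea of log patterning can be sketched by masking the variable tokens in each line and counting the resulting templates. The regexes and placeholder names below are illustrative, not Edge Delta’s actual implementation:

```python
import re
from collections import Counter

def to_pattern(line: str) -> str:
    """Collapse variable tokens (IPs, hex IDs, numbers) into placeholders."""
    line = re.sub(r"\b\d{1,3}(?:\.\d{1,3}){3}\b", "<ip>", line)
    line = re.sub(r"\b0x[0-9a-fA-F]+\b", "<hex>", line)
    line = re.sub(r"\b\d+\b", "<num>", line)
    return line

logs = [
    '192.168.0.5 - "GET /index.html" 200',
    '10.0.0.7 - "GET /index.html" 200',
    '192.168.0.5 - "GET /missing" 404',
]

# Two of the three lines collapse to the same pattern.
patterns = Counter(to_pattern(line) for line in logs)
for pattern, count in patterns.most_common():
    print(count, pattern)
```

Grouping by template like this is what makes a noisy log stream readable: three raw lines become two patterns with counts attached.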

Edge Delta’s Patterns feature makes it easy to interpret data by clustering together repetitive log messages and provides analytics around each event.

Creating pipelines with Syslog data

Second, I wanted to manipulate data in flight using Edge Delta observability pipelines. Here, I installed the Edge Delta agent on my Mac. Then I exported Syslog data from my Cisco ISR1100 to the Mac.

From within the Edge Delta interface, I configured the agent to listen on the appropriate TCP and UDP ports. Now, I can apply processor nodes to transform (and otherwise manipulate) my data before it hits my downstream analytics platform.

Specifically, I applied the following processors:

◉ Mask node to obfuscate sensitive data. Here, I replaced social security numbers in my log data with the string ‘REDACTED’.
◉ Regex filter node which passes along or discards data based on the regex pattern. For this example, I wanted to exclude DEBUG level logs from downstream storage.
◉ Log to metric node for extracting metrics from my log data. The metrics can be ingested downstream in lieu of raw data to support real-time monitoring use cases. I captured metrics to track the rate of errors, exceptions, and negative sentiment logs.
◉ Log to pattern node which I alluded to in the section above. This creates “patterns” from my data by grouping together similar loglines for easier interpretation and less noise.
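Conceptually, the mask, regex filter, and log-to-metric processors form a small per-line pipeline. This is a minimal sketch of that flow in plain Python, with function names and metric keys of my own choosing, not Edge Delta’s node configuration:

```python
import re
from collections import Counter

SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def mask(line: str) -> str:
    # Mask node: obfuscate social security numbers before data leaves the host.
    return SSN.sub("REDACTED", line)

def keep(line: str) -> bool:
    # Regex filter node: discard DEBUG-level logs from downstream storage.
    return "DEBUG" not in line

metrics = Counter()

def log_to_metric(line: str) -> None:
    # Log-to-metric node: count error-like events instead of shipping raw lines.
    if re.search(r"ERROR|Exception", line):
        metrics["errors"] += 1

def pipeline(lines):
    out = []
    for line in lines:
        if not keep(line):
            continue          # dropped by the filter node
        line = mask(line)     # transformed by the mask node
        log_to_metric(line)   # counted by the log-to-metric node
        out.append(line)      # routed to the downstream destination
    return out

sample = [
    "INFO user ssn=123-45-6789 logged in",
    "DEBUG cache warm",
    "ERROR payment failed",
]
print(pipeline(sample), dict(metrics))
```

The ordering matters: filtering first avoids masking lines you are about to drop, and extracting metrics after masking ensures no sensitive values leak into metric labels.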

Through Edge Delta’s Pipelines interface, you can apply processors to your data and route it to different destinations.

For now, all of this is being routed to the Edge Delta backend. However, Edge Delta is vendor-agnostic, and I can route processed data to different destinations – like AppDynamics or Cisco Full-Stack Observability – in a matter of clicks.

Source: cisco.com

Saturday 28 October 2023

SD WAN solutions for utility Distribution Automation

Networks are expanding outside traditional office buildings and into industrial fixed and mobile use cases. This results in more devices being connected to the Internet and to data centers, as well as increased security exposure. IoT has moved traditional networking far beyond the carpeted spaces and into industries like fleets, oil and gas, energy and water utilities, and remote condition monitoring and control: basically anything that can establish a wide area connection. Moreover, these industrial networks are increasingly considered critical infrastructure. In response to this expansion, Cisco has ongoing innovations advancing the ways networks operate, and at the forefront of these trends is the way that SD-WAN solutions enable and support industrial use cases.

Cisco Catalyst SD-WAN is already an industry-leading software-defined WAN solution that enables enterprises and organizations to connect users to their applications securely. It provides a software overlay that runs over standard network transports, including MPLS, broadband, and Internet, to deliver applications and services. The overlay network supports on-premises deployments but also extends the organization’s network to Infrastructure as a Service (IaaS) and multi-cloud environments, thereby accelerating the shift to the cloud.

Most utilities are used to building large networks utilizing technologies such as Internet Protocol Security (IPsec) and Dynamic Multipoint Virtual Private Network (DMVPN) to encrypt critical communications, Multiprotocol Label Switching (MPLS) for the underlying transport network, and public or private cellular for remote sites with no other WAN connectivity. Catalyst SD-WAN brings these technologies together and enables automation to greatly simplify deployments.

Automation benefits:

  • Secure Zero Touch deployment of field gateways (i.e., no field staff required to configure a gateway)
  • Simple provisioning of end-to-end service VPNs to segment traffic (SCADA, CCTV, PMU, IP Telephony, etc.)
  • Templated configurations, making it easy to change configurations at scale and push them to gateways in the field
  • Application of unified security policies across a diverse range of remote sites and equipment
  • Managing multiple backhaul connectivity options at the gateway including private MPLS for critical SCADA traffic and cellular for backup and even internet-based connections for non-critical traffic, where appropriate
  • Lifecycle management of gateways (e.g., firmware updates, alarm monitoring and statistics)

Cisco SD-WAN Validated Design for Distribution Automation (DA)


SD-WAN has its origins as an enterprise solution, built around fixed edge routers of various performance capabilities and predictable enterprise traffic patterns. Utility networks present new challenges, especially when applied to distribution network use cases:

  • Connectivity to legacy serial devices not supporting Ethernet/IP communications (e.g., Modbus RTU, DNP3 over serial, IEC 101, or vendor proprietary)
  • Mobility needs for mobile assets to ensure resilient wide area connectivity
  • New WAN interfaces including dual 4G or 5G cellular, DSL, fiber or Ethernet
  • The use of NAT to allow fixed privately addressed equipment to communicate
  • Requirement to encrypt SCADA traffic across the wide area network
  • Applicable to both distribution substations and field area networks
  • Segregation of services via VPNs in flexible topologies (Hub & Spoke, or Meshed [Fully or Partial])
  • Intelligent traffic steering across multiple backhaul interfaces when needed (critical vs. non-critical traffic)


The Cisco SD-WAN solution can address several key Distribution Network use cases.

Cisco IoT Solutions have introduced a new Cisco Validated Design for an SD-WAN architecture addressing Distribution Automation use cases. It leverages the Cisco Catalyst IR1100 Rugged Series Routers, with flexible modular backhaul capabilities (DSL, fiber, Ethernet, 4G/5G, 450 MHz LTE), operating as SD-WAN controlled edge routers.


Along the distribution network feeders, the IR1101 should be positioned as a Distribution Automation gateway. It can be easily mounted within a DA device cabinet (e.g., a recloser or capacitor bank controller) and can be powered by the same DC supply (flexible 9-36 VDC input). It also has extended environmental capabilities to cope with variations in temperature, humidity, and vibration.

The new SD-WAN for Utility Distributed Automation Design Guide builds on other existing documents that describe in detail Cisco’s SD-WAN architecture and industrial IoT hardware offerings and shows how they can be combined to provide a scalable, secure network. The new Design Guide is focused on areas that are unique or at least emphasized by DA use cases in general. This document also has detailed configuration examples for many of the DA features.

Source: cisco.com

Thursday 26 October 2023

Driving API Security Forward: Protecting Vehicle-to-Cloud Communications


In the rapidly evolving automotive landscape, vehicles are no longer just machines that move us from point A to point B; they have transformed into complex, interconnected systems on wheels. With this transformation, API (Application Programming Interface) security has surged to the forefront of automotive design and engineering. APIs serve as critical gatekeepers, managing the flow of data between the vehicle’s Electronic Control Units (ECUs), external devices, and the cloud. But with great power comes great responsibility. If an API is compromised, the implications can range from privacy breaches to threats to passenger safety. Automotive API security, therefore, is not just a technological challenge; it’s a safety imperative. It’s a thrilling yet daunting frontier, where the lines between software engineering, cybersecurity, and automotive design blur, and where the stakes are high: ensuring the safe and secure transportation of millions, if not billions, of people around the world.

In parallel, the Internet of Vehicles (IoV) and Vehicle-to-Everything (V2X) continue to evolve. The IoV encompasses a range of technologies and applications, including connected cars with internet access, Vehicle-to-Vehicle (V2V) and Vehicle-to-Cloud (V2C) communication, autonomous vehicles, and advanced traffic management systems. The timeline for widespread adoption of these technologies is presently unclear and will depend on a variety of factors; however, adoption will likely be a gradual process.

Over time, the number of Electronic Control Units (ECUs) in vehicles has seen a significant increase, transforming modern vehicles into complex networks on wheels. This surge is attributed to advancements in vehicle technology and the increasing demand for innovative features. Today, luxury passenger vehicles may contain 100 or more ECUs. Another growing trend is the virtualization of ECUs, where a single physical ECU can run multiple virtual ECUs, each with its own operating system. This development is driven by the need for cost efficiency, consolidation, and the desire to isolate systems for safety and security purposes. For instance, a single ECU could host both an infotainment system running QNX and a telematics unit running on Linux.

ECUs run a variety of operating systems depending on the complexity of the tasks they perform. For tasks requiring real-time processing, such as engine control or ABS (anti-lock braking system) control, Real-Time Operating Systems (RTOS) built on AUTOSAR (Automotive Open System Architecture) standards are popular. These systems can handle strict timing constraints and guarantee the execution of tasks within a specific time frame. On the other hand, for infotainment systems and more complex systems requiring advanced user interfaces and connectivity, Linux-based operating systems like Automotive Grade Linux (AGL) or Android Automotive are common due to their flexibility, rich feature sets, and robust developer ecosystems. QNX, a commercial Unix-like real-time operating system, is also widely used in the automotive industry, notably for digital instrument clusters and infotainment systems due to its stability, reliability, and strong support for graphical interfaces.

The unique context of ECUs presents several distinct challenges regarding API security. Unlike traditional IT systems, many ECUs have to function in a highly constrained environment with limited computational resources and power, and often have to adhere to strict real-time requirements. This can make it difficult to implement robust security mechanisms, such as strong encryption or complex authentication protocols, which are computationally intensive. Furthermore, ECUs need to communicate with each other and with external devices or services securely. This often leads to compromises in vehicle network architecture where a high-complexity ECU acts as an Internet gateway that provides desirable properties such as communications security. On the other hand, in-vehicle components situated behind the gateway may communicate using methods that lack privacy, authentication, or integrity.

Modern Vehicle Architecture


ECUs, or Electronic Control Units, are embedded systems in automotive electronics that control one or more of the electrical systems or subsystems in a vehicle. These can include systems related to engine control, transmission control, braking, power steering, and others. ECUs are responsible for receiving data from various sensors, processing this data, and triggering the appropriate response, such as adjusting engine timing or deploying airbags.


DCUs, or Domain Control Units, are a relatively new development in automotive electronics, driven by the increasing complexity and interconnectivity of modern vehicles. A DCU controls a domain, which is a group of functions in a vehicle, such as the drivetrain, body electronics, or infotainment system. A DCU integrates several functions that were previously managed by individual ECUs. A DCU collects, processes, and disseminates data within its domain, serving as a central hub.

The shift towards DCUs can reduce the number of separate ECUs required, simplifying vehicle architecture and improving efficiency. However, it also necessitates more powerful and sophisticated hardware and software, as the DCU needs to manage multiple complex functions concurrently. This centralization can also increase the potential impact of any failures or security breaches, underscoring the importance of robust design, testing, and security measures.

Direct internet connectivity is usually restricted to only one or two Electronic Control Units (ECUs). These ECUs are typically part of the infotainment system or the telematics control unit, which require internet access to function. The internet connection is shared among these systems and possibly extended to other ECUs. The remaining ECUs typically communicate via an internal network like the CAN (Controller Area Network) bus or automotive ethernet, without requiring direct internet access.

The increasing complexity of vehicle systems, the growing number of ECUs, and pressure to bring cutting edge consumer features to market have led to an explosion in the number of APIs that need to be secured. This complexity is compounded by the long lifecycle of vehicles, requiring security to be maintained and updated over a decade or more, often without the regular connectivity that traditional IT systems enjoy. Finally, the critical safety implications of many ECU functions mean that API security issues can have direct and severe consequences for vehicle operation and passenger safety.

ECUs interact with cloud-hosted APIs to enable a variety of functionalities, such as real-time traffic updates, streaming entertainment, finding suitable charging stations and service centers, over-the-air software updates, remote diagnostics, telematics, usage based insurance, and infotainment services.

Open Season


In the fall of 2022, security researchers discovered and disclosed vulnerabilities affecting the APIs of a number of leading car manufacturers. The researchers were able to remotely access and control vehicle functions, including locking and unlocking, starting and stopping the engine, honking the horn and flashing the headlights. They were also able to locate vehicles using just the vehicle identification number or an email address. Other vulnerabilities included being able to access internal applications and execute code, as well as perform account takeovers and access sensitive customer data.

The research is historically significant because security researchers would traditionally avoid targeting production infrastructure without authorization (e.g., as part of a bug bounty program). Most researchers would also hesitate to make sweeping disclosures that do not pull punches, albeit responsibly. It seems the researchers were emboldened by recent revisions to the CFAA, and this activity may represent a new era of Open Season Bug Hunting.

The revised CFAA, announced in May of 2022, directs that good-faith security research should not be prosecuted. Further, “Computer security research is a key driver of improved cybersecurity,” and “The department has never been interested in prosecuting good-faith computer security research as a crime, and today’s announcement promotes cybersecurity by providing clarity for good-faith security researchers who root out vulnerabilities for the common good.”

These vulnerability classes would not surprise your typical cybersecurity professional; they are fairly pedestrian. Anyone familiar with the OWASP API Security Project will recognize the core issues at play. What may be surprising is how prevalent they are across different automotive organizations. It can be tempting to chalk this up to a lack of awareness or poor development practices, but the root causes are likely much more nuanced and not at all obvious.

Root Causes


Despite the considerable experience and skills possessed by Automotive OEMs, basic API security mistakes can still occur. This might seem counterintuitive given the advanced technical aptitude of their developers and their awareness of the associated risks. However, it’s essential to understand that, in complex and rapidly evolving technological environments, errors can easily creep in. In the whirlwind of innovation, deadlines, and productivity pressures, even seasoned developers might overlook some aspects of API security. Such issues can be compounded by factors like communication gaps, unclear responsibilities, or simply human error.

Development at scale can significantly amplify the risks associated with API security. As organizations grow, different teams and individuals often work concurrently on various aspects of an application, which can lead to a lack of uniformity in implementing security standards. Miscommunication or confusion about roles and responsibilities can result in security loopholes. For instance, one team may assume that another is responsible for implementing authentication or input validation, leading to vulnerabilities. Additionally, the context of service exposure, whether on the public internet or within a Virtual Private Cloud (VPC), necessitates different security controls and considerations. Yet, these nuances can be overlooked in large-scale operations. Moreover, the modern shift towards microservices architecture can also introduce API security issues. While microservices provide flexibility and scalability, they also increase the number of inter-service communication points. If these communication points, or APIs, are not adequately secured, the system’s trust boundaries can be breached, leading to unauthorized access or data breaches.

Automotive supply chains are incredibly complex. This is a result of the intricate network of suppliers involved in providing components and supporting infrastructure to OEMs. OEMs typically rely on tier-1 suppliers, who directly provide major components or systems, such as engines, transmissions, or electronics. However, tier-1 suppliers themselves depend on tier-2 suppliers for a wide range of smaller components and subsystems. This multi-tiered structure is necessary to meet the diverse requirements of modern vehicles. Each tier-1 supplier may have numerous tier-2 suppliers, leading to a vast and interconnected web of suppliers. This complexity can make it difficult to manage the cybersecurity requirements of APIs.

While leading vehicle cybersecurity standards like ISO/SAE 21434, UN ECE R155 and R156 cover a wide range of aspects related to securing vehicles, they do not specifically provide comprehensive guidance on securing vehicle APIs. These standards primarily focus on broader cybersecurity principles, risk management, secure development practices, and vehicle-level security measures. The absence of specific guidance on securing vehicle APIs can potentially lead to the introduction of vulnerabilities in vehicle APIs, as the focus may primarily be on broader vehicle security aspects rather than the specific challenges associated with API integration and communication.

Things to Avoid


Darren Shelcusky of Ford Motor Company explains that while many approaches to API security exist, not all prove to be effective within the context of a large multinational manufacturing company. For instance, playing cybersecurity “whack-a-mole,” where individual security threats are addressed as they pop up, is far from optimal. It can lead to inconsistent security posture and might miss systemic issues. Similarly, the “monitor everything” strategy can drown the security team in data, leading to signal-to-noise issues and an overwhelming number of false positives, making it challenging to identify genuine threats. Relying solely on policies and standards for API security, while important, is not sufficient unless these guidelines are seamlessly integrated into the development pipelines and workflows, ensuring their consistent application.

A strictly top-down approach, with stringent rules and fear of reprisal for non-compliance, may indeed ensure adherence to security protocols. However, this could alienate employees, stifle creativity, and lose valuable lessons learned from the ground-up. Additionally, over-reliance on governance for API security can prove to be inflexible and often incompatible with agile development methodologies, hindering rapid adaptation to evolving threats. Thus, an effective API security strategy requires a balanced, comprehensive, and integrated approach, combining the strengths of various methods and adapting them to the organization’s specific needs and context.

Winning Strategies


Cloud Gateways

Today, Cloud API Gateways play a vital role in securing APIs, acting as a protective barrier and control point for API-based communication. These gateways manage and control traffic between applications and their back-end services, performing functions such as request routing, composition, and protocol translation. From a security perspective, API Gateways often handle important tasks such as authentication and authorization, ensuring that only legitimate users or services can access the APIs. They can implement various authentication protocols like OAuth, OpenID Connect, or JWT (JSON Web Tokens). They can enforce rate limiting and throttling policies to protect against Denial-of-Service (DoS) attacks or excessive usage. API Gateways also typically provide basic communications security, ensuring the confidentiality and integrity of API calls. They can help detect and block malicious requests, such as SQL injection or Cross-Site Scripting (XSS) attacks. By centralizing these security mechanisms in the gateway, organizations can ensure a consistent security posture across all their APIs.
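One of the gateway responsibilities mentioned above, rate limiting, is commonly implemented as a token bucket per client. The sketch below is a generic illustration of the technique, not the implementation of any particular gateway product:

```python
import time

class TokenBucket:
    """Per-client token-bucket rate limiter, as a gateway might apply."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate            # tokens replenished per second
        self.capacity = burst       # maximum burst size
        self.tokens = float(burst)  # start with a full bucket
        self.last = time.monotonic()

    def allow(self) -> bool:
        # Refill tokens proportionally to elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, burst=2)  # 5 req/s steady state, bursts of 2
results = [bucket.allow() for _ in range(4)]
print(results)  # first two requests pass, the rest are throttled
```

Because the bucket refills continuously, a client that backs off regains capacity smoothly, which is why this scheme is favored over fixed-window counters for DoS protection.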

Cloud API gateways also assist organizations with API management, inventory, and documentation. These gateways provide a centralized platform for managing and securing APIs, allowing organizations to enforce authentication, authorization, rate limiting, and other security measures. They offer features for tracking and maintaining an inventory of all hosted APIs, providing a comprehensive view of the API landscape and facilitating better control over security measures, monitoring, and updates. Additionally, cloud API gateways often include built-in tools for generating and hosting API documentation, ensuring that developers and consumers have access to up-to-date and comprehensive information about API functionality, inputs, outputs, and security requirements. Some notable examples of cloud API gateways include Amazon API Gateway, Google Cloud Endpoints, and Azure API Management.

Authentication

In best-case scenarios, vehicles and cloud services mutually authenticate each other using robust methods that include some combination of digital certificates, token-based authentication, or challenge-response mechanisms. In the worst-case, they don’t perform any authentication at all. Unfortunately, in many cases, vehicle APIs rely on weak authentication mechanisms, such as a serial number being used to identify the vehicle.

Certificates

In certificate-based authentication, the vehicle presents a unique digital certificate issued by a trusted Certificate Authority (CA) to verify its identity to the cloud service. While certificate-based authentication provides robust security, it does come with a few drawbacks. First, certificate management can be complex and cumbersome, especially in large-scale environments like fleets of vehicles, as it involves issuing, renewing, and revoking certificates, often for thousands of devices. Second, setting up a secure and trusted CA to issue and validate certificates requires significant effort and expertise, and any compromise of the CA can have serious security implications.

Tokens

In token-based authentication, the vehicle includes a token (such as a JWT or OAuth token) in its requests once its identity has been confirmed by the cloud service. Token-based authentication, while beneficial in many ways, also comes with certain disadvantages. First, tokens, if not properly secured, can be intercepted during transmission or stolen from insecure storage, leading to unauthorized access. Second, tokens often have a set expiration time for security purposes, which means systems need to handle token refreshes, adding extra complexity. Lastly, token validation requires a connection to the authentication server, which could potentially impact system performance or lead to access issues if the server is unavailable or experiencing high traffic.
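The mechanics of signing and validating a token can be shown with a minimal HS256 JWT built from the standard library. Production systems should use a vetted library (e.g., PyJWT) rather than hand-rolled code; the key and claim values here are purely illustrative:

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data: bytes) -> str:
    # JWTs use unpadded URL-safe base64.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, key: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_jwt(token: str, key: bytes) -> dict:
    header, payload, sig = token.split(".")
    expected = b64url(
        hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    )
    # Constant-time comparison avoids timing side channels.
    if not hmac.compare_digest(sig, expected):
        raise ValueError("bad signature")
    padded = payload + "=" * (-len(payload) % 4)
    claims = json.loads(base64.urlsafe_b64decode(padded))
    # Enforce the expiration claim mentioned above.
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims

key = b"demo-shared-secret"  # illustrative only; never hard-code real keys
token = sign_jwt({"sub": "vehicle-001", "exp": time.time() + 300}, key)
print(verify_jwt(token, key)["sub"])
```

Note that verification here needs only the shared key, not a round trip to the issuer; the availability concern in the paragraph above arises when tokens must be refreshed or checked against a revocation service.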

mTLS

For further security, these methods can be used in conjunction with Mutual TLS (mTLS) where both the client (vehicle) and server (cloud) authenticate each other. These authentication mechanisms ensure secure, identity-verified communication between the vehicle and the cloud, a crucial aspect of modern connected vehicle technology.

Challenge / Response

Challenge-response authentication mechanisms are best implemented with the aid of a Hardware Security Module (HSM). This approach provides notable advantages including heightened security: the HSM provides a secure, tamper-resistant environment for storing the vehicle’s private keys, drastically reducing the risk of key exposure. In addition, the HSM can perform cryptographic operations internally, adding another layer of security by ensuring sensitive data is never exposed. Sadly, there are also potential downsides to this approach. HSMs can increase complexity throughout the vehicle lifecycle. Furthermore, HSMs also have to be properly managed and updated, requiring additional resources. Lastly, in a scenario where the HSM malfunctions or fails, the vehicle may be unable to authenticate, potentially leading to loss of access to essential services.
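The flow above can be simulated in a few lines: the server issues a random challenge, the HSM computes a keyed response internally, and the server checks it against its own copy of the key material. A real HSM would keep the key in tamper-resistant hardware and typically use asymmetric keys; the HMAC-based class below is a stand-in for illustration only:

```python
import hashlib
import hmac
import secrets

class SimulatedHsm:
    """Stand-in for a vehicle HSM: the key never leaves this object in normal use."""

    def __init__(self):
        self._key = secrets.token_bytes(32)  # provisioned at manufacture

    def respond(self, challenge: bytes) -> bytes:
        # Cryptographic operation performed "inside" the HSM.
        return hmac.new(self._key, challenge, hashlib.sha256).digest()

    def enrollment_key(self) -> bytes:
        # Simplification: in reality the server learns a public key or a
        # securely shared secret at provisioning time.
        return self._key

vehicle_hsm = SimulatedHsm()
server_key = vehicle_hsm.enrollment_key()  # shared during provisioning

# Server side: issue a fresh random challenge, then verify the response.
challenge = secrets.token_bytes(16)
response = vehicle_hsm.respond(challenge)
expected = hmac.new(server_key, challenge, hashlib.sha256).digest()
print(hmac.compare_digest(response, expected))  # True: vehicle authenticated
```

Because each challenge is random and single-use, a captured response cannot be replayed, which is the core advantage of challenge-response over static credentials.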

Hybrid Approaches

Hybrid approaches to authentication can also be effective in securing vehicle-to-cloud communications. For instance, a server could verify the authenticity of the client’s JSON Web Token (JWT), ensuring the identity and claims of the client. Simultaneously, the client can verify the server’s TLS certificate, providing assurance that it’s communicating with the genuine server and not a malicious entity. This multi-layered approach strengthens the security of the communication channel.

Another example hybrid approach could leverage an HSM-based challenge-response mechanism combined with JWTs. Initially, the vehicle uses its HSM to securely generate a response to a challenge from the cloud server, providing a high level of assurance for the initial authentication process. Once the vehicle is authenticated, the server issues a JWT, which the vehicle can use for subsequent authentication requests. This token-based approach is lightweight and scalable, making it efficient for ongoing communications. The combination of the high-security HSM challenge-response with the efficient JWT mechanism provides both strong security and operational efficiency.

JWTs (JSON Web Tokens) are highly convenient when considering ECUs coming off the manufacturing production line. They provide a scalable and efficient method of assigning unique, verifiable identities to each ECU. Given that JWTs are lightweight and easily generated, they are particularly suitable for mass production environments. Furthermore, JWTs can be issued with specific expiration times, allowing for better management and control of ECU access to various services during initial testing, shipping, or post-manufacturing stages. This means ECUs can be configured with secure access controls right from the moment they leave the production line, streamlining the process of integrating these units into vehicles while maintaining high security standards.

Source: cisco.com

Saturday 21 October 2023

Unlocking Success in the Digital Landscape: Deloitte and Cisco


For more than twenty years, Deloitte and Cisco have been dedicated to creating meaningful results for our mutual clients in the ever-evolving digital landscape. By combining Cisco’s market presence with Deloitte’s expertise, we deliver scalable, adaptable solutions tailored to support unique digital transformation objectives. Our work together merges Cisco’s leadership in IT infrastructure, security, and related sectors with Deloitte’s expertise in digital transformation, analytics, and cloud services.

Our clients access Deloitte’s customized solutions across its consulting, advisory, and tax divisions, leveraging profound industry insights to achieve their business goals. We identify the optimal blend of products and services that integrate seamlessly with their environments, driving digital transformation in areas such as full-stack observability (FSO), connected factories, the future of work, sustainability, and security.

Improving application performance with Full-Stack Observability (FSO)


Deloitte and Cisco collaborate to offer FSO to SAP and AWS clients. By integrating Cisco AppDynamics (AppD) with other monitoring tools, we enhance application performance and drive better business outcomes. AppD addresses SAP visibility and performance concerns at the Advanced Business Application Programming (ABAP) code level to provide essential visibility data.

Our efforts offer several benefits to our clients, including:

  • Real-time discovery and visualization of all SAP components and their dependencies, helping to ensure a comprehensive view during SAP cloud migrations
  • Reduction of inter-team conflicts by providing a unified source of truth for application performance that bridges the gap between development, operations, and SAP Basis teams
  • Code-level visibility into SAP ABAP and connected non-SAP applications, expediting root cause analysis and performance issue identification
  • Establishing baseline health and performance metrics for applications before cloud migration, simplifying issue detection at every migration stage and validating success by comparing pre- and post-migration metrics

Enabling Industrial Smart Operations


In the age of Industry 4.0, leading organizations are transitioning from traditional to digital supply network operations. Traditional linear supply chains lack agility and efficiency, inhibiting broader revenue streams and profit growth. The Digital Supply Network is a dynamic system that incorporates ecosystem partners and leverages digital technologies such as predictive algorithms and real-time IoT sensor data.

Cisco supports secure data collection with sensors, multiaccess wireless, and a cybersecurity platform. Deloitte provides industry expertise and helps transform supply chains through strategy, implementation, cloud operations, and AI solutions.

The outcome? Industrial Smart Operations, delivered by Deloitte and Cisco.

Driving a sustainable future of work

Deloitte and Cisco collaborate to create the contemporary employee experience while reducing the global carbon footprint of offices and factories.

This transformation is influenced by four important trends:

  • Expansion of a remote hybrid workforce
  • Reevaluation of real estate requirements and evolving workplace dynamics
  • Increasing integration of artificial intelligence and telemetry in workplaces
  • Organizational imperative to embrace a broad range of energy efficiency measures in support of net-zero carbon emissions objectives

Fueled by rapid connectivity, innovative talent models, and cognitive technologies, Deloitte and Cisco are actively shaping the modern employee experience while simultaneously driving environmental sustainability in offices and factories worldwide.

Enhancing security postures


Together, we help our mutual clients confidently transform their cyber and strategic risk mitigation programs and reduce overall risk exposure. Deloitte also works with Cisco to integrate the Cisco Security Portfolio into the Deloitte Zero Trust PRISM Financial Risk asset. This zero-trust cybersecurity posture makes it possible to create more robust and resilient security, simplify security management, improve the end-user experience, and allow customers to incorporate cyber risk elements into their overall risk exposure.

Making an impact that matters


Cisco’s cutting-edge cybersecurity technology, industrial IoT, network transformation, collaboration and observability solutions, and SD-WAN, combined with Deloitte’s distinguished professional services, provide significant value to customers around the world.

Source: cisco.com

Thursday 19 October 2023

Forecasting Capacity in Cisco Catalyst SD-WAN

Organizations are increasingly adopting software-defined wide area networks (SD-WAN) to enhance network performance, reduce costs, and improve overall connectivity.

Using artificial intelligence (AI) and machine learning (ML) for IT operations (AIOps), Cisco SD-WAN enhances and simplifies network management by using predictive analytics based on AI and ML techniques. The result is a proactive tool to address potential network issues before they degrade network and application performance.

Features desired by network operators for such proactive actions include:


  • Predictive Path Recommendations (PPR), which suggests preferred paths for various application groups at each site within an overlay based on long-term modeling of path quality.
  • Bandwidth forecast for capacity planning, giving operators insights into possible future network usage based on extensive past usage patterns.
  • Anomaly detection for network KPIs (tunnel loss, latency, jitter), application usage patterns at individual sites, and user application usage profiles.
  • Application modeling to help network operators better understand the impact of rolling out new applications in the overlay so they can implement the correct policies for best performance and minimal impact.

In a previous post, we discussed PPR and demonstrated how it gives operators the best performance for applications on their fabric. In today’s post we will delve into Bandwidth Forecast. To fully leverage the benefits of SD-WAN, effective capacity planning is crucial to help ensure optimal network performance, less downtime, improved cost control, more seamless operations, and a superior user experience.

The Bandwidth Forecast feature takes a comprehensive approach to predicting circuit usage accurately, giving visibility into which circuits are likely to breach the capacity threshold based on the predicted usage. This helps network operators monitor usage trends on the circuits and supports capacity planning for the overlay.
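The threshold-breach check described above can be sketched as a small helper. The data shape (`forecasts` as a mapping from circuit name to predicted daily utilization fractions) and the 80% default threshold are illustrative assumptions, not the product’s actual interface.

```python
def circuits_likely_to_breach(forecasts: dict[str, list[float]],
                              threshold: float = 0.8) -> list[str]:
    """Return circuits whose predicted utilization crosses the threshold.

    forecasts: hypothetical mapping of circuit name -> predicted daily
    utilization as a fraction of capacity (0.0-1.0).
    """
    return sorted(
        name for name, series in forecasts.items()
        if any(u >= threshold for u in series)
    )
```

For example, a circuit forecast to reach 85% utilization on any day in the horizon would be flagged, while one peaking at 40% would not.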

The forecasting is primarily based on the RX/TX bandwidth information of circuits in the WAN fabric. To ensure insights reflect underlying long-term trends, the circuit usage data is aggregated into daily data points while tracking daily Min/Max ranges. Aggregated data over extended periods is used to generate a forecast for up to three months into the future.
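A minimal sketch of this daily aggregation step, using only the Python standard library; the sample format (ISO timestamp strings paired with bandwidth values) is an assumption for illustration, not the telemetry format Cisco actually uses.

```python
from collections import defaultdict
from statistics import mean


def aggregate_daily(samples: list[tuple[str, float]]) -> dict[str, dict[str, float]]:
    """Collapse (timestamp, bandwidth) samples into daily min/mean/max points.

    Timestamps are assumed to be ISO strings like '2023-10-19T14:05:00';
    the date prefix is used as the aggregation bucket.
    """
    buckets: dict[str, list[float]] = defaultdict(list)
    for ts, value in samples:
        buckets[ts[:10]].append(value)  # group by YYYY-MM-DD
    return {
        day: {"min": min(vals), "mean": mean(vals), "max": max(vals)}
        for day, vals in buckets.items()
    }
```

Each resulting daily point carries the Min/Max range alongside the mean, matching the Min/Max band described later in the forecast display.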

Various other features within this data set can be further leveraged to enhance forecast accuracy. These include:

  • Type of circuit (e.g., MPLS, private internet, LTE)
  • Type of applications using the circuit (i.e., top 10 applications and their respective volume)
  • Number of users at the site served by the circuit
  • Regional holiday list and bandwidth information features

To achieve the best forecast possible, a combination of common predictors and those based on deep learning techniques are used to generate more reliable and robust forecasts.
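The idea of blending predictors can be illustrated with two classic baselines, a naive forecast and a seasonal-naive forecast, averaged into an ensemble. This is a toy stand-in for the combination of common and deep-learning predictors mentioned above, not Cisco’s actual models.

```python
def naive_forecast(history: list[float], horizon: int) -> list[float]:
    """Repeat the last observed value for every future point."""
    return [history[-1]] * horizon


def seasonal_naive_forecast(history: list[float], horizon: int,
                            period: int = 7) -> list[float]:
    """Repeat the value from one season (e.g., one week) earlier.

    Assumes history contains at least `period` points.
    """
    return [history[-period + (h % period)] for h in range(horizon)]


def ensemble_forecast(history: list[float], horizon: int) -> list[float]:
    """Average the two simple predictors into one (hopefully more robust) forecast."""
    a = naive_forecast(history, horizon)
    b = seasonal_naive_forecast(history, horizon)
    return [(x + y) / 2 for x, y in zip(a, b)]
```

Averaging predictors with different failure modes tends to damp the worst errors of any single model, which is the same motivation behind combining simple and deep-learning forecasters.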

Pre-processing of interface statistics for training and inference pipeline (Click image to enlarge)

Forecast quality is continuously monitored for accuracy. If any data or model drift, or deviation from expected results, is observed, retraining of the model is triggered on updated data sets to improve model accuracy. Furthermore, forecasts are assessed for long-term overestimation or underestimation, ensuring that the model faithfully predicts bandwidth and supports network operators in capacity planning and decision-making.
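A drift check of this kind might look like the following sketch, where systematic bias (long-term over- or underestimation) or elevated absolute error flags the model for retraining. The function names and limits are illustrative assumptions, not the product’s internal logic.

```python
def forecast_bias(actual: list[float], predicted: list[float]) -> float:
    """Mean signed error; positive values indicate systematic overestimation."""
    return sum(p - a for a, p in zip(actual, predicted)) / len(actual)


def needs_retraining(actual: list[float], predicted: list[float],
                     bias_limit: float, error_limit: float) -> bool:
    """Trigger retraining when drift shows up as bias or absolute error.

    bias_limit and error_limit are hypothetical operator-chosen thresholds
    in the same units as the bandwidth series.
    """
    mae = sum(abs(p - a) for a, p in zip(actual, predicted)) / len(actual)
    return abs(forecast_bias(actual, predicted)) > bias_limit or mae > error_limit
```

Run periodically over the most recent window of actuals versus past forecasts, a check like this turns the "monitor and retrain" loop into an automated trigger.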

The Bandwidth Forecast feature in Cisco SD-WAN Analytics helps give network operators a better understanding of the following:


  • Growth Trends: By analyzing historical data presented side by side with the forecast, organizations can identify patterns and anticipate future bandwidth demands. This empowers them to plan for anticipated growth without disruptions.
  • Seasonality: Long-term visibility into usage seasonality across the historical period from which the training data set is derived. Daily, weekly, and monthly seasonality is factored into the forecast, so the pattern continues into the forecasted data points.
  • Surge: Although visibility is provided into historical surge usage in the overlay so network operators can correlate it to global events (e.g., Black Friday) or internal events (e.g., company all-hands video stream), the model is effective in minimizing the impact of such data points while making long-term forecasts.
  • Min/Max Band: Each daily forecast data point has three components: Min, Mean, and Max. The forecast is presented with emphasis on the daily mean value while still showing a Min/Max band so that network operators can get insight into usage spikes within the day.
  • Model/Forecast Performance: Historical usage data is presented along with the past forecast data points for a quick visual comparison of how the forecast performed against actual recorded values in the past.

User interface


The Bandwidth Forecast feature can be activated for a specific overlay in the Catalyst SD-WAN Analytics Dashboard. This appears under the “Predictive Network” tab. Users can choose the circuits in the overlay for the forecast generation.

A table of circuits with all related metrics such as site or provider info, RX/TX bandwidth, and total usage is displayed, helping users select the circuits for which they want to visualize Bandwidth Forecast details. The minimum data set requirement for forecasts to be generated is 12 weeks of historical daily data points for each circuit.

The workflow is subject to the following:


  • The table shows only circuits configured on physical interfaces, excluding any circuits configured on logical interfaces (e.g., sub-interfaces, loopback, dialer-group).
  • Default sorting is in descending order of RX/TX bandwidth, which bubbles the most heavily used circuits to the top of the table. By default, the chart shows the forecast for the top circuit.
  • Users can select any other circuit by clicking on the checkbox.
  • Users can search and sort as they wish to isolate specific circuits of interest.

Table of circuits and their metrics [Click image to enlarge]

Bandwidth Forecast for selected circuit showing actual and predicted (dotted) values [Click image to enlarge]

Metrics


Accurate bandwidth forecasting is critical in capacity planning. One key metric is the accuracy of the forecasted bandwidth requirements. A successful forecast should closely align with the actual capacity goals for your business. The current solution computes mean absolute percentage error (MAPE) and mean absolute scaled error (MASE) scores in addition to tracking percentiles. Any of these can be used as the optimization target for the predictors used. The choice of target metrics for the predictors can be specified per the needs of a specific overlay or use case.
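For concreteness, MAPE and MASE can be computed as follows. These are the textbook formulations, not the product’s internal implementation: MASE scales the forecast error by the error of a one-step naive forecast on the training series, so values below 1.0 beat the naive baseline.

```python
def mape(actual: list[float], predicted: list[float]) -> float:
    """Mean absolute percentage error, in percent (assumes no zero actuals)."""
    return 100.0 * sum(abs(a - p) / abs(a)
                       for a, p in zip(actual, predicted)) / len(actual)


def mase(actual: list[float], predicted: list[float],
         training: list[float]) -> float:
    """Mean absolute scaled error: forecast MAE divided by the MAE of a
    one-step naive forecast over the training series."""
    naive_mae = sum(abs(training[i] - training[i - 1])
                    for i in range(1, len(training))) / (len(training) - 1)
    forecast_mae = sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)
    return forecast_mae / naive_mae
```

MAPE is easy to read but breaks down near zero-valued actuals; MASE avoids that and is comparable across circuits with very different absolute bandwidths, which is one reason to track both.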

By accurately predicting bandwidth requirements, organizations can optimize traffic routing, provision appropriate link capacities, manage QoS effectively, plan for scalability, and ensure adherence to SLAs. This proactive approach enables businesses to leverage the full potential of SD-WAN, delivering enhanced network performance, improved user experiences, and the ability to adapt to changing business needs. As organizations embrace the digital transformation journey, incorporating bandwidth forecast in SD-WAN capacity planning becomes a key strategy for success.

Source: cisco.com

Tuesday 17 October 2023

Enabling a new generation of AI with Ethernet

Few transformational technologies have generated the kind of buzz as Artificial Intelligence. Organizations around the world are finding exciting new ways to collaborate, manage applications, enhance their digital security, and reimagine customer experiences. Cisco already harnesses AI-powered capabilities across our product and customer service portfolio. We are also leading innovation that enables AI infrastructure. Our vision for AI Network fabrics offers unparalleled programmability, performance, flexibility, and efficiency. And we enable these fabrics with a familiar technology that has broad ecosystem support — Ethernet.

At the foundation of every AI generated product capability will be ground-breaking infrastructure. GPUs and massively scalable data stores and compute will be deployed in AI/ML platforms of unprecedented scale and performance. This massive movement and processing of data will need a network fabric that meets the considerable demands of these platforms. These networks need to enable AI with economical, flexible, and innovative connectivity. An economical and flexible platform enables you to invest early and evolve in a way that fits your unique business model. Innovation offers you a fabric that is high throughput, low-latency and lossless — without sacrificing flexibility. As a result of Cisco’s considerable investments, that AI fabric is already here. And we believe that this fabric needs to be built on Ethernet.

AI Fabrics will be Ethernet-based


Ethernet offers advantages of economy and scale. As transformational technologies approach maturity, customers tend to move away from sole-source components towards multi-source components, to improve purchasing power. In the case of AI/ML infrastructure, this means a move away from single-source connectors, fabric, and GPUs. Moving towards multi-source components typically means using a broadly adopted standard, and it is difficult to find a more broadly deployed data center network technology than Ethernet. This increases the ability to find the right ROI with interoperability within component types, as well as between component types. Additionally, familiar and consistent technology for all data center fabrics produces efficiencies in training, procurement, and support. Lastly, Ethernet is well-placed to support the massive scale that the world’s largest AI network fabrics require, with a broad base of suppliers.

Every organization’s AI journey is unique and requires agile technology and the ability to scale on your terms. Whether you are retraining models, deploying model inference, or building your own large language model, you need infrastructure that keeps pace with your demands. Moving quickly means leveraging the skills that you have and deploying familiar technology. Ethernet is the most deployed infrastructure and has the broadest architectural support. You can find your choice of partners, suppliers, and architects to help your unique organizational goals.

AI/ML infrastructure will need to be set up for innovation that drives greater capabilities and performance. Ethernet has a proven history of disruptive innovation. In the approximately 20-year period following 1998, Ethernet standard speeds grew from 1Gbps to 400Gbps, with 800G options available in systems such as the Cisco 8111-EH and the Cisco Nexus 9232E switch. Ethernet’s versatility has been proven in data center networks through protocols such as iSCSI and RoCE; along with several innovations that reduce the need for separate types of network connectivity.

This impressive rate of growth, combined with flexibility, has made it possible to connect several generations of front-end and back-end components. Storage networks that once required specialized back-end fabrics can now connect resources with Ethernet through advances in IP based storage solutions. Similarly, resources requiring InfiniBand connectivity can be deployed with RoCE. This means that back-end networks for advanced compute or GPUs can now be consolidated with a single network fabric.

Cisco’s unique value for Ethernet fabrics


Cisco expands the value that you can achieve from Ethernet by advancing the economics, agility, choice, and innovation even further. We make it possible to standardize switching and routing with a single silicon architecture — Cisco Silicon One. This gives you the option of reducing the complexity of your network architectures by managing one experience across the entire network, across all network functions. To maximize economic benefits, Cisco produces a full range of silicon, systems, and optics in a way that transfers economies of scale to the end user. We build silicon with Customer Owned Tooling (COT), rather than using traditional ASIC models or off-the-shelf components, so that we can eliminate the premiums introduced when adding additional vendors to a value chain — and drive end-to-end innovations.

Cisco offers you the choice of procuring components, white boxes, or fully built systems in your AI Network fabric. Does your business model drive value by building systems and software on top of silicon components? Or do you want ease of deployment by having fully built systems arrive ready for use? Alternatively, you may be in the middle — needing a white box shipped to you with the option to customize software that is unique to your own platform. Whichever model you need, you can partner with Cisco.

Our development of silicon offers differentiated capabilities for Ethernet that truly unlock the power of AI Network fabrics. With one architecture, you can build three deployment models that offer increasing degrees of efficiency for precious system resources through multipathing. You may choose to use standard Ethernet for the widest interoperability solution, or an advanced scheduled Ethernet that halves the Job Completion Time (JCT) compared to standard Ethernet, or the middle ground with enhanced Ethernet taking the best from standard and scheduled Ethernet. Because of the innovation and power of Cisco Silicon One, you can confidently use the same network for high throughput, lossless training fabrics, and ultra-low latency inference fabrics.

Conclusion

Ethernet is ready for the next technology transformation. Whether you are fine-tuning your own models, running off-the-shelf models for inference, or building your own large language models, you can be assured that you have a fabric that meets your needs. Cisco will partner with you so that you can build your AI infrastructure in a way that matches your business model.

Source: cisco.com

Saturday 14 October 2023

Securing the Modern Hyper-Distributed Network: Perspectives from the 2023 Gartner Magic Quadrant for SD-WAN

A typical day’s tasks for today’s modern worker are frequently distributed across multiple devices, applications, and locations. They could be working from home, analyzing CRM dashboards, and later, they might be at a coffee shop reviewing slides for an upcoming customer meeting. Perhaps they then head into the office for team meetings, followed by catching up with emails and messages on the commute home.

For a networking and security leader, a typical day looks very different. Those individuals need to ensure that the WAN is delivering a superior app performance connecting users to applications wherever they are. They also need to know if an untrusted device is being used to access confidential CRM dashboards. How is network traffic being secured outside the office? How are apps and services being accessed and secured?

Multiply these security concerns by the number of employees at numerous office locations, and then factor in technology-led business transformation initiatives, and we start to understand the complexity facing IT to secure and connect hyper-distributed users and resources everywhere.

Choose the right security


We hear you loud and clear—security and high performance are top priorities. In the face of constant change and increasing complexity—especially over the WAN—organizations must implement security technologies that converge with their SD-WAN, enforcing them as close as possible to users and workloads. For the most effective implementation, this will require security hosted on-premises and in the cloud that ensures the best possible app performance.

The importance of security with SD-WAN was acknowledged by Gartner in its recently published 2023 Magic Quadrant for SD-WAN report, which provides an annual evaluation of the SD-WAN market for IT leaders. We feel this year’s report includes the most thorough assessment of security capabilities—hosted on-premises and in the cloud—since the Gartner Magic Quadrant for SD-WAN began.

In 2023, Cisco was named a Leader for the fourth consecutive year.

At Cisco, we work closely with our customers and partners to better understand their challenges so we can build products and solutions that support their long-term goals. These continued partnerships provide us with the insight to deeply ingrain advanced security technologies into Cisco SD-WAN.

  • The right security: Stateful firewall, intrusion detection systems (IDS), intrusion prevention systems (IPS), advanced malware protection (AMP), URL filtering, HTTPS inspection, data loss prevention (DLP), cloud access security broker (CASB), and more—are all natively informed by the world’s largest commercial threat intelligence team, Talos.
  • Hosted in the right place: On-premises or in the cloud (native or third party) hosting ensures that security policies are enforced closely to workloads and users.
  • SASE your way: WAN appliances provide the building blocks to effortlessly chart your own journey.

Seek real-world validation


With a highly dense market of network security technologies and products to choose from, understanding which solutions will perform best for your environment and be the right long-term strategic fit can be confusing. While there is no substitute for testing solutions in a production environment, independent testing that mirrors real-world conditions can help identify top performers and refine a shortlist.

Miercom, a leading independent product test center, conducted a thorough evaluation of Cisco’s security and SD-WAN technologies delivered through Cisco Catalyst and Meraki WAN appliances. These tests were meticulously designed to match real-world conditions as closely as possible, instead of a theoretical laboratory environment.

Figure 1. According to leading independent product test center Miercom, Cisco’s malware efficacy is 25% better than the industry average. Across 11 malware exploit categories, Cisco averaged 98% malware efficacy.

Maximize your WAN


The WAN is central to an organization’s success. In addition to an uncompromising commitment to security, we continue to push Cisco SD-WAN beyond traditional expectations to help IT leaders maximize the potential of the WAN for their business through:

  • Delivering high performance, irrespective of where users and workloads live, to provide a superior experience wherever users and workloads are.
  • Simplifying cloud migration with integration and streamlined workflows for AWS Cloud WAN and Microsoft Azure Virtual WAN.
  • Enabling secure, long-term remote work strategies with Meraki Z4 and Catalyst CG113 secure teleworker gateways.
  • Providing continuous visibility across all the hyper-distributed internal and external domains with instant activation of Cisco ThousandEyes, which leverages Predictive Path Recommendations (PPR) to deliver proactive feedback, enhancing the user experience for critical application performance across the SD-WAN fabric.
  • Enabling agile business models using 5G fixed wireless access through indoor and outdoor Meraki MG51 and Catalyst CG522 cellular gateways.

Build a long-term strategy for simplicity


At Cisco, we’re committed to helping organizations simplify IT. Our vision is to create a simpler network management platform experience to help customers easily access and manage Cisco networking products from one place—the Cisco Networking Cloud.

The distribution of users and resources will continue to evolve along with the IT landscape, creating new complexities along the way. Simplifying the IT experience enables IT to better automate, analyze, and diagnose issues—supporting a framework that is well-positioned to evolve alongside the modern hyper-distributed network and helping to secure and connect hyper-distributed users and resources, no matter where they are located.

Source: cisco.com

Thursday 12 October 2023

End-to-End Visibility and Actionable Insights Underpin Great Connected Experiences

Three networking megatrends have upended how businesses approach networking to support the distributed workforce.

First, the cloud has become the new data center, with workloads moving from on-premises to hybrid cloud and multicloud architectures. Second, the internet is now the new network, with business connectivity increasingly traversing diverse networking domains. And lastly, with so many remote and hybrid workers, the office is now essentially anywhere.

This evolution has made delivering a high-quality, reliable experience—connecting everyone to everything everywhere—significantly more complex. After the need to provide secure access to applications across multiple clouds, the second biggest challenge cited by 37% of respondents in our 2023 Global Networking Trends Report was gaining end-to-end visibility into network performance and security as more traffic originates or terminates beyond the boundaries of the corporate network.

Which begs the question: How do you identify, diagnose, and remediate problems that occur throughout the digital supply chain—the domains within and outside your infrastructure and all hops between a user’s device and an application or service in the cloud? Read on to find out how.

Tackling assurance complexity across multiple network domains


Great connected experiences are table stakes for businesses today. The digital economy relies on always-on applications and services to support employees and consumers. Failure is not an option.

Prior to the hyperconnectivity of today’s digital economy, business applications and services within corporate domains were well served by network monitoring solutions and processes that were localized and handled specific domains like wireless. But to remediate issues in enterprise WANs, admins had to contact their counterparts within cloud and internet provider organizations to jointly diagnose and remediate service and security problems. Often, this resulted in a lot of finger pointing. Businesses acted reactively instead of proactively. Issues could take a long time to get resolved.

Providing network assurance for a high-quality connected experience today requires end-to-end visibility and insights across diverse clouds, network providers, the internet, devices, and geographies—each with their own operational domains (see Figure 1). Without end-to-end visibility into network performance, application responsiveness, and security, it is extremely challenging for IT teams to deliver consistent digital experiences to end users.

Figure 1. Complex digital supply chain with interdependencies, increased failure surface, and unpredictability (click to enlarge)

A person working from home, for example, might run into a problem with Slack. The wireless network in their home office would be connected to an access network that would be connected to an edge router traversing a cloud network to the Slack application. Domain-specific tools can only see a small segment of this traffic. Admins without end-to-end visibility can’t see the big picture.

End-to-end visibility is foundational for SASE


A majority (51%) of organizations in our 2023 Global Networking Trends Report said that with their adoption of more software-as-a-service (SaaS) and multicloud solutions, they see investment in a solution that provides end-to-end visibility as a top priority. This may be in response to recent research by the Uptime Institute that found third-party operators—including cloud, hosting, colocation, and telecom providers—accounted for 70% of all publicly reported outages.

End-to-end visibility, analytics, and operational workflows allow admins to take decisive action to proactively remediate connectivity issues. In a secure access service edge (SASE) architecture, for example, end-to-end visibility feeds the actionable intelligence used to optimize path selection to provide the best digital experience anywhere at any time. Reliable connectivity is foundational to securely connecting people and things in a SASE architecture. If connectivity is poor, the secure access experience will be degraded.

Even before an SD-WAN or a converged SASE architecture with security service edge (SSE) is rolled out, organizations can use end-to-end visibility to evaluate, compare, and optimize the network experience before and after adoption of these architectures. The performance of individual providers in different locations that each form part of a digital supply chain can be proactively tested and benchmarked, with the results used to make more informed vendor selections to ensure the delivery of always-on digital experiences.

Gaining visibility into every connection


A European airline transitioned its network infrastructure from MPLS to SD-WAN, moving many applications and services to the cloud. The company needed to make sure that services met agreed-upon service level agreements (SLAs). To do so, the IT department deployed end-to-end visibility, specifically to monitor and enhance the digital experiences of customers and employees. With this solution in place, the airline can now measure connection latency and other factors—with a specific focus on connections between its data center and the cloud provider, Amazon Web Services. They can continually monitor and prioritize network experiences, accelerate incident response times, introduce more proactive maintenance, and enjoy greater cost efficiency through streamlined troubleshooting.
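Connection-latency measurements of the sort the airline relies on can be approximated with a very small probe; a real monitoring agent such as ThousandEyes measures far more (DNS and TLS timing, HTTP response, per-hop path data). The sketch below times only name resolution plus TCP connection setup.

```python
import socket
import time


def tcp_connect_latency(host: str, port: int, timeout: float = 5.0) -> float:
    """Return time to establish a TCP connection to host:port, in seconds.

    A crude stand-in for a full digital-experience probe: it includes DNS
    resolution and the TCP handshake, but not TLS or application response.
    """
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        return time.perf_counter() - start
```

For example, `tcp_connect_latency("example.com", 443)` would report handshake time to a public HTTPS endpoint; sampling such a probe on a schedule from several locations is the simplest possible version of end-to-end latency monitoring.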

RichRelevance, a customer experience personalization provider for 250 global retailers, reduced its outages by 88% and shrunk outage windows from an average of four hours to 30 minutes, all thanks to end-to-end visibility. IT service management software company ServiceNow identified network issues 95% faster for their customers with visibility across all network layers that focused on the application experience.

Enabling quality digital experiences through a networking platform approach


Cisco is pioneering end-to-end network visibility and driving exceptional experiences through operational simplicity. It’s a cornerstone of our Cisco Networking Cloud long-term vision, a unified management experience platform for on-premises and cloud operating models to reduce IT complexity.

End-to-end visibility relies on compute power to capture and analyze billions of daily measurements in the digital supply chains that comprise today’s enterprise networks (see Figure 2). It is a powerful and indispensable feature that helps organizations maintain top-quality digital experiences and move from reactive to preventative and automated operations.

Figure 2. Organizations need to leverage a platform-driven approach that drives end-to-end visibility throughout the digital supply chain (click to enlarge)

Tuesday 10 October 2023

Building a transparent Notification Center to Enable Customer Control

Personalization is critical to a guided customer experience. It helps build trust, foster relationships, and enables a deeper connection with customers.

At Cisco, we have been trying to help our customers along each step of their post-sale experience for nearly a decade. And as a key part of that experience, we want our customers to have more control over what communications they receive – a more intentional step towards the right message, right person, right time goal that we are all striving to achieve.

Before we could begin, we took a thorough inventory of what the post-sale experience looked like for customers today.

Evaluating a disconnected customer experience


Over the years, we’ve built several programs where customers could sign up for various post-sale notifications to help guide them on their path to success – but they were fragmented and lacked transparency.

One of the customer pages from the legacy experience

For instance, a customer could access a link via an email where they could enroll or unenroll from a specific Cisco product architecture. There was no way to access the link again if the customer changed their mind after unenrolling. It also was not totally clear to the customer exactly what they were unenrolling from.

Similarly, a customer could enroll in a digital journey from a form on the main website, Cisco.com, but they could not see what else they were subscribed to. There were 6+ programs of this nature that had evolved over the years – each designed to give customers more control over their experience, but lacking a critical ingredient – transparency.

Thus began an initiative to build a Notification Center that was flexible, centralized, and personalized to exactly what a customer was eligible to receive. One tool for customers to rule their post-sale experience.

Rooted in research


We built the Notification Center collaboratively with our customer research and design team. After evaluating all the different existing programs we had, we defined MVP parameters that would enable us to evolve the data model to support a more cohesive experience. We experimented with design, naming conventions, login experiences, and more. Each piece of feedback helped our design team iterate and ultimately finalize the MVP requirements so our Orchestration & Notification team could build out the digital experience.

The research as well as consultation with Forrester served as the foundation and guiding principles as we went through the development process. These principles included:

  • Build an experience that fosters trust and respects customer privacy and choices
  • Collect only data we can act on – do not collect unnecessary data
  • Design for scalability and flexibility, from MVP to future platforms
  • Design for consistency
  • Build a configurable UI that can be personalized based on customer eligibility for products and services
  • Build a flexible data model that can handle changing products and services
  • Adhere strictly to Cisco data security and privacy standards

The new interface replaces two of our previous data collection customer experiences that were linked in our emails. Now customers have full access to:

  • View all subscriptions associated with their email
  • Activate or deactivate subscriptions for Renewals, Services, and Adopt emails at the use case or solution level
  • Continue to nominate contacts for respective subscriptions
  • Provide feedback on the experience directly to the experience design team

This new system supports all of our critical integrations – Snowflake, Salesforce Marketing Cloud (SFMC), and Cisco Single Sign-On – and can be integrated across other channels as well.
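To make the subscription model above concrete, here is a minimal sketch of how per-email, per-use-case opt-in choices could be stored and upserted. The table and column names are assumptions for illustration, not Cisco's actual data model, and the sketch uses SQLite so it runs anywhere; the production system described in this post uses PostgreSQL on Cloud SQL.

```python
import sqlite3

# Illustrative schema only: one row per (email, program, use_case) choice.
SCHEMA = """
CREATE TABLE subscription (
    email      TEXT NOT NULL,
    program    TEXT NOT NULL,   -- e.g. 'Renewals', 'Services', 'Adopt'
    use_case   TEXT NOT NULL,   -- finer-grained than product architecture
    active     INTEGER NOT NULL DEFAULT 1,
    updated_at TEXT DEFAULT CURRENT_TIMESTAMP,
    PRIMARY KEY (email, program, use_case)
);
"""

def set_subscription(conn, email, program, use_case, active):
    """Upsert a single activate/deactivate choice for one subscription."""
    conn.execute(
        """INSERT INTO subscription (email, program, use_case, active)
           VALUES (?, ?, ?, ?)
           ON CONFLICT(email, program, use_case)
           DO UPDATE SET active = excluded.active,
                         updated_at = CURRENT_TIMESTAMP""",
        (email, program, use_case, int(active)),
    )

def list_subscriptions(conn, email):
    """Return every subscription associated with one email address."""
    rows = conn.execute(
        "SELECT program, use_case, active FROM subscription WHERE email = ?",
        (email,),
    )
    return [dict(zip(("program", "use_case", "active"), r)) for r in rows]
```

The composite primary key means a repeated choice updates the existing row rather than duplicating it, which is what lets one screen show a customer's complete, current set of subscriptions.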

Implementation Changes


This new approach to subscription management not only transformed the front-end customer experience, but it also changed the granularity of data we were collecting. To enable it, we designed an entirely new back-end process to support the front-end application. We also had to make some significant changes to the data model and our custom activities in SFMC.

The new experience design

  • The Notification Center UI, built on an SFMC Cloud Page, is supported by a Python-based Flask API, acting as an intermediary connecting the front-end with the backend database.
  • We made the strategic decision to use PostgreSQL as our backend database, hosted on Google Cloud Platform’s Cloud SQL instance, to replace SFMC’s native Data Extension for storing customer choices and Custom Activity log data. We chose it for its advanced data capabilities, indexing options, ACID compliance for data integrity, trigger support, and scalability.
  • The database shift significantly reduced our reliance on SFMC as a database. This change decreased the overall number of SFMC API calls from 18 to 13 and increased the Custom Activity processing efficiency from 52 to 70 requests per second while concurrently reducing latency from 60 seconds to approximately 13 seconds.
  • Digital journeys executed through SFMC previously had Cisco product architecture level entry criteria, meaning customers qualified for journeys if they bought a particular product. With the introduction of Notification Center data, we are mapping at the use case level, so we can build our journey segments based on the particular reason a customer bought a product. This transition has increased the granularity of our data while enabling a more personalized customer experience.
  • Additionally, we enabled a daily sync between the Notification Center customer database and Enterprise Use Case Eligibility data to ensure the Notification Center UI displays content in accordance with each customer’s eligibility criteria for a specific use case.

Source: cisco.com