Thursday, 8 June 2023

Empowering an extensible observability ecosystem with the Cisco Full-Stack Observability Platform


Businesses today are digitally led, providing experiences to their customers and consumers through applications. The environments these applications are built upon are complex and evolve rapidly — requiring that IT teams, security teams, and business leaders can observe all aspects of their applications’ performance and tie that performance to clear business outcomes. This calls for a new type of platform that can scale as a business scales and easily extend across an organization’s entire infrastructure and application lifecycle. It’s critical for leaders to have complete visibility, context, and control of their applications to ensure their stakeholders — from employees to business partners to customers — are empowered with the best experiences possible.

What is the Cisco Full-Stack Observability (FSO) Platform?


The Cisco FSO Platform is an open, extensible, API-driven full-stack observability platform. It is focused on OpenTelemetry and anchored on metrics, events, logs, and traces (MELT), providing AI/ML-driven analytics as well as a new observability ecosystem that delivers relevant, impactful business insights through new use cases and extensions.

Benefits of the Cisco FSO Platform


Cisco’s FSO Platform is future-proof and vendor agnostic, bringing data together from multiple domains — including application, networking, infrastructure, security, cloud, sustainability — and business sources. It is a unified observability platform enabling extensibility from queries, data ingestion pipelines and entity models all the way to APIs and a composable UI framework.

This provides Cisco customers with in-context, correlated, and predictive insights which enables them to reduce time to resolve issues, optimize their own users’ experiences, and minimize business risk — all with the additional flexibility to extend the Cisco FSO Platform’s capabilities with the creation of new or customized business use cases. This extensibility unleashes a diverse ecosystem of developers who can create new solutions or build upon existing ones to rapidly add value with observability, telemetry, and actionable insights.

Cisco FSO Platform Diagram

First Application on the Cisco FSO Platform – Cloud Native Application Observability


Cloud Native Application Observability is a premier solution delivered on the Cisco FSO Platform. It is Cisco’s extensible application performance management (APM) solution for cloud native architectures. Now on the Cisco FSO Platform, Cloud Native Application Observability with business context helps customers achieve business outcomes, make the right digital experience decisions, ensure performance aligns with end-user expectations, prioritize and reduce risk, and secure workloads.

The following are some of the modules built on the Cisco FSO Platform that work with Cloud Native Application Observability.

Modules built by Cisco

Cost Insights: This module provides visibility and insights into application-level costs alongside performance metrics, helping businesses understand the fiscal impact of their cloud applications. It leverages advanced analytics and automation to identify and eliminate unnecessary costs, while also supporting sustainability efforts.

Application Resource Optimizer: This module provides deeper insights into a Kubernetes workload and provides visibility into the workload’s resource utilization. It helps to identify the best candidates for optimization—and reduce your resource utilization. Running continuous AI/ML experiments on workloads, the Application Resource Optimizer creates a utilization baseline, and offers specific recommendations to help you improve. It analyzes and optimizes application workloads to maximize resource usage and reduce excessive cloud spending.
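The kind of rightsizing described above can be sketched in a few lines: compare a workload's observed usage against its configured Kubernetes request and recommend a smaller request with headroom. This is only an illustrative approximation, not the Application Resource Optimizer's actual algorithm; the function name, the p95 statistic, and the 20% headroom are our assumptions.

```python
# Illustrative sketch of workload rightsizing. The p95 statistic and the
# 20% headroom are assumptions for this example, not the module's algorithm.

def recommend_cpu_request(samples_millicores, current_request, headroom=1.2):
    """Recommend a CPU request from observed usage: p95 plus headroom,
    never above the current request."""
    ordered = sorted(samples_millicores)
    p95 = ordered[int(0.95 * (len(ordered) - 1))]
    recommended = min(int(p95 * headroom), current_request)
    return recommended, current_request - recommended

# A workload requesting 500m CPU but observed using ~100-140m is a good
# optimization candidate.
usage = [100, 120, 110, 130, 140, 125, 115, 135, 128, 122]
rec, savings = recommend_cpu_request(usage, current_request=500)
print(rec, savings)  # → 162 338
```

The same shape of analysis applies to memory, with the utilization baseline built from continuous measurements rather than a fixed sample.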

Security Insights: This module provides Business Risk Observability for cloud-native applications. It provides cloud native infrastructure insights to locate threats and vulnerabilities; runtime data security to detect and protect against leakage of sensitive data; and business risk prioritization for cloud security. By integrating features from our market-leading portfolio of security solutions, security and application teams have expanded threat visibility, and the intelligent business risk insights to respond in real-time to revenue-impacting security risks and reduce overall organizational risk profiles.

Cisco AIOps: This module helps to visualize contextualized data relevant to the infrastructure, network, incidents, and performance of a business application, all in one place. It simplifies and optimizes IT operations and accelerates time-to-market for customer-specific AIOps capabilities and requirements.

Modules built by Partners

Evolutio Fintech: This module helps to reduce revenue losses for financial customers resulting from credit card authorization failures. It monitors the impact of infrastructure health on hourly credit card authorizations, aggregated based on metadata like region, schemas, infrastructure components, and merchants.

CloudFabrix vSphere Observability and Data Modernization: This module helps to observe vSphere through the FSO Platform and enriches vSphere and vROps data with your environment’s Kubernetes and infrastructure data.

Kanari Capacity Planner and Forecaster: This module provides insights into infrastructure risk factors determined through predictive ML algorithms (ARIMA, SARIMA, LSTM). It helps to derive capacity forecasts and plans from these insights and a baseline capacity forecast to analyze changing capacity needs over time.

Source: cisco.com

Tuesday, 6 June 2023

Understanding Application Aware Routing (AAR) in Cisco SD-WAN

One of the main features used in Cisco SD-WAN is Application Aware Routing (AAR). It is often advertised as an intelligent mechanism that automatically changes the routing path of applications, thanks to its active monitoring of WAN circuits to detect anomalies and brownout conditions.


Customers and engineers alike love to wield the power to steer the application traffic away from unhealthy circuits and broken paths. However, many may overlook the complex processes that work in the background to provide such a flexible instrument.

In this blog, we will discuss the nuts and bolts that make the promises of AAR a reality and the conditions that must be met for it to work effectively.

Setting the stage


To understand what AAR can and cannot do, it’s important to understand how it works and the underlying mechanisms running in unison to deliver its promises.

To begin, let’s first define what AAR entails and its accomplices:

Application Aware Routing (AAR) allows the solution to recognize applications and/or traffic flows and set preferred paths throughout the network to serve them appropriately according to their application requirements. AAR relies on Bidirectional Forwarding Detection (BFD) probes to track data path characteristics and liveliness so that data plane tunnels between Cisco SD-WAN edge devices can be established, monitored, and their statistics logged. It uses the collected information to determine the optimal paths through which data plane traffic is sent inside IPsec tunnels. These characteristics encompass packet loss, latency, and jitter.

The information above describes the relationship between AAR and BFD, but it’s crucial to note that they are separate mechanisms. AAR relies on the BFD daemon, polling the results of the BFD probes sent through each data plane tunnel to determine whether the configured preferred path is performing as expected.

It is a logical next step to explain how BFD works in SD-WAN as described in the Cisco SD-WAN Design Guide:

On Cisco WAN Edge routers, BFD is automatically started between peers and cannot be disabled. It runs between all WAN Edge routers in the topology encapsulated in the IPsec tunnels and across all transports. BFD operates in echo mode, which means when BFD packets are sent by a WAN Edge router, the receiving WAN Edge router returns them without processing them. Its purpose is to detect path liveliness and it can also perform quality measurements for application aware routing, like loss, latency, and jitter. BFD is used to detect both black-out and brown-out scenarios.

Searching for ‘the why’


Understanding the mechanism behind AAR is essential to comprehend its creation and purpose. Why are these measurements taken, and what do we hope to achieve from them? As Uncle Ben once said to Spider-Man, “With great power comes great responsibility.”

Abstraction power and transport independence require significant control and management. Every tunnel built requires a reliable underlay, making your overlay only as good as the underlay it uses.

Service Level Agreements (SLAs) are crucial for ensuring your underlay stays healthy and your contracted services (circuits) perform as expected. While SLAs are a legal agreement, they may not always be enough to ensure providers fulfill their part of the bargain. In the end, it boils down to what you can demonstrate to make sure providers dot their i’s and cross their t’s.

In SD-WAN, you can configure SLAs within the AAR policies to match your application’s requirements or your providers’ agreements.

Remember the averaged calculations I mentioned before? They will be compared against configured thresholds (SLAs) in the AAR policy. Anything not satisfying those SLAs will be flagged, logged, and won’t be used for AAR path selections.

Measure, measure, measure!


Having covered the what, who, and the often-overlooked why, it’s time to turn our attention to the how!

As noted previously, BFD measures link liveliness and quality — collecting, registering, and logging the resulting data. Once logged, the next step is to normalize and compare the data by averaging the measurements.

Now, how does SD-WAN calculate these average values? By default, quality measurements are collected and represented in buckets, which are then averaged over time. The default values consist of 6 buckets, also called poll intervals, with each bucket being 10 minutes long and each hello sent at 1000 msec intervals.


Putting it all together (by default):

◉ 6 buckets
◉ Each bucket is 10 minutes long
◉ One hello per second, or 1000 msec intervals
◉ 600 hellos are sent per bucket
◉ The average calculation is based on all buckets
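As a rough sketch of that bucketed averaging (the variable names and sample latencies below are ours for illustration; only the defaults of 6 buckets and 600 hellos per bucket come from the text):

```python
from collections import deque

POLL_INTERVALS = 6       # buckets kept, per the defaults above
HELLOS_PER_BUCKET = 600  # 10 minutes at one hello per 1000 msec

def average_over_buckets(buckets):
    """Average a per-bucket metric (e.g., mean latency in ms) across all kept buckets."""
    return sum(buckets) / len(buckets)

# Keep only the most recent 6 bucket averages; older buckets age out.
latency_ms = deque(maxlen=POLL_INTERVALS)
for bucket_average in [20, 22, 21, 19, 150, 148]:  # last two poll intervals degraded
    latency_ms.append(bucket_average)

print(round(average_over_buckets(latency_ms), 1))  # → 63.3
```

Note how two badly degraded poll intervals only pull the six-bucket average up to ~63 ms: the moving average deliberately smooths out short-lived events.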

Finding the sweet spot


It’s important to remember that these calculations are meant to be compared against the configured SLAs. As the result is a moving average, brief brownouts or outages may not be considered by AAR immediately (but they might already be flagged by BFD). When using default values, it takes around 3 poll intervals to motivate the removal of a certain transport locator (TLOC) from the AAR calculation.


Can these values be tweaked for faster AAR decision making? Yes, but it is a trade-off between stability and responsiveness. Modifying the buckets, multiplier (number of BFD hello packets), and frequency may be too aggressive for some circuits to meet their SLAs.

Let’s recall that these calculations are meant to be compared against the configured SLAs.


Phew, who would have thought that magic could be so mathematically pleasing?

Source: cisco.com

Saturday, 3 June 2023

The Future of Work is Here – and it’s Hybrid

We are excited to announce a new blog channel for Cisco – we don’t do this often, but we believed it was necessary to have a space to tell stories that cut across people, technology, and spaces in one place. In this “Future of Work” channel we’ll be highlighting trends, solutions, and other relevant and interesting topics, with the goal of making your journey to great work experiences faster, easier, and more rewarding.

Next week is our Cisco Live USA event, starting on June 4, 2023. Whether you are attending in person in Las Vegas or digitally, we’ll talk about the Future of Work in various sessions and showcase technology solutions live, both in our partner areas and the Cisco Solution Showcase.

One of the big questions we wondered about last year at Cisco Live was the extent to which “hybrid work” – as in the flexibility to work remotely or in the office – was truly here to stay, or whether employees would all come back to the office.

The data indicates that – at least in the USA – about 30% of work days are being taken at home.


Based on this we’re confident that there will be remote workers, at least for the foreseeable future. Employers are also now starting to make the connection between great hybrid work experiences, achieving corporate sustainability goals, reducing real estate space needs, and the role technology plays in it all going forward. This means investing in the right security, collaboration tools, and network to ensure that teams are empowered no matter where their members are located.

Source: cisco.com

Thursday, 1 June 2023

Building AI/ML Networks with Cisco Silicon One

It’s evident from the amount of news coverage, articles, blogs, and water cooler stories that artificial intelligence (AI) and machine learning (ML) are changing our society in fundamental ways—and that the industry is evolving quickly to try to keep up with the explosive growth.

Unfortunately, the network that we’ve used in the past for high-performance computing (HPC) cannot scale to meet the demands of AI/ML. As an industry, we must evolve our thinking and build a scalable and sustainable network for AI/ML.

Today, the industry is fragmented between AI/ML networks built around four unique architectures: InfiniBand, Ethernet, telemetry assisted Ethernet, and fully scheduled fabrics.

Each technology has its pros and cons, and various tier 1 web scalers view the trade-offs differently. This is why we see the industry moving in many directions simultaneously to meet the rapid large-scale buildouts occurring now.

This reality is at the heart of the value proposition of Cisco Silicon One.

Customers can deploy Cisco Silicon One to power their AI/ML networks and configure the network to use standard Ethernet, telemetry assisted Ethernet, or fully scheduled fabrics. As workloads evolve, they can continue to evolve their thinking with Cisco Silicon One’s programmable architecture.

Figure 1. Flexibility of Cisco Silicon One

All other silicon architectures on the market lock organizations into a narrow deployment model, forcing customers to make early buying decisions and limiting their flexibility to evolve. Cisco Silicon One, however, gives customers the flexibility to program their network into various operational modes and provides best-of-breed characteristics in each mode. Because Cisco Silicon One can enable multiple architectures, customers can focus on the reality of the data and then make data-driven decisions according to their own criteria.

Figure 2. AI/ML network solution space

To help understand the relative merits of each of these technologies, it’s important to understand the fundamentals of AI/ML. Like many buzzwords, AI/ML is an oversimplification of many unique technologies, use cases, traffic patterns, and requirements. To simplify the discussion, we’ll focus on two aspects: training clusters and inference clusters.

Training clusters are designed to create a model using known data. These clusters train the model. This is an incredibly complex iterative algorithm that is run across a massive number of GPUs and can run for many months to generate a new model.

Inference clusters, meanwhile, take a trained model and analyze unknown data to infer an answer. Simply put, these clusters infer what the unknown data is using an already trained model. Inference requires far less computation than training. When we interact with OpenAI’s ChatGPT or Google Bard, we are interacting with inference models, which are the result of very significant training with billions or even trillions of parameters over a long period of time.

In this blog, we’ll focus on training clusters and analyze how the performance of Ethernet, telemetry assisted Ethernet, and fully scheduled fabrics behave.

AI/ML training networks are built as self-contained, massive back-end networks and have significantly different traffic patterns than traditional front-end networks. These back-end networks are used to carry specialized traffic between specialized endpoints. In the past, they were used for storage interconnect; however, with the advent of remote direct memory access (RDMA) and RDMA over Converged Ethernet (RoCE), a significant portion of storage networks are now built over generic Ethernet.

Today, these back-end networks are being used for HPC and massive AI/ML training clusters. As we saw with storage, we are witnessing a migration away from legacy protocols.

The AI/ML training clusters have unique traffic patterns compared to traditional front-end networks. The GPUs can fully saturate high-bandwidth links as they send the results of their computations to their peers in a data transfer known as the all-to-all collective. At the end of this transfer, a barrier operation ensures that all GPUs are up to date. This creates a synchronization event in the network that causes GPUs to be idled, waiting for the slowest path through the network to complete. The job completion time (JCT) measures the performance of the network to ensure all paths are performing well.

Figure 3. AI/ML computational and notification process
 
This traffic is non-blocking and results in synchronous, high-bandwidth, long-lived flows. It is vastly different from the data patterns in the front-end network, which are primarily built out of many asynchronous, small-bandwidth, short-lived flows, with some larger asynchronous long-lived flows for storage. These differences, along with the importance of the JCT, mean network performance is critical.

To analyze how these networks perform, we created a model of a small training cluster with 256 GPUs, eight top-of-rack (TOR) switches, and four spine switches. We then used an all-to-all collective to transfer a 64 MB collective size, varying the number of simultaneous jobs running on the network as well as the amount of network speedup.

The results of the study are dramatic.

Unlike HPC, which was designed for a single job, large AI/ML training clusters are designed to run multiple simultaneous jobs, similar to what happens in web-scale data centers today. As the number of jobs increases, the effects of the load balancing scheme used in the network become more apparent. With 16 jobs running across the 256 GPUs, a fully scheduled fabric results in a 1.9x quicker JCT.

Figure 4. Job completion time for Ethernet versus fully scheduled fabric
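A toy model makes the intuition behind that gap concrete: the all-to-all barrier waits for the slowest link, so hash-based (ECMP-style) load balancing, which can pile several flows onto one link, can only lengthen job completion time relative to spraying the load evenly. This is an illustrative simulation of ours, not the methodology behind the results above:

```python
import random

def jct(link_loads, link_bw=1.0):
    """Job completion time: the barrier waits for the most loaded link."""
    return max(load / link_bw for load in link_loads)

def ecmp_jct(num_flows, num_links, flow_size=1.0, seed=0):
    """Hash-based (ECMP-style) balancing: flows can collide on a link."""
    rng = random.Random(seed)
    loads = [0.0] * num_links
    for _ in range(num_flows):
        loads[rng.randrange(num_links)] += flow_size
    return jct(loads)

def scheduled_jct(num_flows, num_links, flow_size=1.0):
    """Fully scheduled fabric: cells sprayed evenly over every link."""
    return jct([num_flows * flow_size / num_links] * num_links)

flows, links = 64, 16
print(scheduled_jct(flows, links))                            # ideal: 4.0
print(ecmp_jct(flows, links) >= scheduled_jct(flows, links))  # collisions never help: True
```

Because the maximum link load can never fall below the mean, even spraying is a lower bound on JCT in this model; hash collisions only push the slowest path, and therefore the barrier, further out.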

Studying the data another way, if we monitor the amount of priority flow control (PFC) sent from the network to the GPU, we see that 5% of the GPUs slow down the remaining 95% of the GPUs. In comparison, a fully scheduled fabric provides fully non-blocking performance, and the network never pauses the GPU.

Figure 5. Network to GPU flow control for Ethernet versus fully scheduled fabric with 1.33x speedup
 
This means that with a fully scheduled fabric, you can connect twice as many GPUs to the same size network. The goal of telemetry assisted Ethernet is to improve the performance of standard Ethernet by signaling congestion and improving load balancing decisions.

As I mentioned earlier, the relative merits of the various technologies vary by customer and are likely not constant over time. I believe Ethernet, or telemetry assisted Ethernet, although lower performance than fully scheduled fabrics, is an incredibly valuable technology that will be deployed widely in AI/ML networks.

So why would customers choose one technology over the other?

Customers who want to enjoy the heavy investment, open standards, and favorable cost-bandwidth dynamics of Ethernet should deploy Ethernet for AI/ML networks. They can improve the performance by investing in telemetry and minimizing network load through careful placement of AI jobs on the infrastructure.

Customers who want to enjoy the full non-blocking performance of an ingress virtual output queue (VOQ), fully scheduled, spray and re-order fabric, resulting in an impressive 1.9x better job completion time, should deploy fully scheduled fabrics for AI/ML networks. Fully scheduled fabrics are also great for customers who want to save cost and power by removing network elements, yet still achieve the same performance as Ethernet, with 2x more compute for the same network.

Cisco Silicon One is uniquely positioned to provide a solution for either of these customers with a converged architecture and industry-leading performance.

Figure 6. Evolve your network with Cisco Silicon One

Source: cisco.com

Tuesday, 30 May 2023

To the Cloud and Beyond―A Comprehensive Model for Enhanced NetOps and User Experience

Cloud computing has become wildly popular among IT organizations for a number of reasons, including its ability to enhance efficiency, security, agility, and cost-effectiveness. But now cloud features and principles have also become the building blocks of something even bigger and more all-encompassing: a unified IT operating model that spans people, devices, networks, applications, and things across the digital infrastructure.

With end-to-end visibility and centralized, cloud-based management, IT can monitor, manage, and control an organization’s entire networking, cloud, and security infrastructure. A unified cloud operating model makes it easier for organizations to pivot as their needs change. Organizations can quickly deploy innovative applications, respond to disruptions and threats, and scale performance and capacity. The model is an antidote to separate, complex, operational silos on-premises, on the internet, and in the cloud. The overall goal of the model is to dramatically improve the efficiency, reliability, and resiliency of IT operations, as well as the quality of user experience.

The Need for a Comprehensive Operating Model 


Recent research conducted by IDC has found IT staff worldwide engaged in a struggle with highly specialized, complex, and manual management tools and procedures in use across on-premises, internet, cloud, and security silos. Between all of the silos are management and security gaps. Integration is limited. Efficiency and time-to-market suffer.

Meanwhile, IT is being asked to innovate in the use of applications and data intelligence, to create great and secure user experiences, to scale up or down in response to demand, and to do it all efficiently and cost-effectively.

Enter the cloud operating model.

With the cloud operating model, cloud principles like anywhere access, self-service dashboards, policy automation, end-to-end visibility, microservices, continuous integration and continuous delivery (CI/CD), and extensibility can be applied across the entire digital infrastructure, from access to internet to cloud (Figure 1). That includes all endpoints and systems, whether they are on-premises, in the cloud, in remote offices, or mobile.

Figure 1. The Cloud Operating Model

With consistent policies and governance within and across operational domains, the cloud operating model can improve cross-functional collaboration, eliminating disparate processes and disjointed efforts that hamper better business outcomes.

An Ongoing Journey


Achieving a cloud operating model is a journey for organizations, requiring a significant shift in how they approach their IT operations:

  • A shift in thinking from viewing cloud and on-premises environments as separate entities to looking at how the best features of both can converge
  • A cultural shift that embraces breaking down silos, promoting collaboration, and encouraging cross-functional innovation
  • New skills, tools, and processes to manage infrastructure, such as automation, DevOps, and agile methodologies
  • Integration of cloud management platforms with legacy systems, which requires careful assessment and a migration strategy

Achieving a cloud operating model is not a one-time event but rather an ongoing journey of continuous improvement across the entire IT environment. Cloud features and a unified management platform provide the means to monitor, optimize, and innovate to help ensure that organizations are getting the most value from their investments.

Where to Begin?


Start by evaluating which cloud principles exist in which domains. At Cisco, we’re developing a new tool that helps organizations define their various infrastructure principles within the access network, software-defined WAN (SD-WAN), and data center. By overlaying principles on infrastructures, an organization can identify opportunities to integrate silos to help meet business and operational objectives.

Some organizations are starting the journey to the cloud operating model by extending SD-WAN connectivity across multiple clouds for simpler IT management and a better application experience. With a distributed SD-WAN, they can apply policy, visibility, control, and zero trust consistently across all clouds, software-as-a-service (SaaS), and middle-mile providers. Other organizations are planning to use this SD-WAN foundation to transition to a secure access service edge (SASE) architecture to connect network and security domains across branches and remote clients.

With our broad cloud and networking platform portfolio, Cisco provides a comprehensive set of solutions with the visibility, consistent policy governance, and insights-driven automation necessary to support an effective cloud operating model. For example, in campus networking, the Cisco Meraki platform supports many key cloud principles.

The Meraki dashboard provides cloud-based management for connected access points and IoT devices, plus monitoring and management of switches. Through the dashboard, configuration and access policies can be defined and automated throughout the network. The dashboard interface is a visual representation of all connected devices, showing the real-time status of each device. And Meraki has a marketplace of partner applications that leverage APIs to extend these capabilities across the network.
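Those same capabilities are scriptable. The sketch below only builds (without sending) a request for an organization's device statuses; the organization ID and API key are placeholders, and the v1 endpoint path reflects the public dashboard API as we understand it:

```python
from urllib.request import Request

BASE = "https://api.meraki.com/api/v1"  # Meraki dashboard API base URL

def device_statuses_request(org_id, api_key):
    """Build (without sending) a GET request for an organization's device statuses."""
    return Request(
        f"{BASE}/organizations/{org_id}/devices/statuses",
        headers={"Authorization": f"Bearer {api_key}"},
    )

req = device_statuses_request("123456", "YOUR_API_KEY")  # placeholder credentials
print(req.full_url)
```

Sending the prepared request (with real credentials) returns JSON status records that can feed dashboards or automation, which is the mechanism the partner marketplace applications build on.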

Source: cisco.com

Saturday, 27 May 2023

Secure Workload 3.8 delivers more productivity, lower TCO, and faster time to value

If the network is the skeleton, then applications are the lifeblood of today’s business. They connect organizations with their customers, employees, supply chains, and partners, and they even generate revenue. Safeguarding modern applications running across complex hybrid multicloud environments is paramount for every size and type of business. Cisco Secure Workload 3.8 brings a wealth of new capabilities to help security professionals protect critical application workloads by simplifying complexity and unifying network controls to improve their overall security posture.

Secure Workload protects the applications that power business in an infrastructure-, location-, and form factor-agnostic manner. It does this by employing both agent-based and agentless approaches to give customers deep visibility into every workload interaction, and it uses powerful AI/ML-driven automation to handle tasks that are beyond human scale, delivering accurate and consistent microsegmentation and continuously monitoring compliance across the application landscape.

Secure Workload 3.8 improves productivity and lowers the overall TCO for existing and new customers by delivering at least 50% greater scale on existing platforms. It also provides consistent agent-based and agentless microsegmentation for workloads deployed across hybrid multicloud, expanding the core microsegmentation use cases. And last but certainly not least, it strengthens the integration with Cisco Secure Firewall and Cisco Identity Services Engine (ISE).

The latest version of Secure Workload continues to focus on enhancing the user experience with more automation, simplifying the onboarding process, and adding new platform APIs. The end-to-end workflow helps customers achieve zero trust faster, improve operator productivity by 50% or more, and significantly reduce the time to value.

In the digital economy, speed and information are the key currencies. The persona-based reporting dashboard introduced in the 3.8 release delivers actionable intelligence on demand, reports the overall security health of applications, illustrates emerging trends based on historical data, and introduces new Forensics alerts based on the MITRE ATT&CK framework.

Figure – 1 Alerts based on MITRE ATT&CK Framework

Secure Workload 3.8 supports greater scale on the existing platform and connectors, which helps customers extend their hardware life and lowers the solution’s total cost of ownership. The updated scale numbers are shown below:

Table: Secure Workload 3.8 updated scale limits

And in our continued effort to deliver a true multicloud zero trust solution with agent and agentless flexibility for our customers, the Google Cloud Platform (GCP) connector in the 3.8 release delivers agentless microsegmentation for applications deployed in Google Cloud, adding to our existing support for AWS and Azure cloud.

Since the early days of its inception, Secure Workload has integrated with Identity Services Engine (ISE) to address user and device identity-based segmentation scenarios in customer environments. In the 3.8 release, Secure Workload integrates with ISE Passive ID, which ingests user, group, and other attributes from the customer’s Active Directory and/or Azure Active Directory and in turn enables endpoint identity-aware visibility and policy enforcement.

Two years ago, we launched the native integration between Secure Workload and Secure Firewall to provide defense in depth and an agentless approach for on-prem environments to achieve zero trust. With the 3.8 release, we are taking this integration to the next level with topology awareness, giving customers the granular control to discover, enforce, and automate select policies on a specific firewall or set of firewalls through Secure Workload.

Figure 2 – Defense in Depth and Virtual Patching – Secure Workload and Secure Firewall Integration

As you may know, critical vulnerabilities often remain unpatched in an environment despite best-of-breed vulnerability management solutions. Many factors contribute to this, and with virtual patching, Secure Workload and Secure Firewall seamlessly share threat intelligence to ensure that the right Snort signature is applied to protect against a known vulnerability present in the customer environment without breaking the application.
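The core of the virtual-patching idea can be sketched as a simple matching problem: given the CVEs observed on a workload and a catalog of intrusion-prevention rules tagged with the CVEs they cover, select the rules the firewall should enable. The rule IDs, catalog format, and CVE assignments below are made up for illustration; the actual Secure Workload / Secure Firewall exchange is richer than this.

```python
# Illustrative sketch of virtual patching: enable only the Snort rules that
# cover CVEs actually present on the workload. All IDs here are invented.

def rules_to_enable(workload_cves: set[str], rule_catalog: dict[int, set[str]]) -> set[int]:
    """Return rule IDs whose covered CVEs intersect the workload's CVEs."""
    return {sid for sid, covered in rule_catalog.items() if covered & workload_cves}

catalog = {
    1001: {"CVE-2021-44228"},                    # e.g. a Log4Shell detection rule
    1002: {"CVE-2017-0144"},                     # e.g. an EternalBlue rule
    1003: {"CVE-2021-44228", "CVE-2021-45046"},  # a broader Log4j rule
}
print(sorted(rules_to_enable({"CVE-2021-44228"}, catalog)))  # [1001, 1003]
```

Scoping rules to the vulnerabilities that are actually present keeps the inspection overhead on the firewall proportional to real risk.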

Source: cisco.com

Thursday, 25 May 2023

Deliver the Experience Your Customers Want with a Data-Informed Hybrid Work Strategy

Introduction: The Evolution of Hybrid Work


While hybrid work is not a new concept, it continues to evolve as we gather more information about employee work preferences and customers’ workplace and real estate goals. At Cisco, we leverage the insights gained from our own hybrid work environments, including our offices and collaboration centers, to shape our approach. In this blog, we will focus on two of our Hybrid Work building pillars—supporting hybrid work and digitizing real estate—and how data plays a vital role in optimizing workspaces. The remaining pillars, corporate sustainability and employee health and wellness, will be covered in our next blog.

The four pillars and corresponding data points for our latest hybrid work office transformation in Atlanta

Support Hybrid Work: Designing Spaces that Matter


Even as employees return to the office post-pandemic, companies are reconfiguring their real estate portfolios to align with their return-to-work strategies. Cisco has been proactive in reimagining the workplace by investing in collaborative hubs and, in some cases, downsizing office spaces to prioritize other areas.

Toronto & Chicago


Our hybrid work journey began with the design of an innovation center in Toronto, aimed at inspiring innovation and investment in Digital Transformation. This office, although constructed before the pandemic, marks some of our first steps in creating smart and collaborative workspaces. By leveraging a single converged network powered by Power over Ethernet (PoE) technology, we connected people, processes, and data, enhancing productivity and optimizing energy usage. Over time, we made adjustments to the space based on insights gathered, including downsizing the overall floorplate. Today, the Toronto office serves as a studio to educate people on the hybrid work journey.

In Chicago, our office layout was designed to accommodate a pre-pandemic work environment, with employees spending more time in the office. The majority of the layout was dedicated to “me” spaces (70%), such as personal offices and individual workstations, while the remaining portion focused on “we” spaces (30%) to foster collaboration. This office serves as a showcase for Cisco’s technology and collaboration devices, furthering our hybrid work journey.

Penn1 & Atlanta


Throughout each of these projects and our work-from-home experiment during the pandemic, we collected significant data to inform our future space designs. A 2021 Cisco Employee Survey revealed that 81% of our employees preferred to work outside the office for 3 to 5 days per week, with office visits primarily driven by learning and collaboration needs.

Armed with these valuable insights, we incorporated these findings into our subsequent major office transformations, including PENN1 in NYC and our latest office in Atlanta. In these locations, we purposefully designed specific spaces such as hot desks, huddle rooms, and collaboration areas to align with the identified trends and utilization patterns of our offices. Each of these offices was meticulously configured with layouts and technology to enhance productivity and collect valuable data for ongoing support of hybrid work. At both PENN1 and Atlanta, our commitment to hybrid work is evident in the deliberate reduction of space to meet workforce capacity, utilization of collaboration technology, video-enabled environments, prioritization of collaboration spaces over individual workstations, and the elimination of personal offices.

The PENN1 office follows a 30% “me” space to 70% “we” space ratio, which is the opposite of what our pre-pandemic offices were designed for. With the development of our latest office in Atlanta, the ratio of “me” to “we” spaces is now 10% to 90%. This clear shift reflects our data and observations on how people now utilize office space. Hybrid work means going to the office to collaborate with others, rather than focusing on individual tasks. The choices we made in our office designs demonstrate our dedication to creating environments where teams can thrive, engage in meaningful learning, collaborate seamlessly, and drive innovation.

Digitize Cisco Real Estate: Leveraging Data for Smarter Workspaces


Building on the previous section, we now explore how data and analytics have shaped Cisco’s decision-making process in creating workspaces that promote collaboration, learning, and productivity. By harnessing insights from various sources, we have been able to optimize the physical environment to align with employee needs and preferences. Here are a few examples of how data and analytics have guided our hybrid work approach.

Cisco Video Endpoints


Analyzing data from Cisco video endpoints revealed that the average meeting consisted of 3.7 in-person participants and 2.5 remote participants. Armed with this information, we intentionally designed small collaboration rooms, quiet spaces, and huddle rooms that cater to smaller groups. This approach fosters frequent and effective collaboration within a more intimate setting.
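The averages cited above are straightforward roll-ups of per-meeting participant counts. The sketch below shows the calculation on a handful of invented sample meetings; the record format is an assumption, not the telemetry schema the endpoints actually emit.

```python
# Minimal sketch: rolling per-meeting participant counts up into averages.
# The sample data is invented for illustration.

meetings = [
    {"in_person": 4, "remote": 2},
    {"in_person": 3, "remote": 3},
    {"in_person": 4, "remote": 2},
]

avg_in_person = sum(m["in_person"] for m in meetings) / len(meetings)
avg_remote = sum(m["remote"] for m in meetings) / len(meetings)
print(f"avg in-person: {avg_in_person:.1f}, avg remote: {avg_remote:.1f}")
# avg in-person: 3.7, avg remote: 2.3
```

At fleet scale the same aggregation, segmented by room and time of day, is what tells you how many small rooms versus large rooms a floor actually needs.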

Cisco Spaces


Cisco Spaces is the core of our Hybrid Work strategy, acting as the foundation that brings everything together. It transforms flat floor plans into dynamic and interactive 3D maps, helping us to better understand and strategize for space utilization. We continuously collect data around the clock from our collaboration devices, networking devices, and third-party devices, enabling us to gain valuable insights in real time. This data empowers us to make informed decisions across space design, safety, sustainability, and employee health and wellness. Whether it’s reimagining the layout of a room or helping facilities teams know which rooms to clean based on occupancy, this data provides us with the insights that drive an impactful and informed hybrid work practice.
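As a concrete example of occupancy data driving an operational decision, here is a hedged sketch of how occupancy counts could determine which rooms need cleaning. The record format and threshold are assumptions for illustration, not the Cisco Spaces API.

```python
# Hedged sketch: one way occupancy telemetry could drive a cleaning schedule.
# The data shape and threshold are assumptions, not the Cisco Spaces API.

def rooms_needing_cleaning(occupancy: dict[str, int], threshold: int = 1) -> list[str]:
    """Rooms occupied at least `threshold` times since the last clean."""
    return sorted(room for room, visits in occupancy.items() if visits >= threshold)

today = {"huddle-2a": 5, "board-room": 0, "hotdesk-zone": 12}
print(rooms_needing_cleaning(today))  # ['hotdesk-zone', 'huddle-2a']
```

Skipping unoccupied rooms is a small example of the broader pattern: letting building telemetry, rather than a fixed schedule, drive facilities work.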


Room Design


By utilizing data from Cisco Spaces, we identified a popular room layout at PENN1—a semi-open space with a high-top bar, low-profile seating, and limited power outlets. This configuration unexpectedly became a preferred spot for employees, encouraging focused collaboration and reducing distractions. Analyzing room utilization data through Cisco Spaces allowed us to recognize the demand for such spaces, leading us to implement similar designs in our latest office in Atlanta.

Employee Data


Industry standards revealed that employees typically visit the office to learn, collaborate, and socialize. This knowledge greatly influenced our space design, resulting in dedicated areas at Penn1 and Atlanta that cater to these functions. From large, connected rooms for training and workshops to smaller breakout rooms for collaboration, our spaces are purposefully designed to support these activities based on employee data.

By leveraging the power of data and analytics, Cisco has made informed decisions about space utilization, room design, and meeting configurations. This optimization not only enhances the physical environment but also ensures that our spaces align with the needs and preferences of our employees, facilitating a seamless transition into the hybrid work model.

You Can Be Next: Translating Data into Action


Now that we have discussed how data informs our approach to hybrid work, you may be wondering how to begin implementing your data-driven strategy. At Cisco, we recognize that smart buildings and hybrid work environments require a network of trusted partners to design, build, implement, and iterate — partners just like you. To get started on developing a data-first approach to hybrid work, consider the following options as your next steps:

1. Schedule a tour: Visit our new offices in Atlanta or PENN1 in NYC to witness firsthand the data-driven approach we have adopted and how it can benefit your customers. If you are an existing partner, reach out to your Product Marketing Manager (PMM) today.

2. Virtual tour: If an in-person visit is not possible, take a virtual tour of PENN1 to preview the capabilities of our smart building approach.

Together, let’s leverage data to create actionable insights and shape the future of hybrid work. By creating workspaces that employees want to return to, buildings that provide the necessary data to adapt to changing needs, and environments that foster sustainability and innovation, we can deliver the experience that both customers and employees desire.

Remember, the future of work is hybrid, and data is the key to unlocking its full potential. Together, we can create workspaces that inspire productivity, collaboration, and success. Let’s embrace the power of data and embark on this exciting journey toward a more connected, smart, and flexible work environment.

Source: cisco.com