Tuesday, 6 February 2024

Safeguard Your Network in a Post-Quantum World


Security is critical when transmitting information over any untrusted medium, particularly the internet. Cryptography is typically used to protect information exchanged over a public channel between two entities. However, the advent of quantum computers poses an imminent threat to existing cryptography. According to the National Institute of Standards and Technology (NIST), “When quantum computers are a reality, our current public key cryptography won’t work anymore… So, we need to start designing now what those replacements will be.”

Quantum computing threat


A quantum computer works with qubits, which can exist in multiple states simultaneously, based on the quantum mechanical principle of superposition. Thus, a quantum computer could explore many possible permutations and combinations for a computational task, simultaneously and swiftly, transcending the limits of classical computing.

While a sufficiently large and commercially feasible quantum computer has yet to be built, there have been massive investments in quantum computing from many corporations, governments, and universities. Quantum computers will empower compelling innovations in areas such as AI/ML and financial and climate modeling. Quantum computers, however, will also give bad actors the ability to break current cryptography.

Public-key cryptography is ubiquitous in modern information security applications such as IPsec, MACsec, and digital signatures. The current public-key cryptography algorithms are based on mathematical problems, such as the factorization of large numbers, which are daunting for classical computers to solve. Shor’s algorithm provides a way for quantum computers to solve these mathematical problems much faster than classical computers. Once a sufficiently large quantum computer is built, existing public-key cryptography (such as RSA, Diffie-Hellman, ECC, and others) will no longer be secure, which will render most current uses of cryptography vulnerable to attacks.
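
To see why this matters, here is a toy sketch (illustrative only, with tiny primes and none of the padding real RSA requires) of how the private key falls out of factoring the public modulus. Anyone who can factor n, as Shor's algorithm lets a quantum computer do efficiently, recovers the private exponent immediately:

```python
# Toy RSA (not production crypto): security rests on the difficulty of
# factoring n = p * q on classical hardware.
p, q = 61, 53              # tiny primes for illustration only
n = p * q                  # public modulus
phi = (p - 1) * (q - 1)    # Euler's totient, kept secret
e = 17                     # public exponent, coprime with phi
d = pow(e, -1, phi)        # private exponent (modular inverse, Python 3.8+)

msg = 42
cipher = pow(msg, e, n)           # encrypt with the public key (e, n)
assert pow(cipher, d, n) == msg   # decrypt with the private key (d, n)

def factor(n):
    """Brute force is infeasible for real key sizes on classical hardware,
    but Shor's algorithm makes this step efficient on a quantum computer."""
    for candidate in range(2, n):
        if n % candidate == 0:
            return candidate, n // candidate

p2, q2 = factor(n)                             # the quantum attacker's step
d_recovered = pow(e, -1, (p2 - 1) * (q2 - 1))  # private key recovered
assert d_recovered == d
```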

Store now, break later


Why worry now? Most transport security protocols, such as IPsec and MACsec, use public-key cryptography during the authentication and key-establishment phase to derive a session key. This shared session key is then used for symmetric encryption and decryption of the actual traffic.

Bad actors can use the “harvest now, decrypt later” approach to capture encrypted data right now and decrypt it later, when a capable quantum computer materializes. It is an unacceptable risk to leave sensitive encrypted data susceptible to impending quantum threats. In particular, if there is a need to maintain forward secrecy of the communication beyond a decade, we must act now to make these transport security protocols quantum-safe.

The long-term solution is to adopt post-quantum cryptography (PQC) algorithms to replace the current algorithms that are susceptible to quantum computers. NIST has identified some candidate algorithms for standardization. Once the algorithms are finalized, they must be implemented by the vendors to start the migration. While actively working to provide PQC-based solutions, Cisco already has quantum-safe cryptography solutions that can be deployed now to safeguard the transport security protocols.

Cisco’s solution


Cisco has introduced the Cisco session key import protocol (SKIP), which enables a Cisco router to securely import a post-quantum pre-shared key (PPK) from an external key source such as a quantum key distribution (QKD) device or other source of key material.

Figure 1. External QKD as key source using Cisco SKIP

For deployments that can use an external hardware-based key source, SKIP can be used to derive the session keys on both the routers establishing the MACsec connection (see Figure 1).
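
The exact SKIP protocol and key schedule are Cisco-internal, but the general idea of strengthening a session key with an externally imported PPK can be sketched as follows (in the spirit of RFC 8784, which mixes a PPK into IKEv2 key material; the names and HKDF construction here are illustrative assumptions):

```python
# Hedged sketch: mix an imported post-quantum pre-shared key (PPK) into
# session-key derivation so an attacker must break BOTH the classical
# exchange and the PPK to recover the session key.
import hashlib, hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]),
                         hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

classical_secret = b"placeholder: secret from classical key establishment"
ppk = b"placeholder: PPK imported from the external key source via SKIP"

prk = hkdf_extract(salt=ppk, ikm=classical_secret)
session_key = hkdf_expand(prk, info=b"macsec session key")
print(session_key.hex())
```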

With this solution, Cisco offers many benefits to customers, including:

  • Secure, lightweight protocol that is part of the network operating system (NOS) and does not require customers to run any additional applications
  • Support for “bring your own key” (BYOK) model, enabling customers to integrate their key sources with Cisco routers
  • The channel between the router and the key source used by SKIP is also quantum-safe, as it uses TLS 1.2 with a DHE-PSK cipher suite
  • Validated with several key-provider partners and end customers

Figure 2. Cisco SKS engine as the key source

In addition to SKIP, Cisco has introduced the session key service (SKS), a unique solution that enables routers to derive session keys without having to use an external key source.

Figure 3. Traditional session key distribution

The SKS engine is part of the Cisco IOS XR operating system (see Figure 2). Routers establishing a secure connection like MACsec will derive the session keys directly from their respective SKS engines. The engines are seeded with a one-time, out-of-band operation to make sure they derive the same session keys.

Unlike the traditional method (see Figure 3), in which the session keys themselves are exchanged on the wire, this quantum session key distribution approach sends only key identifiers on the wire. An attacker tapping the links cannot derive the session keys, because the key identifier alone is not sufficient (see Figure 4).

Figure 4. Quantum session key distribution
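
Cisco has not published the SKS derivation algorithm, but a toy sketch shows why shipping only a key identifier is safe when both ends already share a secret seed; the class name and HMAC construction below are illustrative assumptions, not the real implementation:

```python
# Toy model: two engines seeded with the same secret derive identical
# session keys from a public key identifier, so only the identifier
# crosses the wire, and it alone reveals nothing about the key.
import hashlib, hmac, os

class ToySessionKeyEngine:
    def __init__(self, seed: bytes):
        self._seed = seed            # provisioned once, out of band

    def derive(self, key_id: bytes) -> bytes:
        # Pseudorandom function of the shared seed and the identifier.
        return hmac.new(self._seed, b"session" + key_id,
                        hashlib.sha256).digest()

seed = os.urandom(32)                # the one-time, out-of-band seeding
router_a = ToySessionKeyEngine(seed)
router_b = ToySessionKeyEngine(seed)

key_id = os.urandom(16)              # the only value sent on the wire
assert router_a.derive(key_id) == router_b.derive(key_id)
```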

Cisco is leading the way with comprehensive and innovative quantum-safe cryptography solutions that are ready to deploy today.

Source: cisco.com

Saturday, 3 February 2024

Redefining the IT war room with end-to-end observability


Transforming the war room starts with Customer Digital Experience Monitoring (CDEM) to break down silos with correlated, cross-domain insights and efficiency for rapid resolutions.

Time is money, and commandeering a lot of time from many of the smartest and most expensive people across your organization, often at short notice, can be unthinkably expensive.

There’s the hourly cost of their time, plus the cost of lost opportunities as the work they were doing is delayed. And that’s far from the full story: the costs extend well beyond each person’s own input, because everybody needs time to speak, listen, consider, and work through the possibilities.

And yet, when a new software release rolls around, that’s exactly how many organizations respond. They can’t be sure what might go wrong with a software release, so they make sure all the right people are available, just in case.

When it’s obvious that something is going wrong in the application runtime environment, or a mission-critical application starts to experience performance problems that need to be fixed immediately, that same wide group is gathered to figure out the problem and determine the best way to fix it.

Meanwhile, reputational damage to the company is growing with every minute of disruption, and the financial clock is ticking with each minute spent identifying and remediating issues while customers and end users have limited or no access to the applications that make modern business work.

The war room is a blunt instrument that casts a wide net 


Convening an IT war room is born of a lack of visibility. The team must leverage their collective expertise to determine the likely root cause of a performance-impacting issue, because it’s typically not obvious to anyone at the outset exactly where the problem lies.

The time required to pinpoint the issue can be significant, even when the war room is filled with skilled, intelligent subject matter experts. That’s because modern applications are built on cloud-native architectures and can be accessed from anywhere using different devices. They leverage packaged code and dependencies deployed as microservices to increase developer speed and flexibility.

That includes containers, third-party libraries, and application programming interfaces (APIs) which create a complicated environment in which updates, changes, and conflicts between dependencies need to be constantly managed to ensure applications run optimally. If the application slows down, doesn’t work as it should, or crashes, the result is poor user experience and even lost business.

Application dependencies can also affect the security of an application. This is particularly true when an application depends on third-party code or libraries which could contain vulnerabilities which offer an attack path. That puts not only the application, but also user data, at risk.

For example, misconfiguration and even ransomware or distributed denial-of-service (DDoS) attacks can all produce performance degradation whose symptoms look confusingly similar to network packet loss, with no clear indication of the root cause.

Consider the scenario of a large supermarket at the height of holiday season shopping. Products are flying off the shelves and need frequent restocking throughout the day. It’s critical to know inventory availability right up to the minute, so shelves remain full. Inaccurate inventory or out-of-stock shelves undermine the trust the business has worked hard to build, not to mention causing lost sales.

At that point, the hand scanners used for inventory start to falter. They’re not reliably scanning, which means the movement of products from the stock room onto the shelves isn’t being recorded accurately. The team can no longer be sure what’s on the shelves, what’s left in the stockroom, what needs to be reordered and when it needs to arrive.

A call is made to the IT team and a war room is convened to investigate what’s causing the problem. The Wi-Fi network is an obvious culprit; however, as time passes, the networking team can’t find any Wi-Fi problems. Eventually, they realize the scanner firmware is to blame. The scanners themselves need to be replaced, and once they are, normal service resumes.

Customer Digital Experience Monitoring (CDEM) changes everything  


This story is one of many that illustrate the shortcomings of infrastructure monitoring which lacks visibility into the digital experience.

In this example, the war room participants must sequentially sort through all the different scanner dependencies, drawing on their collective experience to spot the most likely culprit in the least amount of time. The effort involves cross-functional teams, each investigating their own area of responsibility, so a similar level of effort and time is required from everyone. The result is that most teams can typically prove their “innocence,” that is, show that their area of responsibility does not harbor the root cause.

In effect, because they lack clear insight, each team spends a huge amount of expensive time looking for an issue that isn’t theirs to find. There’s a better way. Cisco Full-Stack Observability allows operational teams to completely change their troubleshooting perspective.

Customer Digital Experience Monitoring (CDEM), a capability of Cisco Full-Stack Observability (FSO) solutions, allows teams to track the user journey itself, starting with the device and traversing every touchpoint, including dependencies like APIs and microservices.

Had they used CDEM, the teams in our example would have seen the user journey failing at the first step. Eliminating their theoretical most likely culprit – the Wi-Fi network – would have taken just moments instead of hours, and attention would have immediately focused on the scanners themselves.

It’s easy to see how observability at this level fundamentally changes the IT war room, and dramatically accelerates mean time to resolution (MTTR) through bypassing many of the steps that teams would otherwise have to take.

Answers lie in observable telemetry data


War rooms are complicated by multiple different data sets surfaced by separate monitoring tools. For example, Network Ops looks at data from the network, while DevSecOps looks at data from the application and its third-party dependencies.

Achieving a complete view of all relevant application data from normal business operations is a massive task. Worse yet, it’s impossible to correlate these endless streams of incoming data within a workable timeframe using disparate tools and systems that were never designed for the job. That makes spotting anomalies across the full stack, let alone prioritizing and acting on them, virtually impossible in a reasonable timeframe.

Cisco Full-Stack Observability solutions democratize data access, breaking down cross-functional silos and bringing teams together to collaborate on the next best step for resolving problems. Customer Digital Experience Monitoring combines Cisco’s application observability capabilities with industry-leading network intelligence, allowing IT teams to quickly identify the root cause of issues before they hurt the overall performance of the application, affect the end user and ultimately the business.

Cisco’s solution provides insights into both the application and the network, with internet connectivity metrics for application operations and real-time application dependency mapping for network operations. This combined application and network view significantly reduces MTTR with actionable recommendations that help teams prioritize remediation activities based on business impact and criticality.

For instance, teams can see at which point along the user’s path performance degradation is occurring, or communication is failing altogether. Vitally, they have contextual visibility that helps them collaboratively identify, triage, and resolve issues because they’re all working from the same data sourced from every possible touchpoint, including the network, which is an area often missing from other solutions.

The result is the end of war rooms as we know them. Instead, teams have end-to-end visibility, correlated insights, and recommended actions all tied to business context, across applications, security, the network, and the internet. Only Cisco combines the vantage points of applications, networking, and security at scale to power true observability over the entire IT estate.

Source: cisco.com

Thursday, 1 February 2024

Reimagine Your Data Center for Responsible AI Deployments


Most days of the week, you can expect to see AI- and/or sustainability-related headlines in every major technology outlet. But finding a solution that is future-ready, with the capacity, scale, and flexibility that generative AI requires, and designed with sustainability in mind? That’s scarce.

Cisco is evaluating the intersection of just that, sustainability and technology, to create a more sustainable AI infrastructure that addresses what generative AI will do to the amount of compute needed in our future world. Advancements in today’s AI/ML data center infrastructure present both challenges and opportunities, and they can be at odds with goals related to energy consumption and greenhouse gas (GHG) emissions.

Addressing this challenge entails an examination of multiple factors, including performance, power, cooling, space, and the impact on network infrastructure. There’s a lot to consider. The following list lays out some important issues and opportunities related to AI data center environments designed with sustainability in mind:

1. Performance Challenges: The use of Graphics Processing Units (GPUs) is essential for AI/ML training and inference, but it can pose challenges for data center IT infrastructure from power and cooling perspectives. As AI workloads require increasingly powerful GPUs, data centers often struggle to keep up with the demand for high-performance computing resources. Data center managers and developers, therefore, benefit from strategic deployment of GPUs to optimize their use and energy efficiency.

2. Power Constraints: AI/ML infrastructure is constrained primarily by compute and memory limits. The network plays a crucial role in connecting multiple processing elements, often sharding compute functions across various nodes. This places significant demands on power capacity and efficiency. Meeting stringent latency and throughput requirements while minimizing energy consumption is a complex task requiring innovative solutions.

3. Cooling Dilemma: Cooling is another critical aspect of managing energy consumption in AI/ML implementations. Traditional air-cooling methods can be inadequate for AI/ML data center deployments, and they can also be environmentally burdensome. Liquid cooling offers a more efficient alternative that consumes less energy than forced-air cooling, but it requires careful integration into data center infrastructure.

4. Space Efficiency: As the demand for AI/ML compute resources continues to grow, there is a need for data center infrastructure that is both high-density and compact in its form factor. Designing with these considerations in mind can improve efficient space utilization and high throughput. Deploying infrastructure that maximizes cross-sectional link utilization across both compute and networking components is a particularly important consideration.

5. Investment Trends: Looking at broader industry trends, research from IDC predicts substantial growth in spending on AI software, hardware, and services. The projection indicates that this spending will reach $300 billion in 2026, a considerable increase from a projected $154 billion for the current year. This surge in AI investments has direct implications for data center operations, particularly in terms of accommodating the increased computational demands and aligning with ESG goals.

6. Network Implications: Ethernet is currently the dominant underpinning for AI in the majority of use cases, thanks to its cost economics, scale, and ease of support. According to the Dell’Oro Group, by 2027 as much as 20% of all data center switch ports will be allocated to AI servers, which highlights the growing significance of AI workloads in data center networking. Furthermore, the challenge of integrating small-form-factor GPUs into data center infrastructure is a noteworthy concern from both a power and cooling perspective; it may require substantial modifications, such as the adoption of liquid cooling solutions and adjustments to power capacity.

7. Adopter Strategies: Early adopters of next-gen AI technologies have recognized that accommodating high-density AI workloads often necessitates the use of multisite or micro data centers. These smaller-scale data centers are designed to handle the intensive computational demands of AI applications. However, this approach places additional pressure on the network infrastructure, which must be high-performing and resilient to support the distributed nature of these data center deployments.

As a leader in designing and supplying the infrastructure that carries the world’s internet traffic, Cisco is focused on accelerating the growth of AI and ML in data centers with efficient energy consumption, cooling, performance, and space utilization in mind.

These challenges are intertwined with the growing investments in AI technologies and the implications for data center operations. Addressing sustainability goals while delivering the necessary computational capabilities for AI workloads requires innovative solutions, such as liquid cooling, and a strategic approach to network infrastructure.

The new Cisco AI Readiness Index shows that 97% of companies say the urgency to deploy AI-powered technologies has increased. To address the near-term demands, innovative solutions must address key themes — density, power, cooling, networking, compute, and acceleration/offload challenges.

We want to start a conversation with you about the development of resilient and more sustainable AI-centric data center environments – wherever you are on your sustainability journey. What are your biggest concerns and challenges for readiness to improve sustainability for AI data center solutions?

Source: cisco.com

Tuesday, 30 January 2024

How Life-Cycle Services Can Help Drive Business Outcomes


For most organizations, the journey to a digital-first business is not yet complete. While many have implemented new technologies to enable digital capabilities across the business, modernizing IT infrastructure and applications requires ongoing planning and investment. In fact, a recent IDC survey found that 49% of respondents identified their organization as only “somewhat digital,” with many in the process of transforming portions of the business to digital. With so much transformation still required, many CIOs and IT managers are prioritizing projects that will help drive new digital-first business models.

Unfortunately, while technology innovations promise to deliver significant results for business managers, the reality of implementation and adoption is often very different. CIOs and IT managers are increasingly tasked with not just deploying and integrating these complex solutions, but with delivering specific, measurable business outcomes to key stakeholders across the organization. IDC surveys show that most organizations continue to prioritize strategies focused on improved customer and employee experiences, better operational efficiencies, achieving sustainability goals, and expanding products into new markets. Delivering critical insights to business managers to enable real-time data analysis and decision-making is key to driving these strategies. While the specific business outcomes vary by industry and region, they are united by one common thread: they are all driven by technology.

Conversations with CIOs and IT managers reveal that a critical and difficult first step is making sure IT objectives and KPIs can be aligned with measurable, specific business outcomes across the organization. Aligning IT and business strategies has long been a goal, but managing a digital-first business to achieve desired outcomes across the organization has increased its importance. Such alignment is a difficult challenge for IT organizations that often lack the skills and resources for this exercise. Business managers also struggle to understand underlying IT infrastructure, further complicating the process of aligning strategic outcomes across IT and the digital-first business.

To help, services partners are offering comprehensive portfolios of outcomes-driven, life-cycle services designed to help customers align technology, operational, and business outcomes to accelerate value realization. These services are typically featured in packages that include planning and advisory, implementation and deployment, adoption and ongoing optimization, and support and training. IDC believes life-cycle services partners committed to demonstrating the value of technology for a digital business should incorporate the following capabilities:

  • Early emphasis on defining desired technical, operational, and business outcomes with required stakeholders across the organization.
  • Developed methodologies that can help align technology implementations and operational outcomes with business goals by establishing key performance indicators and objective metrics for tracking progress.
  • Highly skilled talent with the right mix of business and technology skills, plus certifications in new and emerging technologies across IT and network solutions, and continuous engagement throughout the life cycle.
  • Ongoing monitoring and reporting through dashboards that clearly demonstrate how the IT organization is leveraging technology to meet the needs of business managers.
  • Extensive technology-driven capabilities that can help meet key risk management objectives, both as part of technology implementations and ongoing operations.

In addition, CIOs should ensure that services partners can demonstrate an integrated approach to identifying, measuring, and monitoring key technology, operational, and business KPIs throughout the life cycle. While most organizations focus on implementation and onboarding, the value of most technology solutions is delivered well after the initial project is complete. Life-cycle services partners should be able to identify and track key objectives that demonstrate ongoing adoption and optimization to ensure organizations are realizing the full value of technology solutions.

Not surprisingly, IDC research shows that organizations are seeing a number of benefits by using life-cycle services partners focused on achieving customer success. Respondents in a recent IDC survey highlighted the following:

  • 40% reported improving the overall performance of the solution.
  • 40% were able to deliver more value to business managers.
  • 38% indicated they adopted new implementations faster.
  • 36% reported expanding adoption to improve business results.

For CIOs looking to transform the IT organization from a cost center to an “innovation driver” across the business, these benefits are critical to realizing the promise of complex technology solutions. Life-cycle services partners with proven processes and methodologies connecting technology, operational, and business outcomes can help resource-strapped IT organizations demonstrate the full value of technology innovations and drive direct, tangible business results. IDC believes life-cycle services partners who can demonstrate these capabilities are well-positioned to help organizations seeking to drive faster adoption while delivering the desired outcomes across the business.

Source: cisco.com

Saturday, 27 January 2024

Improving Audience Understanding and Store Operations with EVERYANGLE and Meraki

Understanding how to best serve customers is a primary focus for retailers. However, gaining this understanding can be complex. Retailers need to know what their customers are buying, when they’re buying it, and their feelings while shopping. Stationing staff members in the store to gauge customer reactions is not an efficient solution. This is where Meraki and EVERYANGLE come into play, enhancing the customer-focused daily operations of the Cisco Store.

The Cisco Store uses four Meraki camera models. The MV12 and MV63 are directional cameras. The indoor MV12 offers a choice of a wide or narrow field of view (FoV) and provides intelligent object and motion detection, analytics, and easy operation via the Meraki dashboard. The outdoor MV63 monitors the entrances and exits of the store.

Meanwhile, the MV32 and MV93 are 360° fish-eye cameras. The indoor MV32 combines an immersive de-warped FoV with intelligent object detection and streamlined operation via the Meraki dashboard, in addition to addressing major security vulnerabilities. The outdoor MV93 offers panoramic wide area coverage, enhancing surveillance capabilities even in low light.

The data from these Meraki cameras is utilized by EVERYANGLE in the Cisco Store in various ways.

Footfall Intelligence and Customer Demographics


A challenge for physical stores is obtaining metrics comparable to online stores, making it difficult to tailor the retail experience effectively. EVERYANGLE’s technology levels the playing field for physical retailers.

EVERYANGLE uses data from the directional cameras MV12 and MV63 to help the Cisco Store better understand its visitors. The Next Generation Footfall App breaks down customer genders and ages, monitors their satisfaction levels post-visit, and tracks the time spent in various store sections. For example, data from a Cisco Live event revealed a 50:50 male to female customer ratio, contrary to the expected 60:40, leading to adjustments in the Store’s product range.

EVERYANGLE determines purchase conversion rates at physical locations by analyzing integrated sales data and foot traffic. Their machine learning and AI algorithms provide 95% accurate customer insights. Staff members are automatically excluded from these insights, ensuring data accuracy. 
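
As a rough illustration of the arithmetic (the data shapes below are hypothetical, not the EVERYANGLE API), conversion is simply transactions divided by genuine-customer footfall, with staff detections excluded from the denominator:

```python
# Hypothetical footfall and sales records for one trading day.
from datetime import date

footfall_events = [
    {"day": date(2024, 1, 20), "person_type": "customer"},
    {"day": date(2024, 1, 20), "person_type": "staff"},     # excluded
    {"day": date(2024, 1, 20), "person_type": "customer"},
    {"day": date(2024, 1, 20), "person_type": "customer"},
]
transactions = [{"day": date(2024, 1, 20), "total": 59.99}]

def conversion_rate(day: date) -> float:
    visitors = sum(1 for e in footfall_events
                   if e["day"] == day and e["person_type"] == "customer")
    sales = sum(1 for t in transactions if t["day"] == day)
    return sales / visitors if visitors else 0.0

print(f"Conversion: {conversion_rate(date(2024, 1, 20)):.0%}")  # -> 33%
```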

EVERYANGLE’s True Customer Identification accurately distinguishes genuine shoppers from non-customers. This empowers retailers with precise customer data, crucial for targeted strategies and store optimization, ensuring decisions reflect real customer activity.


The Cisco Store can thus easily gauge customer demographics, engagement, and group dynamics without a heavy in-store staff presence, adjusting displays and marketing tactics accordingly. Fortunately, we have seen an increase in positive sentiment from when customers enter the Cisco Store to when they exit!

Footfall Intelligence 


Customer Demographic Breakdown


Queue Counting and Dwell Times


This data is used to maintain smooth store operations and continuously improve performance. The fish-eye cameras MV32 and MV93 monitor the checkout lines: a threshold on the queue count allows staff at the checkouts to be adjusted as needed. And if people spend comparatively longer at certain stations, we can begin to understand whether that longer dwell time translates into more sales of those specific products.
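
A minimal sketch of that threshold logic might look like the following; the threshold value and names are assumptions for illustration, not the Cisco Store’s actual integration:

```python
# When the camera's queue-zone people count exceeds a threshold,
# prompt staff to open another checkout.
QUEUE_THRESHOLD = 4

def check_queue(camera_name: str, people_in_queue_zone: int) -> None:
    if people_in_queue_zone > QUEUE_THRESHOLD:
        print(f"[{camera_name}] {people_in_queue_zone} waiting: "
              "open another checkout")
    else:
        print(f"[{camera_name}] queue OK ({people_in_queue_zone} waiting)")

check_queue("checkout-mv32", people_in_queue_zone=6)
```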


In-Store Security


Meraki’s people detection capabilities, integrated with EVERYANGLE, help the Cisco Store maintain top-notch security. Cameras, integrated with the point of sale (POS) system, anonymously track high-value purchases and returns, aiding in fraud prevention. 

Meraki and EVERYANGLE enable the Cisco Store to better understand its customers and serve them effectively, prioritizing their security and privacy. The analytics and dashboards facilitate customer service improvement, ensuring customers leave with a positive shopping experience.

Source: cisco.com

Thursday, 25 January 2024

Maximizing Operational Efficiency: Introducing our New Smart Agent Management for Cisco AppDynamics


Application performance monitoring (APM) remains a key pillar of any observability strategy. Overwhelmed IT infrastructure and operations teams rely on APM for the powerful application and business insights they need to deliver flawless digital experiences to their end users. But APM deployments at application scale can be complex and difficult to maintain, costing teams time that would be better spent on business KPIs.

Turn maintenance time into innovation time


Cisco continuously looks for every opportunity to use automation and intelligence to give time back to our customers, and we are fully committed to helping them reduce the stress and inefficiency caused by the ever-growing complexity of technologists’ IT environments. I’m pleased to share a major innovation in the Cisco Full-Stack Observability portfolio: Smart Agent for Cisco AppDynamics, which enables simplified full-stack application instrumentation and centralized agent lifecycle management.

Simplified agent management – focus on what matters most


An average-sized organization may have upward of 40,000 agents deployed, and I’ve spoken with some larger organizations that run more than one million agents to support massively scalable applications! Keeping all those agents updated to the latest version can be complicated and time-consuming, and it takes critical staff time away from actually managing application performance.

But the business impacts can be even greater. Security risks can occur at any time, and to keep your IT environments safe, it is critical to maintain good agent management and version compliance. Failure to do so can expose teams to unnecessary risks that may have otherwise been resolved in the latest agent releases.

Good agent management also allows you to take advantage of the latest innovations released each month. New features can provide powerful new insights, but taking advantage of them requires environments to be updated with the latest agents. That isn’t possible without a structured, automated approach to agent management!
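
To make the scale problem concrete, here is a generic inventory scan that flags out-of-date agents; the data shapes are hypothetical, and this is not the Smart Agent implementation:

```python
# Flag agents below the latest release so upgrades can be scheduled.
LATEST = (24, 1, 0)

agents = [
    {"host": "web-01", "version": "24.1.0"},
    {"host": "web-02", "version": "23.7.2"},
    {"host": "db-01",  "version": "22.11.0"},
]

def parse(version: str) -> tuple:
    return tuple(int(part) for part in version.split("."))

for agent in agents:
    if parse(agent["version"]) < LATEST:
        print(f"{agent['host']} is on {agent['version']}; "
              f"schedule upgrade to {'.'.join(map(str, LATEST))}")
```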

Centralized agent visibility on Cisco AppDynamics

How we made it simple


Cisco is making it easier than ever for customers to manage their agent fleets with the introduction of Smart Agent for Cisco AppDynamics. Its centralized agent lifecycle management allows you to onboard new applications faster, quickly identify out-of-date agents, and easily conduct upgrades. What once took many hours of manual instrumentation now requires just a few minutes and a few clicks.

Smart Agent is deployed on each host, allowing teams to remotely install and upgrade Cisco AppDynamics agents from a centralized agent management console with just a few clicks. The console flags old and outdated agents and lets IT teams select them and push upgrades without coding or scripts. Users can also install new agents directly from the agent management console when instrumenting new applications. With no need for manual intervention, teams can focus on what matters for the business, react quickly to security events, and take advantage of new agent-based functionality.

Upgrade Cisco AppDynamics agents with just a few clicks.

Our dedication to simplification


Agent lifecycle automation is just the first step in our journey toward simplification for our customers. Soon, Smart Agent will be able to automatically instrument new applications from a single-agent installation: intelligent auto-detect and auto-deploy capabilities, guided by Smart Agent policies, will determine which agents are needed and then automatically download, install, and configure only those agents. Smart Agent will reduce instrumentation time from hours or days to minutes.

Source: cisco.com

Tuesday, 23 January 2024

New M6 based CSW-Cluster Hardware


This blog covers hardware updates for the Cisco Secure Workload on-premises platform. The cluster hardware comprises UCS servers and Nexus switches, which must be refreshed in line with the end-of-life (EOL) cycles of those components. In this blog we will discuss the new M6 hardware platform and its benefits.

Secure Workload is a Cisco security solution that offers microsegmentation and application security across multi-cloud environments, available in both SaaS and on-premises flavors. The two offerings have complete feature parity, yet many customers choose the on-premises cluster over SaaS because of business-driven requirements, especially in the banking, finance, and manufacturing verticals. Let us first look at microsegmentation and the role of the Secure Workload hardware cluster.

Microsegmentation is being adopted by many enterprises as a preventive control based on the zero-trust principle. It helps protect applications and data by preventing lateral movement by bad actors and containing the blast radius during an active attack. Deploying zero-trust microsegmentation, however, is a hard, operationally intensive task, and the difficult part is the policy life cycle. An application’s network requirements keep evolving as you upgrade, patch, or add new features, and without microsegmentation those changes go unnoticed because workloads can communicate with each other freely. Under zero-trust microsegmentation, you create a micro-perimeter around each workload, whitelisting the intended traffic and blocking everything else (an allow-list model), so every evolving network requirement gets blocked unless a policy life-cycle mechanism is in place. Application teams can never provide exact communication requirements, because those requirements keep changing; hence automatic detection of policies and policy changes is required.

The Secure Workload on-premises cluster is available in two form factors: small (8U) and large (39U) appliances. Cisco offers an appliance-based on-premises solution for predictability and performance. Many vendors provide virtual machine (VM) based appliances with required specifications, but the underlying hardware may be shared with other applications, which can compromise performance; troubleshooting performance issues also becomes challenging, especially for applications that run AI/ML processing over large datasets. The Cisco appliances ship as prebuilt racks with hardened stacks of servers and Nexus 9000 switches, so capacity, the number of supported workloads, and other performance parameters can be predicted accurately.

The 3.8 software release optimizes appliance performance, supporting 50-100% more workloads on the same hardware. Existing customers with M5 appliances can now support almost double the number of workloads on their existing investment, which reduces their total cost of ownership (TCO). The old and new supported-workload numbers are shown below.

[Table: old and new supported-workload counts per appliance]

All current appliances are based on the Cisco UCS C220 M5 Gen 2 series. The end-of-sale/end-of-life announcement for M5 series servers was published in May 2023, and the M5-based Secure Workload cluster was announced EOS/EOL on 17 August 2023 (link). Even though the M5 cluster will be supported for another few years, there are clear benefits to upgrading the cluster to M6.

Let us understand how microsegmentation policies are detected and enforced in Cisco Secure Workload (CSW). Network telemetry is collected from all agent-based and agentless workloads. AI/ML-based application dependency mapping (ADM) runs on this dataset to detect policies and policy changes. Policies are calculated per workload and then pushed to the workloads for enforcement, leveraging the native firewalling capabilities of each operating system. This is a huge dataset to process for policy detection. AI/ML tools are CPU intensive and demand substantial resources for fast processing: the larger the dataset, the longer the processing time and the more CPU horsepower the cluster needs to produce granular policies. The cluster also needs a fast internal network, because the application is distributed across the cluster nodes, which must communicate constantly. These performance requirements drive the need for more CPU resources and faster network connectivity. Although the existing hardware configuration is sufficient for today’s requirements, future releases will add features and functionality that may need additional resources. Hence, with the 3.8 release, we are launching support for the new M6 Gen 3 appliance for both the 8U and 39U platforms. The processing power comes from the latest Cisco C-series Gen 3 servers with the latest Intel processors, paired with newer Nexus 9000 switches. The new Intel processors offer more cores per socket, increasing the cluster’s total processing capacity and providing more horsepower for AI/ML-based ADM processing. The additional cores in each node boost the overall performance of the cluster.
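
To make the enforcement step concrete, here is a hedged sketch that renders a couple of hypothetical detected policies into allow-list firewall rules; CSW’s real policy schema and agent logic are not public, so the shapes below are illustrative only:

```python
# Render detected allow-list policies into iptables commands: permit the
# intended flows, then drop everything else (allow-list model).
detected_policies = [
    {"src": "10.1.1.0/24", "dst_port": 443, "proto": "tcp"},    # web -> app
    {"src": "10.1.2.10/32", "dst_port": 5432, "proto": "tcp"},  # app -> db
]

rules = [
    f"iptables -A INPUT -p {p['proto']} -s {p['src']} "
    f"--dport {p['dst_port']} -j ACCEPT"
    for p in detected_policies
]
rules.append("iptables -A INPUT -j DROP")  # block all non-whitelisted traffic

for rule in rules:
    print(rule)
```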

Any hardware upgrade is a difficult IT task. To simplify it, we have made the migration from M4/M5 to M6 seamless by qualifying and documenting the complete process, step by step, in the migration guide. The guide also lists the checks to carry out before and after migration to confirm that all data has been migrated correctly. The cluster’s existing configuration, along with flow data, is backed up using the data backup and restore (DBR) functionality and restored on the new cluster after migration, ensuring no data loss. Agents can be configured to re-home automatically to the new cluster, so reinstalling agents is not needed.

In security, mean time to detect and mean time to respond (MTTD/MTTR) must be as low as possible, and the M6 upgrade brings faster threat and policy detection and response, reducing both.

Source: cisco.com