Saturday, 20 April 2024

Cisco Hypershield: Reimagining Security

It is no secret that cybersecurity defenders struggle to keep up with the volume and craftiness of current-day cyber-attacks. A significant reason for the struggle is that security infrastructure has yet to evolve to effectively and efficiently stymie modern attacks. The security infrastructure is either too unwieldy and slow or too destructive. When the security infrastructure is slow and unwieldy, the attackers have likely succeeded by the time the defenders react. When security actions are too drastic, they impair the protected IT systems to such an extent that the actions could be mistaken for the attack itself.

So, what does a defender do? The answer to the defender’s problem is a new security infrastructure — a fabric — that can autonomously create defenses and produce measured responses to detected attacks. Cisco has created such a fabric — Cisco Hypershield — that we discuss in the paragraphs below.

Foundational principles


We start with the foundational principles that guided the creation of Cisco Hypershield. These principles provide the primitives that enable defenders to escape the “damned-if-you-do and damned-if-you-don’t” situation we alluded to above.

Hyper-distributed enforcement

IT infrastructure in a modern enterprise spans privately run data centers (private cloud), the public cloud, bring-your-own-device (BYOD) endpoints and the Internet of Things (IoT). In such a heterogeneous environment, centralized enforcement is inefficient because traffic must be shuttled to and from the enforcement point. The shuttling creates networking and security design challenges. The answer to this conundrum is to distribute enforcement points close to the workloads.

Cisco Hypershield comes in multiple enforcement form factors to suit the heterogeneity in any IT environment:

1. Tesseract Security Agent: Here, security software runs on the endpoint server and interacts with processes and the operating system kernel using the extended Berkeley Packet Filter (eBPF). eBPF is a framework on modern operating systems that enables programs supplied from user space (in this case, by the Tesseract Security Agent) to safely carry out enforcement and monitoring actions in the kernel. A minimal eBPF sketch follows this list.
2. Virtual/Container Network Enforcement Point: Here, a software network enforcement point runs inside a virtual machine or container. Such enforcement points are instantiated close to the workload and protect fewer assets than the typical centralized firewall.
3. Server DPUs: Cisco Hypershield’s architecture supports server Data Processing Units (DPUs). Thus, in the future, enforcement can be placed on networking hardware close to the workloads by running a hardware-accelerated version of our network enforcement point in these DPUs. The DPUs offload networking and security processing from the server’s main CPU complex and run it in a secure enclave.
4. Smart Switches: Cisco Hypershield’s architecture also supports smart switches. In the future, enforcement will be placed in other Cisco Networking elements, such as top-of-rack smart switches. While not as close to the workload as agents or DPUs, such switches are much closer than a centralized firewall appliance.
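
To make the eBPF idea in item 1 concrete, here is a minimal sketch using the open-source bcc toolkit. It is illustrative only, not Tesseract Security Agent code: it simply traces which process opens files, the kind of kernel-level visibility an agent can build on (assumes Linux, root privileges, and bcc installed).

```python
# Minimal bcc/eBPF sketch: log the process name behind each openat() syscall.
# Illustrative only; the Tesseract Security Agent's actual programs are not public.
from bcc import BPF

PROG = r"""
#include <uapi/linux/ptrace.h>

int trace_openat(struct pt_regs *ctx) {
    char comm[16];
    bpf_get_current_comm(&comm, sizeof(comm));   // name of the calling process
    bpf_trace_printk("openat by %s\n", comm);
    return 0;
}
"""

b = BPF(text=PROG)
# Attach to the openat syscall entry; the symbol name varies by kernel version.
b.attach_kprobe(event=b.get_syscall_fnname("openat"), fn_name="trace_openat")

print("Tracing openat() calls... Ctrl-C to stop")
b.trace_print()
```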

Centralized security policy

The usual retort to distributed security enforcement is the nightmare of managing independent security policies per enforcement point. The cure for this problem is the centralization of security policy, which ensures that policy consistency is systematically enforced (see Figure 1).

Cisco Hypershield follows the path of policy centralization. No matter the form factor or location of the enforcement point, the policy being enforced is organized at a central location by Hypershield’s management console. When a new policy is created or an old one is updated, it is “compiled” and intelligently placed on the appropriate enforcement points. Security administrators always have an overview of the deployed policies, no matter the degree of distribution in the enforcement points. Policies follow workloads as they move, for instance, from on-premises infrastructure to the public cloud.

Figure 1: Centralized Management for Distributed Enforcement
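
To make the compile-and-place idea concrete, here is a hedged Python sketch. The policy model and names are hypothetical, invented for illustration; Hypershield’s actual policy representation is not public.

```python
# Hedged sketch of central policy "compilation": one source of truth rendered
# into per-enforcement-point rule sets. All names here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Policy:
    name: str
    src: str      # workload label, e.g. "env:dev"
    dst: str
    action: str   # "allow" | "alert" | "block"

CENTRAL_POLICIES = [
    Policy("prod-isolation", "env:dev", "env:prod", "block"),
    Policy("billing-to-db", "app:billing", "db:orders", "allow"),
]

def compile_for(local_labels: set[str]) -> list[Policy]:
    """Place only the rules relevant to the workloads behind this point."""
    return [p for p in CENTRAL_POLICIES
            if p.src in local_labels or p.dst in local_labels]

# An agent protecting the billing workload receives just its slice of policy:
print(compile_for({"app:billing"}))
```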
 
Hitless enforcement point upgrade

The nature of security controls is such that they tend to get outdated quickly. Sometimes, this happens because a new software update has been released. Other times, new applications and business processes force a change in security policy. Traditionally, neither scenario has been accommodated well by enforcement points — both acts can be disruptive to the IT infrastructure and present a business risk that few security administrators want to undertake. A mechanism that makes software and policy updates normal and non-disruptive is called for!

Cisco Hypershield has precisely such a mechanism, called the dual dataplane. This dataplane supports two data paths: a primary (main) and a secondary (shadow). Traffic is replicated between the primary and the secondary. Software updates are first applied to the secondary dataplane, and when fully vetted, the roles of the primary and secondary dataplanes are switched. Similarly, new security policies can be applied first to the secondary dataplane, and when everything looks good, the secondary becomes the primary.
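
The shape of the mechanism can be sketched in a few lines of Python: mirror traffic to the shadow path, vet the update there, then swap roles. This is a toy model under invented names, not Hypershield’s implementation.

```python
# Toy model of a dual dataplane: replicate traffic to a shadow path, vet an
# update there, then swap roles. Invented names; not Hypershield's code.
class Dataplane:
    def __init__(self, version: str):
        self.version = version

    def process(self, packet: str) -> str:
        return f"v{self.version} handled {packet}"

class DualDataplane:
    def __init__(self):
        self.primary = Dataplane("1.0")
        self.shadow = Dataplane("1.0")

    def handle(self, packet: str) -> str:
        self.shadow.process(packet)          # replicated copy; observed only
        return self.primary.process(packet)  # only the primary's verdict counts

    def upgrade(self, new_version: str, vetted) -> None:
        self.shadow = Dataplane(new_version)
        if vetted(self.shadow):              # e.g. compare verdicts on mirrored traffic
            self.primary, self.shadow = self.shadow, self.primary

dp = DualDataplane()
dp.upgrade("2.0", vetted=lambda plane: True)
print(dp.handle("pkt-1"))  # -> v2.0 handled pkt-1
```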

The dual dataplane concept enables security administrators to upgrade enforcement points without fear of business disruption (see Figure 2).

Figure 2: Cisco Hypershield Dual Dataplane 

Complete visibility into workload actions

Complete visibility into a workload’s actions enables the security infrastructure to establish a “fingerprint” for it. Such a fingerprint should include the types of network and file input-output (I/O) that the workload typically performs. When the workload takes an action that falls outside the fingerprint, the security infrastructure should flag it as an anomaly that requires further investigation.

Cisco Hypershield’s Tesseract Security Agent form factor provides complete visibility into a workload’s actions via eBPF, including network packets, file I/O, other system calls and kernel functions. Of course, the agent alerts on anomalous activity when it sees it.

Graduated response to risky workload behavior

Security tools amplify the disruptive capacity of cyber-attacks when they take drastic action on a security alert. Examples of such action include quarantining a workload (or an entire application) from the network, or shutting the workload or application down. For workloads of marginal business importance, drastic action may be fine. However, taking such action for mission-critical applications (for example, a supply chain application for a retailer) often defeats the business rationale for security tools. The disruptive action hurts even more when the security alert turns out to be a false alarm.

Cisco Hypershield in general, and its Tesseract Security Agent in particular, can generate a graduated response. For example, Cisco Hypershield can respond to anomalous traffic with an alert rather than a block when instructed. Similarly, the Tesseract Security Agent can react to a workload attempting to write to a new file location by denying the write rather than shutting down the workload.
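
A minimal sketch of that decision logic, with invented criticality labels (illustrative Python, not product code):

```python
# Illustrative only: choose a measured response based on asset criticality
# instead of a one-size-fits-all quarantine. Labels are invented.
def respond(anomaly: str, criticality: str) -> str:
    if criticality == "mission-critical":
        # Deny the single risky action, keep the workload running, and alert.
        return f"alert + deny '{anomaly}'"
    if criticality == "standard":
        return f"block '{anomaly}'"
    return "quarantine workload"  # marginal assets can tolerate drastic action

print(respond("write to /etc/new-path", "mission-critical"))
# -> alert + deny 'write to /etc/new-path'
```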

Continuous learning from network traffic and workload behavior

Modern-day workloads use services provided by other workloads. These workloads also access many operating system resources such as network and file I/O. Further, applications are composed of multiple workloads. A human security administrator can’t collate all the applications’ activity and establish a baseline. Reestablishing the baseline is even more challenging when new workloads, applications and servers are added to the mix. With this backdrop, manually determining anomalous behavior is impossible. The security infrastructure needs to do this collation and sifting on its own.

Cisco Hypershield has components embedded into each enforcement point that continuously learn the network traffic and workload behavior. The enforcement points periodically aggregate their learning into a centralized repository. Separately, Cisco Hypershield sifts through the centralized repository to establish a baseline for network traffic and workloads’ behavior. Cisco Hypershield also continuously analyzes new data from the enforcement points as the data comes in to determine if recent network traffic and workload behavior is anomalous relative to the baseline.
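
Hypershield’s learning models are not public, but the shape of the idea, learning a steady-state baseline and flagging deviations, can be sketched with toy numbers:

```python
# Toy baseline-and-anomaly check over per-flow byte counts. The real models
# behind Hypershield are not public; this is a stand-in for the concept.
from statistics import mean, stdev

baseline = [1200, 1310, 1180, 1250, 1290, 1230]  # learned steady-state samples
mu, sigma = mean(baseline), stdev(baseline)

def is_anomalous(observation: float, threshold: float = 3.0) -> bool:
    """Flag observations far outside the learned envelope."""
    return abs(observation - mu) > threshold * sigma

print(is_anomalous(1275))  # False: consistent with the baseline
print(is_anomalous(9800))  # True: flag for investigation
```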

Autonomous segmentation


Network segmentation has long been mandated in enterprise networks. Yet, even after decades of investment, many networks remain flat or under-segmented. Cisco Hypershield provides an elegant solution to these problems by combining the primitives mentioned above. The result is a network autonomously segmented under the security administrator’s supervision.

The autonomous segmentation journey proceeds as follows:

  • The security administrator begins with top-level business requirements (such as isolating the production environment from the development environment) to deploy basic guardrail policies.
  • After initial deployment, Cisco Hypershield collects, aggregates, and visualizes network traffic information while running in an “Allow by Default” mode of operation.
  • Once there is sufficient confidence in the functions of the application, we move to an “Allow but Alert by Default” mode and insert the known trusted behaviors of the application as Allow rules above this default. The administrator continues to monitor the network traffic information collected by Cisco Hypershield. The monitoring leads to increased familiarity with traffic patterns and the creation of additional common-sense security policies at the administrator’s initiative.
  • Even as the guardrail and common-sense policies are deployed, Cisco Hypershield continues learning the traffic patterns between workloads. As the learning matures, Hypershield makes better and better policy recommendations to the administrator.
  • This phased approach allows the administrator to build confidence in the recommendations over time. At the outset, the policies are deployed only to the shadow dataplane. Cisco Hypershield provides performance data on the new policies on the secondary and existing policies on the primary dataplane. If the behavior of the new policies is satisfactory, the administrator moves them in alert-only mode to the primary dataplane. The policies aren’t blocking anything yet, but the administrator can get familiar with the types of flows that would be blocked if they were in blocking mode. Finally, with conviction in the new policies, the administrator turns on blocking mode, progressing towards the enterprise’s segmentation goal. (A small sketch of this promotion lifecycle follows the list.)
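
Here is that promotion lifecycle as a hypothetical state machine (invented stage names, for illustration only):

```python
# Hypothetical sketch of the promotion lifecycle described above: policies
# advance one stage at a time, only when the administrator is satisfied.
PROMOTION_ORDER = ["shadow", "alert-only", "blocking"]

def promote(stage: str) -> str:
    """Advance a policy one step, stopping at full blocking mode."""
    i = PROMOTION_ORDER.index(stage)
    return PROMOTION_ORDER[min(i + 1, len(PROMOTION_ORDER) - 1)]

stage = "shadow"
stage = promote(stage)   # -> "alert-only": visible, not yet enforcing
stage = promote(stage)   # -> "blocking": fully enforced
print(stage)
```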

The administrator’s faith in the security fabric — Cisco Hypershield — deepens after a few successful runs through the segmentation process. Now, the administrator can let the fabric do most of the work, from learning to monitoring to recommendations to deployment. Should there be an adverse business impact, the administrator knows that rollback to a previous set of policies can be accomplished easily via the dual dataplane.

Distributed exploit protection


Patching known vulnerabilities remains an intractable problem given the complex web of events — patch availability, patch compatibility, maintenance windows, testing cycles, and the like — that must transpire to remove the vulnerability. At the same time, new vulnerabilities continue to be discovered at a frenzied pace, and attackers continue to shrink the time between the public release of new vulnerability information and the first exploit. The result is that the attacker’s options towards a successful exploit increase with time.

Cisco Hypershield provides a neat solution to the problem of vulnerability patching. In addition to its built-in vulnerability management capabilities, Hypershield will integrate with Cisco’s and third-party commercial vulnerability management tools. When information on a new vulnerability becomes available, the vulnerability management capability and Hypershield coordinate to check for the vulnerability’s presence in the enterprise’s network.

If an application with a vulnerable workload is found, Cisco Hypershield can protect it from exploits. Cisco Hypershield already has visibility into the affected workload’s interaction with the operating system and the network. At the security administrator’s prompt, Hypershield suggests compensating controls. The controls are a combination of network security policies and operating system restrictions and derive from the learned steady-state behavior of the workload preceding the vulnerability disclosure.

The administrator installs both types of controls in alert-only mode. After a period of testing to build confidence in the controls, the operating system controls are moved to blocking mode. The network controls follow the same trajectory as those in autonomous segmentation. They are first installed on the shadow dataplane, then on the primary dataplane in alert-only mode, and finally converted to blocking mode. At that point, the vulnerable workload is protected from exploits.

During the process described above, the application and the workload continue functioning, and there is no downtime. Of course, the vulnerable workload should eventually be patched if possible. The security fabric enabled by Cisco Hypershield just happens to provide administrators with a robust yet precise tool to fend off exploits, giving the security team time to research and fix the root cause.
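
As a hedged illustration of the steady-state idea (hypothetical data and names, not Hypershield code), compensating controls amount to treating everything observed before the disclosure as an allowlist and alerting on anything outside it:

```python
# Hedged illustration: derive compensating controls by treating the workload's
# learned steady state as an allowlist. Data and names are hypothetical.
observed_steady_state = {
    "net": {("10.0.1.5", 5432)},              # only peer:port seen pre-disclosure
    "files": {"/var/app/data", "/tmp/app"},   # only paths written pre-disclosure
}

def control_for(event_type: str, value) -> str:
    """Alert on anything outside the fingerprint (promoted to block later)."""
    return "allow" if value in observed_steady_state[event_type] else "alert"

print(control_for("net", ("10.0.1.5", 5432)))  # allow: matches steady state
print(control_for("files", "/etc/passwd"))     # alert: outside the fingerprint
```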

Conclusion

In both the examples discussed above, we see Cisco Hypershield function as an effective and efficient security fabric. The innovation powering this fabric is underscored by the several patents pending at its launch.

In the case of autonomous segmentation, Hypershield turns flat and under-segmented networks into properly segmented ones. As Hypershield learns more about traffic patterns and security administrators become comfortable with its operations, the segments become tighter, posing more significant hurdles for would-be attackers.

In the case of distributed exploit protection, Hypershield automatically finds and recommends compensating controls. It also provides a smooth and low-risk path to deploying these controls. With the compensating controls in place, the attacker’s window of opportunity between the vulnerability’s disclosure and the software patching effort disappears.

Source: cisco.com

Thursday, 18 April 2024

The Journey: Quantum’s Yellow Brick Road


The world of computing is undergoing a revolution with two powerful forces converging: Quantum Computing (QC) and Generative Artificial Intelligence (GenAI). While GenAI is generating excitement, it’s still finding its footing in real-world applications. Meanwhile, QC is rapidly maturing, offering solutions to complex problems in fields like drug discovery and material science.

This journey, however, isn’t without its challenges. Just like Dorothy and her companions in the Wizard of Oz, we face obstacles along the yellow brick road. This article aims to shed light on these challenges and illuminate a path forward.

From Bits to Qubits: A New Kind of Switch


Traditional computers rely on bits, simple switches that are either on (1) or off (0). Quantum computers, on the other hand, utilize qubits. These special switches can be 1, 0, or both at the same time (superposition). This unique property allows them to tackle problems that are impossible or incredibly difficult for traditional computers. Imagine simulating complex molecules for drug discovery or navigating intricate delivery routes – these are just a few examples of what QC excels at.
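
As a toy illustration in code, an idealized qubit can be represented by two amplitudes, and measurement picks an outcome probabilistically:

```python
# Idealized toy model of a qubit, for illustration only.
import random

alpha, beta = 2 ** -0.5, 2 ** -0.5                   # equal superposition of |0> and |1>
assert abs(abs(alpha)**2 + abs(beta)**2 - 1) < 1e-9  # amplitudes are normalized

p0 = abs(alpha) ** 2                            # Born rule: probability of reading 0
outcome = 0 if random.random() < p0 else 1      # measurement collapses the state
print(f"measured {outcome} (P(0) = {p0:.2f})")
```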

The Power and Peril of Quantum Supremacy


With great power comes great responsibility and potential danger. In 1994, Peter Shor developed an algorithm that, run on a sufficiently powerful quantum computer, could break widely used public-key cryptography like RSA, the security system protecting our data. The method leverages the unique properties of qubits, namely superposition, entanglement, and interference, to crack encryption codes. While the exact timeframe is uncertain (estimates range from 3 to 10 years), some experts believe a powerful enough quantum computer could eventually compromise this system.
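
The arithmetic at the core of the threat fits in a few lines. The loop below finds the order r of a modulo N by slow classical search, which is exactly the step Shor’s algorithm speeds up exponentially; once r is known, the factors of a toy modulus follow from two gcds:

```python
# Why factoring breaks RSA, at toy scale. The slow loop below finds the order r
# of a modulo N classically; Shor's algorithm does that step exponentially
# faster on a quantum computer. Once r is known, the factors follow from gcds.
from math import gcd

N, a = 15, 7                      # toy RSA-style modulus and a base coprime to N
r = 1
while pow(a, r, N) != 1:          # order finding: the quantum-accelerated step
    r += 1

assert r % 2 == 0                 # for this toy choice of a, r is even
p = gcd(pow(a, r // 2) - 1, N)
q = gcd(pow(a, r // 2) + 1, N)
print(r, p, q)                    # -> 4 3 5: N = 15 factored, private key recoverable
```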

This vulnerability highlights the “Steal Now, Decrypt Later” (SNDL) strategy employed by some nation-states. They can potentially intercept and store encrypted data now, decrypting it later with a powerful quantum computer. SNDL resembles a man-in-the-middle attack, in which adversaries secretly intercept communication flowing between two parties, except that instead of altering the data, they store it for later decryption.

The Intersection of GenAI and Quantum: A Security Challenge


The security concerns extend to GenAI, as well. GenAI models are trained on massive datasets, often containing sensitive information like code, images, or medical records. Currently, this data is secured with RSA-2048 encryption, which could be vulnerable to future quantum computers.

The Yellow Brick Road to Secure Innovation


Imagine a world where GenAI accelerates drug discovery by rapidly simulating millions of potential molecules and interactions. This could revolutionize healthcare, leading to faster cures for life-threatening illnesses. However, the sensitive nature of this data requires the highest level of security. GenAI is our powerful ally, churning out potential drug candidates at an unprecedented rate, yet researchers cannot share this critical data with colleagues or partners without risking intellectual property theft while it is in transit. Enter a system that combines the power of GenAI with Post-Quantum Cryptography (PQC), encryption that is expected to withstand quantum attacks. This “quantum-resistant” approach would allow researchers to collaborate globally, accelerating the path to groundbreaking discoveries.

Benefits

  • Faster Drug Discovery: GenAI acts as a powerful tool, rapidly analyzing vast chemical landscapes. It identifies potential drug candidates and minimizes potential side effects with unprecedented speed, leading to faster development of treatments.
  • Enhanced Collaboration: PQC encryption allows researchers to securely share sensitive data. This fosters global collaboration, accelerating innovation and bringing us closer to achieving medical breakthroughs.
  • Future-Proof Security: Dynamic encryption keys and PQC algorithms ensure the protection of valuable intellectual property from cyberattacks, even from future threats posed by quantum computers and advanced AI.
  • Foundational Cryptography: GenAI and Machine Learning (ML) will become the foundation of secure and adaptable communication systems, giving businesses and governments more control over their cryptography.
  • Zero-Trust Framework: The transition to the post-quantum world is creating a secure, adaptable, and identity-based communication network. This foundation paves the way for a more secure digital landscape.

Challenges

  • GenAI Maturity: While promising, GenAI models are still under development and can generate inaccurate or misleading results. Refining these models requires ongoing research and development to ensure accurate and reliable output.
  • PQC Integration: Integrating PQC algorithms into existing systems can be complex and requires careful planning and testing. This process demands expertise and a strategic approach. NIST is delivering standardized post-quantum algorithms (expected by summer 2024).
  • Standardization: As PQC technology is still evolving, standardization of algorithms and protocols is crucial for seamless adoption. This would ensure that everyone is using compatible systems.
  • Next-Generation Attacks: Earlier cryptography standards did not have to contend with AI-powered attacks. Defending against this new class of attack will necessitate the use of AI in encryption and key management, creating an evolving landscape.
  • Orchestration: Cryptography is embedded in almost every electronic device. Managing this requires an orchestration platform that can efficiently manage, monitor, and update encryption across all endpoints.

The Journey Continues: Embrace the Opportunities

The path forward isn’t paved with yellow bricks, but with lines of code, cutting-edge algorithms, and unwavering collaboration. While the challenges may seem daunting, the potential rewards are truly transformative. Here’s how we can embrace the opportunities:

  • Investing in the Future: Continued research and development are crucial. Funding for GenAI development and PQC integration is essential to ensure the accuracy and efficiency of these technologies.
  • Building a Collaborative Ecosystem: Fostering collaboration between researchers, developers, and policymakers is vital. Open-source platforms and knowledge-sharing initiatives will accelerate progress and innovation.
  • Equipping the Workforce: Education and training programs are necessary to equip the workforce with the skills needed to harness the power of GenAI and PQC. This will ensure a smooth transition and maximize the potential of these technologies.
  • A Proactive Approach to Security: Implementing PQC algorithms before quantum supremacy arrives is vital. A proactive approach minimizes the risk of the “Steal Now, Decrypt Later” strategy and safeguards sensitive data.

The convergence of GenAI and QC is not just a technological revolution, it’s a human one. It’s about harnessing our collective ingenuity to solve some of humanity’s most pressing challenges. By embracing the journey, with all its complexities and possibilities, we can pave the way for a golden future that is healthier, more secure, and brimming with innovation.

Source: cisco.com

Saturday, 13 April 2024

Maximize Managed Services: Cisco ThousandEyes Drives MSPs Towards Outstanding Client Experiences


IT-related outages and performance issues can inflict significant financial and operational harm on businesses, especially in critical sectors such as finance, healthcare, and e-commerce. These IT disruptions not only impact productivity, potentially costing enterprises billions annually, but also hurt end-users through poor experiences like delays and inaccessibility. The health of business applications is crucial, as their availability and performance directly influence stakeholders, operational continuity, and profitability. Resolving these issues is often complex and labor-intensive, and any system-related downtime or lapse in application performance can ultimately lead to long-term setbacks for an organization.

Typical Troubleshooting Scenario of IT Infrastructure Without ThousandEyes


As soon as an end-user reports an IT-related issue, whether it’s a service outage or slow application performance, the formidable challenge of locating and fixing the issue begins. It’s often like searching for a “needle in a haystack.” The troubleshooting journey to uncover the underlying cause of IT infrastructure issues typically unfolds with the following challenges:

  • Prolonged Troubleshooting and Finger Pointing – Organizations frequently encounter difficulties in addressing IT outages and performance problems due to limited network visibility and siloed IT teams. This situation fosters a blame culture and hinders collaboration, as teams focus more on debating the cause of issues rather than fixing them, leading to inefficient use of time and resources in Incident Response efforts.
  • Limited End-to-End Visibility – Infrastructure and operations professionals struggle to gain a complete and clear understanding of the end-user experience due to the “black box” nature of the Internet and inadequate traditional monitoring tools. These tools often fail to provide detailed performance data across devices, applications, and the Internet, complicating IT teams’ efforts to pinpoint root causes of issues.
  • Inefficient Resource Allocation – Addressing outages and performance issues consumes significant time and diverts IT resources from strategic initiatives. In-house monitoring systems frequently produce false alerts, misallocating resources and impeding IT’s capacity to effectively maintain and optimize infrastructure performance.

ThousandEyes tackles these prevalent challenges by presenting an integrated solution with end-to-end visibility into both network infrastructure and application performance. This level of insight and actionable intelligence enables IT teams to collaborate more effectively, pinpoint and rectify issues faster, and optimize the deployment of their IT operations resources. These abilities set ThousandEyes apart from other platforms by greatly enhancing the visibility and understanding of the components within an environment.

Enhancing Managed Network Services for Client Success


ThousandEyes, a Digital Experience Assurance (DXA) platform, equips organizations with comprehensive insights into user experiences and application performance across the Internet, cloud services, and internal IT infrastructure, thereby streamlining the optimization of essential network-dependent services and applications. This platform can significantly expedite problem resolution and reduce the resources required to address common infrastructure problems by offering the following benefits:

  • Visibility – ThousandEyes provides MSPs with a holistic view that encompasses their clients’ internal networks as well as external networks, cloud services, and SaaS platforms. This end-to-end visibility allows MSPs to oversee and address issues throughout the entire digital supply chain, from core infrastructure to the application level. With this extensive coverage, MSPs are equipped to quickly locate the source of any issue across the network spectrum, thereby shortening the time required to identify problems.
  • Troubleshooting – ThousandEyes streamlines the troubleshooting process by swiftly pinpointing the root causes of infrastructure related issues, whether they occur within the enterprise network or are due to external factors like ISPs, cloud providers, or SaaS applications. The platform fosters collaboration among IT teams by providing a unified data set, which helps eliminate finger-pointing and accelerates problem-solving, thereby significantly reducing the time required to resolve issues.
  • Digital Experience Assurance – ThousandEyes conducts comprehensive performance monitoring by tracking key network metrics like latency, packet loss, and jitter, along with application-level metrics that shed light on the user experience and the performance of web and API transactions. Additionally, the platform enhances DXA by simulating user transactions and scrutinizing the data pathways to end-users, ensuring that both customers and employees have effective access to business applications. (A generic sketch of these metric computations follows this list.)
  • Alerting and Reporting – ThousandEyes enhances proactive IT management by providing intelligent alerting and comprehensive reporting. Users are notified of performance degradation in real time and can access historical data and trend analysis for informed decision-making. This proactive alerting capability allows IT teams to identify and address anomalies early, potentially reducing the frequency and severity of incidents and thereby minimizing their impact.
  • Optimization – Organizations can optimize network performance and enhance user experience by leveraging insights from both historical and real-time data on application and service delivery paths. This comprehensive understanding enables informed decision-making that not only addresses current performance issues but also helps prevent future ones, ultimately conserving time and resources.
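
The metrics named above are standard computations. Here is a generic sketch from raw round-trip-time samples (ordinary math, not ThousandEyes’ implementation):

```python
# Generic computation of latency, packet loss, and jitter from RTT samples.
# Standard math only; not ThousandEyes' implementation.
from statistics import mean

sent = 10
rtts_ms = [21.0, 23.5, 22.1, None, 24.0, 21.8, None, 22.6, 23.0, 22.2]  # None = lost
received = [r for r in rtts_ms if r is not None]

latency_ms = mean(received)
loss_pct = 100 * (sent - len(received)) / sent
# One common jitter definition: mean absolute delta between consecutive RTTs.
jitter_ms = mean(abs(a - b) for a, b in zip(received, received[1:]))

print(f"latency={latency_ms:.1f} ms, loss={loss_pct:.0f}%, jitter={jitter_ms:.2f} ms")
```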

ThousandEyes enhances organizational capability to deliver high-quality digital services through valuable insights and analytics, which strengthen network management capabilities and facilitate more effective decision-making and issue resolution. Although the extent of benefits or efficiency gains varies across different organizations, users commonly report marked improvements after implementing ThousandEyes, with some noting up to a 75% faster resolution of network problems and fewer outages and performance issues. Customers have reported substantial reductions in troubleshooting times, with tasks that previously took hours or days being cut down to mere minutes, thanks to ThousandEyes.

ThousandEyes Enhances the Service Offerings of MSPs, Greatly Improving the Overall Experience


Managed Service Providers can enhance their clients’ network management and optimization by leveraging the following benefits of Cisco ThousandEyes:

  • Improved Service Level Agreements (SLAs): With detailed insights into network performance and the ability to quickly identify and resolve issues, MSPs can better adhere to, or further enhance their SLAs. This helps in maintaining a high level of service and can distinguish the MSP’s offerings in a competitive market.
  • Proactive Problem Resolution: ThousandEyes’ alerting system can notify MSPs of potential issues before they affect end-users. This proactive approach minimizes downtime and can help MSPs address problems before clients are even aware of them.
  • Enhanced Customer Experience: By ensuring that applications and services are running smoothly, MSPs can contribute to a better end-user experience for their clients’ customers. This is particularly important for customer-facing applications where performance directly impacts revenue and brand reputation.
  • Efficient Troubleshooting: With the comprehensive network telemetry from ThousandEyes, MSPs can swiftly identify the root cause of an issue, whether it stems from the client’s internal network, an ISP, or various cloud-based services. This capability decreases the average time required to resolve issues.
  • Data-Driven Decisions: The data collected by ThousandEyes can inform strategic decisions about network design, capacity planning, and performance optimization. MSPs can use this information to advise clients on how to improve their IT infrastructures.
  • Reporting and Communication: MSPs can use the detailed reports and visualizations provided by ThousandEyes to effectively communicate with clients about network health, ongoing issues, and resolved problems, enhancing transparency and trust.

ThousandEyes: Your Shortcut to Advanced Network Visibility


ThousandEyes simplifies deployment with its cloud-based SaaS model. It integrates smoothly into diverse environments using versatile agents, including the specialized Enterprise Agents: robust, dedicated monitoring nodes that provide deeper network insights. These Enterprise Agents can be deployed on-premises in data centers, within private clouds, or across public cloud platforms like AWS, Google Cloud, and Azure to monitor network and application performance. Additionally, ThousandEyes provides a browser extension designed for monitoring user experience. Its compatibility with Cisco and Meraki infrastructure streamlines integration, facilitating easy embedding into current deployments. The straightforward web management interface simplifies configuration, and the platform’s API accessibility supports automation, making ThousandEyes a highly adaptable choice for comprehensive network visibility.

MSPs Can Now Leverage Consumption-Based Licensing for ThousandEyes


In addition to traditional Enterprise Agreement licensing vehicles, ThousandEyes is now available through the Cisco Managed Services Licensing Agreement (MSLA), a program that was designed to meet the specific requirements of MSPs. This consumption-based licensing model is flexible and scalable, fitting the service-based business models of MSPs by allowing them to pay based on consumption. The MSLA program allows MSPs to adjust their ThousandEyes usage without complex contract changes, facilitating quick adaptation to evolving market demands.

MSPs and Their Clients Can Garner Significant Return on Investment


The integration of ThousandEyes by MSPs leads to a worthwhile ROI and an enhancement of their managed service offerings, providing benefits for both the providers and their clients. MSPs experience a marked improvement in their ability to offer advanced network visibility, comprehensive performance monitoring, and proactive issue resolution. These capabilities result in elevated service quality and increased customer satisfaction. End users reap the rewards of more reliable and efficient network services, experiencing fewer disruptions and thus less impact on their business operations. Moreover, the operational efficiencies introduced by ThousandEyes help reduce costs and free up valuable resources, enabling MSPs to focus more on business expansion and continued service improvement. In a time when digital transformation and dependency on Internet and cloud services are growing, having complete network visibility is essential. ThousandEyes is critical in this landscape, acting as a GPS for the digital world, offering insights and guidance for effective and efficient navigation.

Source: cisco.com

Thursday, 11 April 2024

Quantum Security and Networking are Emerging as Lifelines in Our Quantum-powered Future


A metamorphosis continues to take shape with the rise of Post-Quantum Cryptography, Quantum Key Distribution, and the brave new world of Quantum Networking.

In the ever-evolving landscape of technology, quantum computing stands out as a beacon of both promise and challenge. As we delve into the world of quantum networking and security, we find ourselves at the intersection of groundbreaking innovation and urgent necessity.

Cisco believes that quantum networking is not just an intriguing concept. It drives our research and investment strategy around quantum computing. We see it as a critical path forward because it holds the key to horizontally scaling systems, including quantum computing systems. Imagine a future where quantum computers collaborate seamlessly across vast distances, solving complex problems that were previously insurmountable.

However, before we can realize the promise of quantum networking, we need to address the elephant in the room – security. When quantum computers become reality, our classical cryptographic methods will face an existential threat. These powerful machines will potentially break today’s encryption algorithms in seconds. Our digital fortresses are vulnerable.

This opens the question of what will happen when quantum computers enter the scene. The issue lies in key exchanges. In classical systems, we rely on public key infrastructure (PKI) to securely exchange keys. This has served us well, ensuring confidentiality and integrity. But quantum computers, with their uncanny ability to factor large numbers efficiently, disrupt this equilibrium. Suddenly, our once-secure secrets hang in the balance.

Getting to the heart of the matter, imagine a scenario that persists even in our current era – the ominous concept of “store now, decrypt later”. Picture an adversary intercepting encrypted data today. Biding their time, they await the moment when quantum supremacy becomes reality.

When that day dawns, they unleash their quantum beast upon the stored information. Our sensitive communications, financial transactions, and personal data will suddenly be laid bare, retroactively vulnerable to the quantum onslaught.

Post-Quantum Cryptography is gaining momentum


Enter Post-Quantum Cryptography (PQC). Recognizing the urgency of the coming quantum moment, the National Institute of Standards and Technology (NIST) has been evaluating PQC proposals and is expected to release its final standards for quantum-resistant cryptographic algorithms later this year. These algorithms are designed to withstand quantum attacks and, while not perfect, they are intended to fill the gap until provably quantum-safe solutions such as QKD mature.

Apple’s iMessage is a compelling proof point. Earlier this year, Apple made a decisive move by announcing its adoption of PQC algorithms for end-to-end encryption. This strategic shift underscores the industry’s recognition of the looming quantum threat, especially around “store now, decrypt later” attacks, and the need to swiftly respond.

In the year ahead, as we move closer to the post-quantum world, PQC will continue to gain momentum as a data security solution. Cisco’s Liz Centoni shared insight in her tech predictions for 2024, highlighting the accelerating adoption of PQC as a software-based approach that works with conventional systems to protect data from future quantum attacks.

PQC will be used by browsers, operating systems, and libraries, and innovators will experiment with integrating it into protocols such as SSL/TLS 1.3 that today rely on classical cryptography. PQC will likely find its way into enterprises of every size and sector as they seek to safeguard their sensitive data from the threats posed by quantum computers.

Quantum Key Distribution is the holy grail


Beyond PQC lies the holy grail of quantum cryptography, which is Quantum Key Distribution (QKD). Last year, we accurately predicted that QKD would become more widely used, particularly within cloud computing, data centers, autonomous vehicles, and consumer devices like smartphones.

Unlike classical key exchange methods, QKD capitalizes on the no-cloning property of quantum states: information encoded on one qubit cannot be copied or duplicated to another. Quantum states are also fragile, disturbed by any action such as measuring the state. In practical terms, that means an eavesdropper can always be discovered, because a “read” causes the photon state to change.

Consider a scenario where two parties, Bank A and Bank B, want to communicate securely. They use QKD, where Bank A sends quantum states (like polarized photons) to Bank B which measures them without knowing the original state.

The measurements are then used to create a shared key: the two parties reconcile a randomly selected subset of the measurement bases over an authenticated and encrypted classical channel and keep the bits where the bases matched. Since the eavesdropper does not know which random subset will be compared, any attempt to measure the transmitted information will be detected as a disturbance in the quantum states.
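
A tiny, idealized simulation of this sift step (no noise, no eavesdropper; illustrative Python only):

```python
# Tiny, idealized BB84-style sift (no noise, no eavesdropper): bits survive
# only where preparation and measurement bases happen to match.
import random

n = 16
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]   # preparation bases
bob_bases   = [random.choice("+x") for _ in range(n)]   # measurement bases

# Bob's result matches Alice's bit when bases agree; otherwise it is random.
bob_results = [b if ab == bb else random.randint(0, 1)
               for b, ab, bb in zip(alice_bits, alice_bases, bob_bases)]

# Sifting over the authenticated classical channel: keep matching-basis positions.
key = [b for b, ab, bb in zip(bob_results, alice_bases, bob_bases) if ab == bb]
print(key)  # shared key material; a disturbed sample here reveals an eavesdropper
```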

The beauty lies in the provably secure nature of QKD — quantum mechanics forbids perfect cloning, rendering interception futile. In this dance of particles and principles, QKD stands as a lighthouse of security, promising a future where quantum and classical work in tandem to safeguard us.

For instance, integrating QKD in 5G communication infrastructure is becoming increasingly important. With QKD, organizations will be able to better protect the privacy and authenticity of data transmitted over low-latency, high-speed networks, explicitly addressing the security demands of the 5G era.

Efforts to make QKD solutions more accessible and interoperable are accelerating in response to the demand for even more secure data transfer. This is leading to commercialization and standardization initiatives that are expected to make QKD solutions more user friendly and cost effective, ultimately driving widespread adoption across new applications and sectors.

As strides continue toward achieving quantum-secure messaging, among the first organizations to more broadly implement PQC will likely be those responsible for critical infrastructure and essential government suppliers. Large enterprises and other organizations will follow, also implementing these algorithms within the next few years.

Quantum networking on the horizon


Depending on the desired level of security and performance required, Centoni explained that QKD can be used as either an alternative or a complement to PQC and, in the future, will also leverage quantum networking. However, she acknowledges that it’s early days for quantum networks.

So far, researchers have not successfully achieved sustained quantum networking on a large scale, but major discoveries and advancements are happening. Companies like Cisco, alongside cutting-edge leaders across various industries, are pouring billions into unlocking the awesome potential of quantum networks.

“Quantum networking will see significant new research and investment by government and financial services,” said Centoni. She predicts that this will also include sectors with high demand for data security and the kinds of workloads that perform well with quantum computers.

Quantum networking relies on the teleportation principles of quantum mechanics to transmit information between two or more quantum computers. This takes place by manipulating qubits so that they “entangle” with one another, which, together with a classical communication channel, enables the transfer of quantum information across vast distances without shuttling the qubits themselves between the computers.

In the not-so-distant future, perhaps 4 to 5 years or more, quantum networking will inexorably emerge as a potent force. With quantum networking, quantum computers will be able to collaborate and exchange information to tackle intricate problems that no single quantum computer could solve on its own.

By leveraging the quantum principles of teleportation and no-cloning, quantum networking protocols will facilitate fast, reliable, and perhaps even unconditionally secure information exchange. Potential applications of quantum networking go far beyond cryptography, as well, to turbocharging drug discovery, artificial intelligence (AI), and materials science.

Looking to the post-quantum future


Today, quantum computers are at a stage very similar to where mainframes were in the 1960s. Back then, very few organizations could afford those machines, which could fill an entire room. While QKD is now in use as a means of provably secure communication, quantum networking remains mainly theoretical.

QKD is the next generation of quantum cryptography, a step beyond PQC, which is not provably secure because its cryptographic algorithms lack proofs of mathematical hardness. Quantum networking should be thought of first as a substrate needed for QKD, and then as a way of building out larger and larger compute islands – data centers and LANs, then WANs – analogous to how classical computers were connected to build distributed computing.

The big challenge now, like the past, is to create quantum computers that can be both reliably and affordably scaled up and put into the hands of corporate, government, and research entities. As such, distributed quantum computing will be the primary driver for quantum networks. We may even see the advent of the quantum cloud and the quantum internet – the metamorphic network of the future.

Quantum networking and security are not mere buzzwords. They are our lifelines in a quantum-powered future. As we race against time, we must embrace quantum technologies while fortifying our defenses. The ultimate payoff is a network that’s more secure than anything we’ve known before — a network where quantum and classical dance harmoniously, protecting our digital existence.

Source: cisco.com

Tuesday, 9 April 2024

Mastering Skills with Play: The Fusion of Gaming and Learning in Black Belt Gamification


Welcome to the immersive world of gamified learning, where the addictive pull of mobile gaming and the interactive rewards system of apps like Duolingo are not just for play—they’re the driving force behind our approach to Cisco Black Belt Academy gamification. We strive to transform enablement by harnessing the potent allure of game mechanics, making the learning process not just more engaging but also more impactful. Discover how we integrate the principles of game design to elevate and energize conventional enablement methodologies.

Black Belt gamified enablement incorporates game elements like points, badges, challenges, customizable avatars and themed stories into the learning process to encourage user interaction and competition via leaderboards. Our objective is to make acquiring new knowledge more engaging and interactive, fostering a sense of accomplishment and healthy competition among learners.

Classic versus Contemporary: A Comparative Outlook


During our research into gamified learning, we found that traditional training methods often struggle to keep learners engaged, leading to decreased retention and motivation.

The gamification initiative began as a way to further improve and innovate Black Belt Academy enablement. In today’s fast-paced world, keeping our learners engaged and up to speed is crucial. Gamified enablement is a dynamic approach that addresses this by tapping into our natural desire for competition, recognition, and accomplishment.

Our objective has been to use gamification to drive Black Belt participation on a broader level with our partners, while deepening their knowledge and making the experience more fun and hands-on for learners.

Innovation and Opportunities


Partners with 30% of their employees engaged in Black Belt grew 10 basis points faster, and partners with above-average participation grew 3% faster. Adding layers of gamification gives us the opportunity to increase enablement engagement, driving more users, improved completion rates, higher continuation rates (S2/S3), and higher user loyalty.

Cisco Black Belt Academy has planned and implemented gamification strategies in three categories:


1. Single Tournaments are where partner individuals register to compete against others in a single, one-off, lab-like (short-duration) environment, where the individual who earns the most points wins.

2. Journey Competitions are where partner individuals register to compete against other individuals over a long period of time, with the end goal of reaching the top of the tournament table.

3. Races are where partner individuals register to race against others by completing trainings the quickest. Only a certain number of individuals are rewarded in the end.

Our innovative Escape Room has been met with widespread acclaim and attention. In this space-themed adventure, participants are cast as crew members of a spaceship that has crash-landed on an alien planet. To escape, they must leverage their Cisco Security expertise to locate and gather essential repair elements (crystals) needed to restore their spacecraft.


Our team at Cisco Black Belt Academy is committed to enhancing the partner experience by infusing our platform with engaging value communications. We are focused on integrating gamification elements, creating captivating content that keeps learners engaged throughout their gaming experience, and providing meaningful rewards and incentives that align with their in-game achievements.


Source: cisco.com

Saturday, 6 April 2024

Meet the new Cisco Catalyst 1200 and 1300 Series Switches for SMBs

In today’s hyperconnected world where seamless customer experience is the key to success, your network can often become the differentiator that helps you succeed. This is true not just for large enterprises, but also for small and medium businesses.

Through Cisco’s small and medium business portfolio, we have been bringing the latest technology to our SMB customers and helping them create secure, reliable networks that can be effortlessly set up, monitored and managed, all at prices that fit small business budgets.

The new Cisco Catalyst 1200 and 1300 Series Switches are the latest additions to our small and medium business portfolio of access switches. Built on a Linux-based OS, they combine powerful network performance, simplified management, and reliability with a comprehensive suite of network features that enable the digital transformation of growing businesses and branch offices.

Cisco Catalyst 1200 Series Switches

Cisco Catalyst 1300 Series Switches

These switches have been designed to help customers focus on growing their business rather than spending their time managing IT, by offering the following benefits:

Simplicity – Simple management with web-based configuration, the Cisco Business Mobile App and Cisco Business Dashboard. Auto-discovery enables easy integration with Cisco collaboration and Wi-Fi products.

Flexibility – Ultimate business flexibility with Gigabit, Multigigabit and 10G connectivity, Gigabit or 10G uplinks, and PoE+ support up to 740W.

Security – Advanced security protocols providing a solid security foundation, ensuring privacy and business continuity.

Cisco Catalyst 1200 Series Switches


The Cisco Catalyst 1200 Series Switches are purpose-built for growing businesses, combining robust performance and reliability with ease of setup, monitoring and management. These switches provide comprehensive security capabilities, Layer 3 static routing features, and multiple PoE+ options to choose from.

Cisco Catalyst 1300 Series Switches


The Cisco Catalyst 1300 Series Switches are fixed, managed, enterprise-class Layer 3 switches designed for small and medium-sized businesses and branch offices. They offer advanced security features, front-panel stacking capabilities, Gigabit, Multigigabit and 10 Gigabit Ethernet options, and Layer 3 RIP routing, with a PoE+ budget of up to 740W.

Which one do you need?


The following table compares the prominent features of Catalyst 1200 and 1300 series switches:

Table: Feature comparison of the Cisco Catalyst 1200 and 1300 Series switches

With the Cisco Catalyst 1200 and 1300 Series switches, there are no licenses to purchase, and software updates are available at no additional cost. The switches offer a limited lifetime warranty with one-year free phone support.

Customers who wish to deploy themselves can purchase the new Cisco Catalyst 1200 and 1300 series switches through eComm partners such as Amazon.com or other e-tailers. Cisco partners can contact their distributor of choice.

Source: cisco.com

Thursday, 4 April 2024

Balancing agility and predictability to achieve major engineering breakthroughs


Recently, I shared the progress we’re making toward building the Cisco Security Cloud, an open, integrated security platform capable of tackling the rigors of securing highly distributed, multicloud environments. That was an honest assessment of what we have achieved, celebrating the significant accomplishments that move the needle forward on our vision. Here, I want to share how we approach research, development, and execution, and the core principles that drive innovation at scale.

In any large organization with a diverse enterprise-grade portfolio varying in adoption levels, solution longevity, and product category maturity, you will find the need to continuously look for ways to drive efficiency and excellence. We are fortunate to have loyal customers who trust that, with Cisco, they can both secure their organization and manage its risk. Our focus has been to meet customers where they are, and that involves delivering security solutions in various form factors and platforms for a hybrid, multi-cloud world.

To do this, we are evolving our engineering organization to deliver on ambitious goals through higher levels of agility. Agility requires the courage to break down organizational silos and embrace the notion of failing fast and learning even faster from those failures. But engineering organizations like ours also have our “day jobs” with the reality that constantly changing customer and business environments can wreak havoc on engineering roadmaps. This leads to the inevitable difficult decision on whether to focus on the backlog of customer-requested features, versus delivering new, innovative features that move the industry forward.

Another way to say this is that as much as engineering organizations strive for agility, we have to be cognizant of how much our customers crave predictability in their security operations and in feature delivery from vendors like Cisco. Let’s look at this through the lens of a customer-impacting factor that can make security operations less predictable: security incidents.

(Figure: statistics on security incidents and their business impact)

These numbers are meaningful because cybersecurity is a critical part of any business and of business resilience plans, which can involve public disclosures. Cybersecurity also sits in the line of critical operations functions and can cause major disruptions for the entire business when it fails. That is the high-stakes nature of the balancing act in front of us: one end of the see-saw is our desire to achieve agility; the other is our responsibility to our customers to be predictable in their security operations, which are becoming ever more critical to the viability of their businesses.

A pragmatic approach to balancing agility and predictability


Leading a large engineering organization in charge of one of the broadest security product portfolios has challenged me to think about this critically. There are many ways to balance agility and predictability, but we’ve been able to distill this down to a pragmatic approach that I believe works best for us.

Careful short and long-term planning.

This is a critical step that provides the framework for building an engineering org that is both agile and predictable. It starts with iterative planning that allows for reviewing and adjusting plans based on market feedback and changing conditions. This includes meeting shorter-term commitments and regular updates to maintain customer confidence while allowing for adjustments. We also use agile retrospectives and adaptive planning to ensure forward progress and our ability to incrementally improve.

Resource allocation and ruthless prioritization play a key role. We achieve this through segmentation and portfolio management, segmenting a product portfolio into different categories based on levels of predictability and innovation. We exercise scenario planning for risk mitigation and management, developing scenarios that explore different market conditions with strategies for responding to ensure we make informed decisions in uncertain conditions. This helps us identify and mitigate risks that may impact our agility and predictability, account for potential disruptions, prioritize appropriately, and manage expectations.

Clear and consistent communication.

One of the most important aspects of this is the need for clear and consistent communication. As a leader, it is my responsibility to clearly articulate the benefits of agility and explain the steps we need to take to ensure the predictability and delivery needed for stable operations. My philosophy is that shared outcomes involve “shared code,” which results in a platform-centric development approach and an inner-source execution model that accelerate feature development and delivery velocity.

An org culture willing to adapt.

Even the best of plans will fail without capable people who can and will execute on them. For us, this involves an ongoing evolution across our large, highly distributed engineering organization to foster a culture that values both agility and predictability and is aligned with one of Cisco’s core values: accountability. A few of the ways we’ve seen success are by:
  • Encouraging cross-functional collaboration and open dialogue about the challenges and benefits of both approaches.
  • Ensuring leadership is aligned with the organization’s approach to balancing agility and predictability.
  • Creating opportunities, like Hackathons, to fail fast and learn even faster, explore the art of the possible, and to dive into technology to solve unexpected challenges.
  • Ensuring consistent messaging and support for team members.

Effective processes, not bureaucracies.

Processes often get a bad rap because they are often associated with bureaucracies that can hinder speed and progress. But processes are critical to make sure we’re executing our plans in the intended ways with the ability to measure progress and adapt as necessary. In our goal to balance agility with predictability, we have implemented some specific aspects to processes that work best for us.

  • We blend agile methodologies with more traditional project management approaches (e.g., agile for new features, waterfall for foundational infrastructure). Our processes allow us to take a “dual plane” approach to innovation with one plane focusing on predictable, stable delivery while the other explores innovative, experimental initiatives.
  • As the aphorism goes, “you can’t manage what you can’t measure”. We have implemented an outcome-focused approach toward metrics that shifts the focus from output (deliverables) to outcomes (business value). This allows us to demonstrate how agility enhances the ability to deliver value quickly and adapt to market changes, solving some of the toughest challenges for our customers.
  • We take a customer-centric approach in all things we do. This means we use customer feedback and market insights to prioritize and guide innovation efforts. This includes dedicated customer advisory boards, and programs built around the voice of our customers like NPS surveys. This helps ensure that agility is directed toward meeting customer needs and not innovating for innovation’s sake.

Our processes involve adaptive governance and continuous learning that accommodates both agility and predictability. This includes providing guidelines for making decisions in dynamic situations, continuously assessing what’s working and what’s not, and encouraging a learning mindset and adjusting strategies accordingly.

Innovating to win


Taking a customer-centric approach to all things we do, we’ll continue focusing on the breakthrough successes that showcase our ability to be both agile and predictable in meeting market demands and delivering customer outcomes. One example of this is how we, as the official cybersecurity partner of the NFL, helped secure this year’s Super Bowl, the most-watched telecast in the game’s history. We also continue our incredible work with AI and Generative AI, like the Cisco AI Assistant for Security to simplify policy, and AI-enabled security operations through innovation in both AI for security and security for AI. When we strike the balance of agility and predictability, we innovate to win.

Source: cisco.com