Tuesday, 30 April 2024

Bridging the Digital Divide with Subscriber Edge

Bridging the digital divide has been a longstanding top priority for countries globally. According to Broadband Research, in 2023 approximately five billion people (64% of the world’s population) were connected to the internet. That means roughly three billion people lack basic digital capabilities such as accessing information, sharing data, and communicating online. They also lack the access to educational, employment, and economic opportunities that digital connections could provide to improve their quality of life.

The World Bank has estimated that increasing the percentage of people with internet access to 75% would “boost the developing world’s collective GDP by $2 trillion and create 140 million new jobs.”

The good news is that the public and private sectors have been partnering to help close the digital divide. As Broadband Research states: “Factors like increased affordability of devices, improved infrastructure and innovative services drive this growth.”

Role of subscriber edge


Accessing the internet requires a subscription to a broadband service from a communications service provider (CSP), delivered over cable, fiber, DSL, fixed wireless access (FWA), satellite, or 4G/5G infrastructure and devices. The subscriber edge is the access point in a service provider network through which subscribers connect to the broadband network for high-speed connectivity, such as internet access.

Subscriber edge can be deployed with other services on the same platform by converging residential and enterprise services using multiservice nodes. Subscriber edge solutions involve managing subscriber sessions, and include functions like IP address management, policy and quality of service (QoS) enforcement, and secure access to the network through authentication and billing.
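
As a rough illustration of the session-handling responsibilities listed above, the following minimal Python sketch models a hypothetical subscriber-edge workflow: authenticating a subscriber, assigning an IP address from a pool, and attaching a QoS policy. The class, method, and profile names are invented for illustration and do not represent any particular CSP platform or Cisco product.

```python
import ipaddress

# Hypothetical QoS profiles keyed by service tier (illustrative values only).
QOS_PROFILES = {"basic": {"down_mbps": 100, "up_mbps": 20},
                "premium": {"down_mbps": 1000, "up_mbps": 200}}

class SubscriberEdge:
    """Toy model of subscriber-edge session management (not a real BNG)."""

    def __init__(self, pool_cidr: str, credentials: dict[str, str]):
        self.ip_pool = iter(ipaddress.ip_network(pool_cidr).hosts())
        self.credentials = credentials          # stand-in for a AAA/RADIUS backend
        self.sessions: dict[str, dict] = {}

    def connect(self, username: str, password: str, tier: str) -> dict:
        # 1. Authenticate the subscriber (normally done via RADIUS/Diameter).
        if self.credentials.get(username) != password:
            raise PermissionError(f"authentication failed for {username}")
        # 2. Assign an IP address from the managed pool.
        address = next(self.ip_pool)
        # 3. Attach the QoS policy for the subscriber's service tier.
        session = {"ip": str(address), "qos": QOS_PROFILES[tier]}
        self.sessions[username] = session       # session is now tracked for accounting
        return session

edge = SubscriberEdge("100.64.0.0/24", {"alice": "s3cret"})
print(edge.connect("alice", "s3cret", "premium"))
```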

Shifting application and traffic demands


Traditional approaches for offering broadband services can be revenue-impacting and can affect the quality of experience (QoE) for a broadband user (see Figure 1). For example, with the advancement of applications and evolving transport protocols—such as Quick UDP Internet Connections (QUIC) and Transmission Control Protocol/Transport Layer Security (TCPLS)—traffic patterns within broadband networks are shifting away from traditional transport protocols. These new protocols give greater control to end-user applications, which reduces the dependency on the underlying broadband network and allows for simpler QoS models.

This shift is a pivotal opportunity for CSPs to simplify and modernize their complex traditional broadband networks to address higher bandwidth demands, a growing user base, increasing video traffic, and rising costs. As a result, there is a need to re-examine the subscriber edge as part of the overall subscriber services network design and address important areas such as:

  • The subscriber anchor point in the network
  • The selection of subscriber edge devices and architecture
  • Improvements to the end-user experience and new service offerings
  • Rising network costs

Figure 1. Traditional broadband centralized architecture

Source: cisco.com

Saturday, 27 April 2024

Experience Eco-Friendly Data Center Efficiency with Cisco’s Unified Computing System (UCS)

In the highly dynamic and ever-evolving world of enterprise computing, data centers serve as the backbones of operations, driving the need for powerful, scalable, and energy-efficient server solutions. As businesses continuously strive to refine their IT ecosystems, recognizing and capitalizing on data center energy-saving attributes and design innovations is essential for fostering sustainable development and maximizing operational efficiency and effectiveness.

Cisco’s Unified Computing System (UCS) stands at the forefront of this technological landscape, offering a comprehensive portfolio of server options tailored to meet the most diverse of requirements. Each component of the UCS family, including the B-Series, C-Series, HyperFlex, and X-Series, is designed with energy efficiency in mind, delivering performance while minimizing energy use. Energy efficiency is a major consideration from the earliest planning and design phases of these products through every subsequent update.

The UCS Blade Servers and Chassis (B-Series) provide a harmonious blend of integration and dense computing power, while the UCS Rack-Mount Servers (C-Series) offer versatility and incremental scalability. These offerings are complemented by Cisco’s UCS HyperFlex Systems, the next generation of hyperconverged infrastructure that brings compute, storage, and networking into a cohesive, highly efficient platform. Furthermore, the UCS X-Series takes flexibility and efficiency to new heights with its modular, future-proof architecture.

Cisco UCS B-Series Blade Chassis and Servers

The Cisco UCS B-Series Blade Chassis and Servers offer several features and design elements that contribute to greater energy efficiency compared to traditional blade server chassis. The following components and functions of UCS contribute to this efficiency:

1. Unified Design: Cisco UCS incorporates a unified system that integrates computing, networking, storage access, and virtualization resources into a single, cohesive architecture. This integration reduces the number of physical components needed, leading to lower power consumption compared to traditional setups where these elements are usually separate and require additional power.

2. Power Management: UCS includes sophisticated power management capabilities at both the hardware and software levels. This enables dynamic power allocation based on workload demands, allowing unused resources to be powered down or put into a low-power state. Adjusting power usage to actual requirements minimizes wasted energy.

3. Efficient Cooling: The blade servers and chassis are designed to optimize airflow and cooling efficiency. This reduces the need for excessive cooling, which can be a significant contributor to energy consumption in data centers. By efficiently managing airflow and cooling, Cisco UCS helps minimize the overall energy required for server operation.

4. Higher Density: UCS Blade Series Chassis typically support higher server densities compared to traditional blade server chassis. By consolidating more computing power into a smaller physical footprint, organizations can achieve greater efficiency in terms of space utilization, power consumption, and cooling requirements.

5. Virtualization Support: Cisco UCS is designed to work seamlessly with virtualization technologies such as VMware, Microsoft Hyper-V, and others. Virtualization allows for better utilization of server resources by running multiple virtual machines (VMs) on a single physical server. This consolidation reduces the total number of servers needed, thereby lowering energy consumption across the data center.

6. Power Capping and Monitoring: UCS provides features for power capping and monitoring, allowing administrators to set maximum power limits for individual servers or groups of servers. This helps prevent power spikes and ensures that power usage remains within predefined thresholds, thus optimizing energy efficiency.

7. Efficient Hardware Components: UCS incorporates energy-efficient hardware components such as processors, memory modules, and power supplies. These components are designed to deliver high performance while minimizing power consumption, contributing to overall energy efficiency.

Cisco UCS Blade Series Chassis and Servers facilitate greater energy efficiency through a combination of unified design, power management capabilities, efficient cooling, higher physical density, support for virtualization, and the use of energy-efficient hardware components. By leveraging these features, organizations can reduce their overall energy consumption and operational costs in the data center.
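
To make the power capping and monitoring idea concrete, here is a minimal, hypothetical Python sketch of a control loop that polls per-server power draw and throttles servers that exceed a group power budget. The `read_power_watts` and `set_power_cap` functions are placeholders for whatever management interface an environment actually exposes (for example, UCS Manager or Intersight); they are not real API calls, and the wattage values are illustrative only.

```python
import random
import time

SERVERS = ["blade-1", "blade-2", "blade-3"]
GROUP_BUDGET_WATTS = 900          # hypothetical budget for the whole server group
PER_SERVER_CAP_WATTS = 250        # cap applied when the group runs hot

def read_power_watts(server: str) -> float:
    """Placeholder for a real telemetry call; returns a simulated reading."""
    return random.uniform(200, 400)

def set_power_cap(server: str, cap_watts: float) -> None:
    """Placeholder for a real power-capping call on the management plane."""
    print(f"capping {server} at {cap_watts:.0f} W")

def enforce_group_budget() -> None:
    readings = {s: read_power_watts(s) for s in SERVERS}
    total = sum(readings.values())
    print(f"group draw: {total:.0f} W (budget {GROUP_BUDGET_WATTS} W)")
    if total > GROUP_BUDGET_WATTS:
        # Throttle the heaviest consumers first until the budget is respected.
        for server, watts in sorted(readings.items(), key=lambda kv: -kv[1]):
            set_power_cap(server, PER_SERVER_CAP_WATTS)
            total -= watts - PER_SERVER_CAP_WATTS
            if total <= GROUP_BUDGET_WATTS:
                break

for _ in range(3):                # a real control loop would run continuously
    enforce_group_budget()
    time.sleep(1)
```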

Cisco UCS C-Series Rack Servers

Cisco UCS C-Series Rack Servers are standalone servers that tend to be more flexible in terms of deployment and may be easier to cool individually. They are often more efficient in environments where fewer servers are required or when full utilization of a blade chassis is not possible. In such cases, deploying a few rack servers can be more energy-efficient than powering a partially empty blade chassis.

The Cisco UCS Rack Servers, like the Blade Series, have been designed with energy efficiency in mind. The following aspects contribute to the energy efficiency of UCS Rack Servers:

1. Modular Design: UCS Rack Servers are built with a modular design that allows for easy expansion and servicing. This means that components can be added or replaced as needed without unnecessarily wasting resources.

2. Component Efficiency: Like the Blade Series, UCS Rack Servers use high-efficiency power supplies, voltage regulators, and cooling fans. These components are chosen for their ability to deliver performance while minimizing energy consumption.

3. Thermal Design: The physical design of the UCS Rack Servers helps to optimize airflow, which can reduce the need for excessive cooling. Proper thermal management ensures that the servers maintain an optimal operating temperature, which contributes to energy savings.

4. Advanced CPUs: UCS Rack Servers are equipped with the latest processors that offer a balance between performance and power usage. These CPUs often include features that reduce power consumption when full performance is not required.

5. Energy Star Certification: Many UCS Rack Servers are Energy Star certified, meaning they meet strict energy efficiency guidelines set by the U.S. Environmental Protection Agency.

6. Management Software: Cisco’s management software allows for detailed monitoring and control of power usage across UCS Rack Servers. This software can help identify underutilized resources and optimize power settings based on the workload.

Cisco UCS Rack Servers are designed with energy efficiency as a core principle. They feature a modular design that enables easy expansion and servicing, high-efficiency components such as power supplies and cooling fans, and processors that balance performance with power consumption. The thermal design of these rack servers optimizes airflow, contributing to reduced cooling needs.

Additionally, many UCS Rack Servers have earned Energy Star certification, indicating compliance with stringent energy efficiency guidelines. Management software further enhances energy savings by allowing detailed monitoring and control over power usage, ensuring that resources are optimized according to workload demands. These factors make UCS Rack Servers a suitable choice for data centers focused on minimizing energy consumption while maintaining high performance.

Cisco UCS S-Series Storage Servers

The Cisco UCS S-Series servers are engineered to offer high-density storage solutions with scalability, which leads to considerable energy efficiency benefits when compared to the UCS B-Series blade servers and C-Series rack servers. The B-Series focuses on optimizing compute density and network integration in a blade server form factor, while the C-Series provides versatile rack-mount server solutions. In contrast, the S-Series emphasizes storage density and capacity.

Each series has its unique design optimizations; however, the S-Series can often consolidate storage and compute resources more effectively, potentially reducing the overall energy footprint by minimizing the need for additional servers and standalone storage units. This consolidation is a key factor in achieving greater energy efficiency within data centers.

The UCS S-Series servers incorporate the following features that contribute to energy efficiency:

  1. Efficient Hardware Components: Similar to other Cisco UCS servers, the UCS S-Series servers utilize energy-efficient hardware components such as processors, memory modules, and power supplies. These components are designed to provide high performance while minimizing power consumption, thereby improving energy efficiency.
  2. Scalability and Flexibility: S-Series servers are highly scalable and offer flexible configurations to meet diverse workload requirements. This scalability allows engineers to right-size their infrastructure and avoid over-provisioning, which often leads to wasteful energy consumption.
  3. Storage Optimization: UCS S-Series servers are optimized for storage-intensive workloads by offering high-density storage options within a compact form factor. With consolidated storage resources via fewer physical devices, organizations can reduce power consumption associated with managing and powering multiple storage systems.
  4. Power Management Features: S-Series servers incorporate power management features similar to other UCS servers, allowing administrators to monitor and control power usage at both the server and chassis levels. These features enable organizations to optimize power consumption based on workload demands, reducing energy waste.
  5. Unified Management: UCS S-Series servers are part of the Cisco Unified Computing System, which provides unified management capabilities for the entire infrastructure, including compute, storage, and networking components. This centralized management approach helps administrators efficiently monitor and optimize energy usage across the data center.

Cisco UCS HyperFlex HX-Series Servers

The Cisco HyperFlex HX-Series represents a fully integrated and hyperconverged infrastructure system that combines computing, storage, and networking into a simplified, scalable, and high-performance architecture designed to handle a wide array of workloads and applications.

When it comes to energy efficiency, the HyperFlex HX-Series stands out by further consolidating data center functions and streamlining resource management compared to the traditional UCS B-Series, C-Series, and S-Series. Unlike the B-Series blade servers which prioritize compute density, the C-Series rack servers which offer flexibility, or the S-Series storage servers which focus on high-density storage, the HX-Series incorporates all of these aspects into a cohesive unit. By doing so, it reduces the need for separate storage and compute layers, leading to potentially lower power and cooling requirements.

The integration inherent in hyperconverged infrastructure, such as the HX-Series, often results in higher efficiency and a smaller energy footprint as it reduces the number of physical components required, maximizes resource utilization, and optimizes workload distribution; all of this contributes to a more energy-conscious data center environment.

HyperFlex can contribute to energy efficiency in the following ways:

  1. Consolidation of Resources: HyperFlex integrates compute, storage, and networking resources into a single platform, eliminating the need for separate hardware components such as standalone servers, storage arrays, and networking switches. By consolidating these resources, organizations can reduce overall power consumption when compared to traditional infrastructure setups that require separate instances of these components.
  2. Efficient Hardware Components: HyperFlex HX-Series Servers are designed to incorporate energy-efficient hardware components such as processors, memory modules, and power supplies. These components are optimized for performance per watt, helping to minimize power consumption while delivering robust compute and storage capabilities.
  3. Dynamic Resource Allocation: HyperFlex platforms often include features for dynamic resource allocation and optimization. This may include technologies such as VMware Distributed Resource Scheduler (DRS) or Cisco Intersight Workload Optimizer, which intelligently distribute workloads across the infrastructure to maximize resource utilization and minimize energy waste.
  4. Software-Defined Storage Efficiency: HyperFlex utilizes software-defined storage (SDS) technology, which allows for more efficient use of storage resources compared to traditional storage solutions. Features such as deduplication, compression, and thin provisioning help to reduce the overall storage footprint, resulting in lower power consumption associated with storage devices.
  5. Integrated Management and Automation: HyperFlex platforms typically include centralized management and automation capabilities that enable administrators to efficiently monitor and control the entire infrastructure from a single interface. This integrated management approach can streamline operations, optimize resource usage, and identify opportunities for energy savings.
  6. Scalability and Right-Sizing: HyperFlex allows organizations to scale resources incrementally by adding additional server nodes to the cluster as needed. This scalability enables organizations to custom fit their infrastructure and avoid over-provisioning, which can lead to unnecessary energy consumption.
  7. Efficient Cooling Design: HyperFlex systems are designed with careful attention to efficient cooling to maintain optimal operating temperatures for the hardware components. By optimizing airflow and cooling mechanisms within the infrastructure, HyperFlex helps minimize energy consumption associated with cooling systems.

Cisco UCS X-Series Modular System

The Cisco UCS X-Series is a versatile and innovative computing platform that elevates the concept of a modular system to new heights, offering a flexible, future-ready solution for the modern data center. It stands apart from the traditional UCS B-Series blade servers, C-Series rack servers, S-Series storage servers, and even the integrated HyperFlex HX-Series hyperconverged systems, in that it provides a unique blend of adaptability and scalability. The X-Series is designed with a composable infrastructure that allows dynamic reconfiguration of computing, storage, and I/O resources to match specific workload requirements.

In terms of energy efficiency, the UCS X-Series is engineered to streamline power usage by dynamically adapting to the demands of various applications. It achieves this through a technology that allows components to be powered on and off independently, which can lead to significant energy savings compared to the always-on nature of B-Series and C-Series servers. While the S-Series servers are optimized for high-density storage, the X-Series can reduce the need for separate high-capacity storage systems by incorporating storage elements directly into its composable framework. Furthermore, compared to the HyperFlex HX-Series, the UCS X-Series may offer even more granular control over resource allocation, potentially leading to even better energy management and waste reduction.

The UCS X-Series platform aims to set a new standard for sustainability by optimizing power consumption across diverse workloads, minimizing environmental impact, and lowering the total cost of ownership (TCO) through improved energy efficiency. By intelligently consolidating and optimizing resources, the X-Series promises to be, and has proven itself as, a forward-looking solution that responds to the growing need for eco-friendly and cost-effective data center operations.

The Cisco UCS X-Series can contribute to energy efficiency in the following ways:

  1. Integrated Architecture: Cisco UCS X-Series combines compute, storage, and networking into a unified system, reducing the need for separate components. This consolidation leads to lower overall energy consumption compared to traditional data center architectures.
  2. Energy-Efficient Components: The UCS X-Series is built with the latest energy-efficient technologies: CPUs, memory modules, and power supplies in the X-Series are selected for their performance-to-power-consumption ratio, ensuring that energy use is optimized without sacrificing performance.
  3. Intelligent Workload Placement: Cisco UCS X-Series can utilize Cisco Intersight and other intelligent resource management tools to distribute workloads intelligently and efficiently across available resources, optimizing power usage and reducing unnecessary energy expenditure.
  4. Software-Defined Storage Benefits: The X-Series can leverage software-defined storage which often includes features like deduplication, compression, and thin provisioning to make storage operations more efficient and reduce the energy needed for data storage.
  5. Automated Management: With Cisco Intersight, the X-Series provides automated management and orchestration across the infrastructure, helping to streamline operations, reduce manual intervention, and cut down on energy usage through improved allocation of resources.
  6. Scalable Infrastructure: The modular design of the UCS X-Series allows for easy scalability, thus allowing organizations to add resources only as needed. This helps prevent over-provisioning and the energy costs associated with idle equipment.
  7. Optimized Cooling: The X-Series chassis is designed with cooling efficiency in mind, using advanced airflow management and heat sinks to keep components at optimal temperatures. This reduces the amount of energy needed for cooling infrastructure.

Mindful energy consumption without compromise


Cisco’s UCS offers a robust and diverse suite of server solutions, each engineered to address the specific demands of modern-day data centers with a sharp focus on energy efficiency. The UCS B-Series and C-Series each bring distinct advantages in terms of integration, computing density, and flexible scalability, while the S-Series specializes in high-density storage capabilities. The HyperFlex HX-Series advances the convergence of compute, storage, and networking, streamlining data center operations and energy consumption. Finally, the UCS X-Series represents the pinnacle of modularity and future-proof design, delivering unparalleled flexibility to dynamically meet the shifting demands of enterprise workloads.

Across this entire portfolio, from the B-Series to the X-Series, Cisco has infused an ethos of sustainability, incorporating energy-efficient hardware, advanced power management, and intelligent cooling designs. By optimizing the use of resources, embracing virtualization, and enabling scalable, granular infrastructure deployments, Cisco’s UCS platforms are not just powerful computing solutions but are also catalysts for energy-conscious, cost-effective, and environmentally responsible data center operations.

For organizations navigating the complexities of digital transformation while balancing operational efficiency with the goal of sustainability, the Cisco UCS lineup stands ready to deliver performance that powers growth without compromising on our commitment to a greener future.

Thursday, 25 April 2024

Understanding the Differences between SD-WAN and MPLS

In the realm of networking, SD-WAN and MPLS are two terms that frequently arise, each offering distinct advantages and functionalities. In this comprehensive guide, we delve into the nuances of these technologies, providing clarity on their disparities and assisting you in making informed decisions for your network infrastructure.

What is SD-WAN?


SD-WAN, or Software-Defined Wide Area Network, is a modern approach to networking that utilizes software-defined networking (SDN) concepts to intelligently manage and optimize Wide Area Network (WAN) connections. Unlike traditional WAN setups that rely heavily on hardware, SD-WAN leverages software to dynamically route traffic across the network based on predefined policies and conditions.
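
As an informal illustration of the policy-driven, condition-aware routing described above, the following Python sketch scores candidate WAN links against a per-application policy and picks the cheapest path that still meets the application’s SLA. The link metrics, thresholds, and names are invented for illustration; real SD-WAN platforms implement this logic in their own ways.

```python
# Candidate WAN paths with hypothetical real-time measurements.
paths = {
    "mpls":      {"latency_ms": 35, "loss_pct": 0.1, "jitter_ms": 2,  "cost": 10},
    "broadband": {"latency_ms": 55, "loss_pct": 0.6, "jitter_ms": 8,  "cost": 2},
    "lte":       {"latency_ms": 90, "loss_pct": 1.5, "jitter_ms": 20, "cost": 5},
}

# Per-application SLA policy: maximum tolerated latency, loss, and jitter.
policies = {
    "voice": {"latency_ms": 150, "loss_pct": 1.0, "jitter_ms": 30},
    "bulk":  {"latency_ms": 500, "loss_pct": 5.0, "jitter_ms": 100},
}

def select_path(app: str) -> str:
    """Pick the cheapest path that still satisfies the application's SLA."""
    sla = policies[app]
    eligible = [name for name, m in paths.items()
                if m["latency_ms"] <= sla["latency_ms"]
                and m["loss_pct"] <= sla["loss_pct"]
                and m["jitter_ms"] <= sla["jitter_ms"]]
    if not eligible:                       # fall back to the lowest-latency path
        return min(paths, key=lambda n: paths[n]["latency_ms"])
    return min(eligible, key=lambda n: paths[n]["cost"])

print(select_path("voice"))   # 'broadband': meets the voice SLA at the lowest cost
print(select_path("bulk"))
```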

Key Features of SD-WAN:


  1. Centralized Management: SD-WAN solutions offer centralized management interfaces that provide administrators with granular control over network configurations and traffic flow.
  2. Dynamic Path Selection: With SD-WAN, traffic is intelligently routed across multiple network paths, including broadband, MPLS, and LTE, based on real-time conditions such as link quality and latency.
  3. Application Awareness: SD-WAN platforms often incorporate deep packet inspection and application recognition capabilities, allowing for the prioritization of critical applications and traffic shaping based on application requirements.
  4. Cost Efficiency: By leveraging lower-cost internet connections alongside more expensive MPLS links, SD-WAN can significantly reduce WAN expenses without compromising performance or reliability.

Understanding MPLS


MPLS, or Multiprotocol Label Switching, is a legacy networking technology commonly used for building private, high-performance WANs. MPLS operates by assigning labels to network packets, enabling routers to make forwarding decisions based on these labels rather than IP addresses.
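
As a simplified illustration of label switching, the sketch below models an MPLS label forwarding table (LFIB) in Python: each router looks up the incoming label, swaps it for an outgoing label, and forwards toward the next hop without consulting the IP header. The routers and label values are made up for illustration.

```python
# Per-router label forwarding tables: incoming label -> (action, outgoing label, next hop).
LFIB = {
    "R1": {100: ("swap", 200, "R2")},
    "R2": {200: ("swap", 300, "R3")},
    "R3": {300: ("pop",  None, "customer-edge")},   # egress router pops the label
}

def forward(router: str, label: int, payload: str) -> None:
    action, out_label, next_hop = LFIB[router][label]
    if action == "swap":
        print(f"{router}: swap {label} -> {out_label}, forward to {next_hop}")
        forward(next_hop, out_label, payload)
    else:  # "pop": remove the label and deliver the underlying IP packet
        print(f"{router}: pop {label}, deliver '{payload}' to {next_hop}")

forward("R1", 100, "ip-packet")
```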

Key Features of MPLS:


  1. Traffic Engineering: MPLS networks support traffic engineering capabilities, allowing administrators to optimize network paths and allocate bandwidth efficiently.
  2. Quality of Service (QoS): MPLS offers robust QoS mechanisms, ensuring that critical applications receive the necessary bandwidth and latency guarantees to maintain optimal performance.
  3. Security: MPLS inherently provides a higher level of security compared to public internet connections, as traffic remains within the confines of the private MPLS network, reducing exposure to external threats.
  4. Reliability: MPLS networks are known for their reliability and predictability, making them ideal for applications that require consistent performance and uptime.

Contrasting SD-WAN and MPLS


While both SD-WAN and MPLS serve the purpose of connecting geographically dispersed locations within an organization, they differ significantly in terms of architecture, cost, and flexibility.

Architecture:

  • SD-WAN: SD-WAN architectures are decentralized and software-driven, offering flexibility and scalability to adapt to changing network requirements rapidly.
  • MPLS: MPLS networks are centralized and hardware-dependent, typically requiring substantial upfront investments in infrastructure and equipment.

Cost:

  • SD-WAN: SD-WAN solutions often provide cost savings compared to MPLS, particularly for organizations with diverse connectivity requirements or those seeking to augment MPLS with lower-cost internet links.
  • MPLS: MPLS services can be costly, primarily due to the need for dedicated circuits and long-term contracts with service providers.

Flexibility:

  • SD-WAN: SD-WAN architectures offer unparalleled flexibility, allowing organizations to seamlessly integrate various transport technologies and cloud services into their network environments.
  • MPLS: MPLS networks are less flexible, with limited support for cloud connectivity and scalability compared to SD-WAN solutions.

Conclusion

In summary, both SD-WAN and MPLS have their merits and are suited to different network environments and business requirements. SD-WAN excels in providing agility, cost efficiency, and flexibility, making it an attractive option for organizations seeking to modernize their network infrastructure. On the other hand, MPLS offers reliability, security, and quality of service, making it well-suited for mission-critical applications and industries with stringent compliance requirements.

Ultimately, the choice between SD-WAN and MPLS depends on factors such as budget, performance needs, and organizational priorities. By understanding the nuances of each technology, organizations can make informed decisions that align with their strategic objectives and drive business success.

Tuesday, 23 April 2024

Find Your Path to Unmatched Security and Unified Experiences

Imagine juggling multiple remotes for your entertainment system, each controlling a different device and requiring endless button presses to achieve a simple task. This is what managing a complex network security landscape can feel like—a jumble of disparate solutions, each demanding your attention and contributing to confusion.

Today’s IT environment is no stranger to complexity. The rise of hybrid work, multicloud adoption, and more sophisticated cyberthreats have created a security landscape that traditional, siloed solutions simply cannot keep pace with. This leaves organizations vulnerable, jeopardizing the security of their data, applications, and user trust.

This is where convergence comes in. It’s just like having a single, universal remote for your entertainment system.

Secure access service edge (SASE) is this “universal remote” for your network security. It offers a converged approach that combines networking and security into a single, cloud-delivered service. By bringing security closer to the user and the cloud edge, organizations can help ensure comprehensive protection regardless of the user’s location or access point.

However, adopting SASE can feel like navigating a maze. Different vendors, complex integrations, and lengthy implementation times can leave you feeling lost. At Cisco, we understand the challenges you face and the need for simplicity. That’s why we’re committed to making your SASE journey simpler and more efficient.

Figure 1: Evolve to full SASE—Catalyst SD-WAN and Secure Access integration

Introducing the integration of Cisco Catalyst SD-WAN and Cisco Secure Access, a cloud-delivered security service edge (SSE) solution. It’s a single, integrated SASE solution that unifies the power of Cisco Catalyst SD-WAN with the robust security of our SSE solution, Cisco Secure Access. This powerful duo forms the foundation of our integrated Cisco SASE solution, offering a simplified path to robust security and streamlined management.

You can think of Catalyst SD-WAN as the intelligent highway, optimizing network traffic flow and ensuring reliable connectivity. Cisco Secure Access, meanwhile, functions as the tollbooth or security checkpoint, allowing only authorized users and devices access. When these two solutions are integrated, they offer a streamlined and efficient approach to SASE, helping to ensure secure and efficient access for your data, applications, and users.

Catalyst SD-WAN and Cisco Secure Access (SSE) combine to transform your network’s performance and security. Through Catalyst SD-WAN’s advanced networking technology, your data is intelligently routed along the most efficient pathways, optimizing cloud application performance and reducing latency by connecting users to the nearest point of presence (PoP). This ensures enhanced redundancy and supports the high bandwidth demands of specialized regional sites, underpinning your network’s scalability and agility.

Cisco Secure Access serves as a robust cloud-based security shield, embodying the zero-trust approach by thoroughly verifying and continuously monitoring each access attempt, while diligently scanning internet traffic to safeguard your network against the spectrum of emerging cyberthreats.

The integration simplifies the transition to SASE by eliminating the complexities of multivendor environments. A unified management platform offers centralized control and oversight of both networking and security functions, significantly reducing operational complexity and saving IT resources. This comprehensive control enhances decision making, streamlines workflows, and ensures a cohesive security posture across the entire network infrastructure.

Let’s explore how this integrated solution empowers you to address common security challenges.

  • Securing branch offices and internet SaaS traffic: Branch offices and roaming users are particularly vulnerable to cyberthreats, especially with the growing adoption of Direct Internet Access (DIA). Our seamless integration extends robust cloud security across your entire SD-WAN fabric, protecting branch offices and users accessing internet and cloud-based applications.
  • Empowering zero-trust security: Our solution requires rigorous verification for every access attempt. This continuous monitoring approach ensures only authorized users and devices gain access to critical resources. By leveraging Cisco segmentation and micro-segmentation capabilities, you can effectively isolate critical network segments and resources, significantly reducing the attack surface and hindering unauthorized access.
  • Rapid deployment: Through the Cisco automation framework, you can rapidly deploy secure connectivity for hundreds or thousands of branch sites to Cisco Secure Access within minutes. This eliminates the need for complex, time-consuming manual configurations.
  • Streamlined customer onboarding: The streamlined purchasing process through the buying tool not only simplifies acquiring licenses but also automatically initiates the creation of tenant spaces tailored for your organization. This pivotal feature represents a significant value-add, seamlessly transitioning customers from the acquisition phase to operational readiness.

The benefits of this integrated SASE solution go beyond just simplifying your security stack, and include:

  • Enhanced security: Elevate protection for internet and SaaS traffic at branch offices, while effortlessly steering traffic for additional security. Benefit from a comprehensive suite of security features, including secure web gateway (SWG), Cloud Access Security Broker (CASB), data loss prevention (DLP), zero trust network access (ZTNA), firewall-as-a-service (FWaaS), and IPS.
  • Meet converged networking and security needs at scale: Deploy robust SASE architectures on top of your existing Catalyst 8000 series routers for high-throughput branch sites.
  • Distributed security enforcement: Offers tailored security, efficient traffic management, and enhanced protection by combining on-premises NGFW on the Catalyst 8000 with cloud-based Cisco Secure Access, providing flexibility, scalability, and cost efficiency. This model enables organizations to tailor their security posture to specific needs, offering a robust defense against cyberthreats and empowering them to manage demanding network traffic with strong security measures.
  • Operational efficiency: Simplify security implementation with policy-based routing and automated failover, minimizing complexity and ensuring smooth operation.
  • Enhanced user experience: Deliver consistent, unwavering security for roaming users, regardless of location, for a more seamless user experience.
  • Unparalleled agility: Scale security effortlessly to adapt to your evolving environment, enabling rapid and flexible responses to changing demands.
  • Unmatched network visibility and troubleshooting: Combining Cisco Catalyst SD-WAN, ThousandEyes, and Secure Access delivers exceptional network visibility and troubleshooting capabilities. This powerful integration optimizes traffic flow, enhances digital experience assurance by securing user connections, and ensures robust connectivity across your entire network. Gain a comprehensive view of network health, streamline problem resolution, and create a resilient and efficient digital environment.
  • Always ahead of threats: Leverage the power of Cisco Talos threat intelligence for real-time insights that identify, correlate, and remediate threats at exceptional speed.

Jumpstart your SASE journey with ease


The integrated power of Cisco Catalyst SD-WAN and Cisco Secure Access unlocks a scalable, secure, and simplified path to SASE. This powerful combination, merging the best of networking and security into a single solution, delivers a unified experience for both IT and users. Centralized management of your entire network and security posture streamlines operations and simplifies SASE adoption. Additionally, users enjoy unmatched security with consistent protection across the network, regardless of location.

Source: cisco.com

Saturday, 20 April 2024

Cisco Hypershield: Reimagining Security

It is no secret that cybersecurity defenders struggle to keep up with the volume and craftiness of current-day cyber-attacks. A significant reason for the struggle is that security infrastructure has yet to evolve to effectively and efficiently stymie modern attacks. The security infrastructure is either too unwieldy and slow or too destructive. When the security infrastructure is slow and unwieldy, the attackers have likely succeeded by the time the defenders react. When security actions are too drastic, they impair the protected IT systems to such an extent that the actions could be mistaken for the attack itself.

So, what does a defender do? The answer to the defender’s problem is a new security infrastructure — a fabric — that can autonomously create defenses and produce measured responses to detected attacks. Cisco has created such a fabric — Cisco Hypershield — that we discuss in the paragraphs below.

Foundational principles


We start with the foundational principles that guided the creation of Cisco Hypershield. These principles provide the primitives that enable defenders to escape the “damned-if-you-do and damned-if-you-don’t” situation we alluded to above.

Hyper-distributed enforcement

IT infrastructure in a modern enterprise spans privately run data centers (private cloud), public cloud, bring-your-own devices (BYOD) and the Internet of Things (IoT). In such a heterogeneous environment, centralized enforcement is inefficient as traffic must be shuttled to and from the enforcement point. The shuttling creates networking and security design challenges. The answer to this conundrum is the distribution of the enforcement point close to the workload.

Cisco Hypershield comes in multiple enforcement form factors to suit the heterogeneity in any IT environment:

1. Tesseract Security Agent: Here, security software runs on the endpoint server and interacts with the processes and the operating system kernel using the extended Berkeley Packet Filter (eBPF). eBPF is a software framework on modern operating systems that enables programs in user space (in this case, the Tesseract Security Agent) to safely carry out enforcement and monitoring actions via the kernel (see the generic eBPF sketch after this list).
2. Virtual/Container Network Enforcement Point: Here, a software network enforcement point runs inside a virtual machine or container. Such enforcement points are instantiated close to the workload and protect fewer assets than the typical centralized firewall.
3. Server DPUs: Cisco Hypershield’s architecture supports server Data Processing Units (DPUs). Thus, in the future, enforcement can be placed on networking hardware close to the workloads by running a hardware-accelerated version of our network enforcement point in these DPUs. The DPUs offload networking and security processing from the server’s main CPU complex in a secure enclave.
4. Smart Switches: Cisco Hypershield’s architecture also supports smart switches. In the future, enforcement will be placed in other Cisco Networking elements, such as top-of-rack smart switches. While not as close to the workload as agents or DPUs, such switches are much closer than a centralized firewall appliance.
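
To ground the eBPF mechanism mentioned in item 1, here is a generic observability sketch using the open-source bcc toolkit: a small eBPF program is loaded from user-space Python, attached to the execve() syscall, and counts process executions per PID from inside the kernel. This is only a minimal illustration of the eBPF pattern; it is unrelated to the actual implementation of the Tesseract Security Agent, and it requires root privileges plus the bcc package to run.

```python
from time import sleep
from bcc import BPF  # requires the bcc toolkit and root privileges

# Kernel-side eBPF program: count execve() syscalls per process ID.
prog = r"""
#include <uapi/linux/ptrace.h>

BPF_HASH(exec_counts, u32);

int trace_execve(struct pt_regs *ctx) {
    u32 pid = bpf_get_current_pid_tgid() >> 32;
    exec_counts.increment(pid);
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event=b.get_syscall_fnname("execve"), fn_name="trace_execve")

sleep(10)  # observe the system for ten seconds
for pid, count in b["exec_counts"].items():
    print(f"pid {pid.value}: {count.value} execve() calls")
```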

Centralized security policy

The usual retort to distributed security enforcement is the nightmare of managing independent security policies per enforcement point. The cure for this problem is the centralization of security policy, which ensures that policy consistency is systematically enforced (see Figure 1).

Cisco Hypershield follows the path of policy centralization. No matter the form factor or location of the enforcement point, the policy being enforced is organized at a central location by Hypershield’s management console. When a new policy is created or an old one is updated, it is “compiled” and intelligently placed on the appropriate enforcement points. Security administrators always have an overview of the deployed policies, no matter the degree of distribution in the enforcement points. Policies are able to follow workloads as they move, for instance, from on-premises to the native public cloud.
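
The compile-and-distribute idea can be pictured with a small, purely illustrative Python sketch: a central, tier-level policy is expanded into concrete rule sets for each enforcement point according to which workloads that point protects. The data model is invented for illustration and says nothing about Hypershield’s actual policy language or placement logic.

```python
# Central policy: allow the web tier to reach the database tier on TCP/5432; deny all else.
central_policy = [
    {"src_tier": "web", "dst_tier": "db", "port": 5432, "action": "allow"},
    {"src_tier": "*",   "dst_tier": "db", "port": None, "action": "deny"},
]

# Where each workload lives and which enforcement point covers it.
workloads = {
    "web-1": {"tier": "web", "enforcement_point": "agent-host-7"},
    "db-1":  {"tier": "db",  "enforcement_point": "dpu-rack-2"},
}

def compile_policy() -> dict[str, list[dict]]:
    """Expand tier-level rules into concrete rules per enforcement point."""
    compiled: dict[str, list[dict]] = {}
    for rule in central_policy:
        for name, wl in workloads.items():
            if rule["dst_tier"] in ("*", wl["tier"]):
                compiled.setdefault(wl["enforcement_point"], []).append(
                    {"protects": name, **rule})
    return compiled

for point, rules in compile_policy().items():
    print(point, rules)   # each enforcement point receives only the rules it must enforce
```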

Figure 1: Centralized Management for Distributed Enforcement
 
Hitless enforcement point upgrade

The nature of security controls is such that they tend to get outdated quickly. Sometimes, this happens because a new software update has been released. Other times, new applications and business processes force a change in security policy. Traditionally, neither scenario has been accommodated well by enforcement points — both acts can be disruptive to the IT infrastructure and present a business risk that few security administrators want to undertake. A mechanism that makes software and policy updates normal and non-disruptive is called for!

Cisco Hypershield has precisely such a mechanism, called the dual dataplane. This dataplane supports two data paths: a primary (main) and a secondary (shadow). Traffic is replicated between the primary and the secondary. Software updates are first applied to the secondary dataplane, and when fully vetted, the roles of the primary and secondary dataplanes are switched. Similarly, new security policies can be applied first to the secondary dataplane, and when everything looks good, the secondary becomes the primary.

The dual dataplane concept enables security administrators to upgrade enforcement points without fear of business disruption (see Figure 2).
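
A rough mental model of the dual dataplane, expressed as a small Python sketch: traffic is mirrored to both planes, an update is applied only to the shadow plane, and the roles are swapped once the shadow’s behavior has been vetted. This is purely conceptual and not how Hypershield is actually implemented.

```python
class Dataplane:
    def __init__(self, name: str, policy_version: str):
        self.name = name
        self.policy_version = policy_version

    def process(self, packet: str) -> str:
        return f"{self.name} ({self.policy_version}) processed {packet}"

primary = Dataplane("primary", "policy-v1")
shadow = Dataplane("shadow", "policy-v1")

def handle(packet: str) -> str:
    # Traffic is replicated: the primary's verdict is used, the shadow's is only observed.
    observed = shadow.process(packet)
    return primary.process(packet)

# Apply an update to the shadow plane first and vet it against live traffic.
shadow.policy_version = "policy-v2"
for pkt in ["pkt-1", "pkt-2"]:
    print(handle(pkt))

# If the shadow behaves as expected, swap roles with no service interruption.
primary, shadow = shadow, primary
print(handle("pkt-3"))  # now served by the upgraded plane
```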

Figure 2: Cisco Hypershield Dual Dataplane 

Complete visibility into workload actions

Complete visibility into a workload’s actions enables the security infrastructure to establish a “fingerprint” for it. Such a fingerprint should include the types of network and file input-output (I/O) that the workload typically performs. When the workload takes an action that falls outside the fingerprint, the security infrastructure should flag it as an anomaly that requires further investigation.

Cisco Hypershield’s Tesseract Security Agent form factor provides complete visibility into a workload’s actions via eBPF, including network packets, file I/O, other system calls, and kernel functions. Of course, the agent alerts on anomalous activity when it sees it.

Graduated response to risky workload behavior

Security tools amplify the disruptive capacity of cyber-attacks when they take drastic action on a security alert. Examples of such action include quarantining a workload or the entire application from the network and shutting down the workload or application. For workloads of marginal business importance, drastic action may be fine. However, taking such action for mission-critical applications (for example, a supply chain application for a retailer) often defeats the business rationale for security tools. The disruptive action hurts even more when the security alert turns out to be a false alarm.

Cisco Hypershield in general, and its Tesseract Security Agent in particular, can generate a graduated response. For example, Cisco Hypershield can respond to anomalous traffic with an alert rather than a block when instructed. Similarly, the Tesseract Security Agent can react to a workload attempting to write to a new file location by denying that write rather than shutting down the workload.

Continuous learning from network traffic and workload behavior

Modern-day workloads use services provided by other workloads. These workloads also access many operating system resources such as network and file I/O. Further, applications are composed of multiple workloads. A human security administrator can’t collate all the applications’ activity and establish a baseline. Reestablishing the baseline is even more challenging when new workloads, applications and servers are added to the mix. With this backdrop, manually determining anomalous behavior is impossible. The security infrastructure needs to do this collation and sifting on its own.

Cisco Hypershield has components embedded into each enforcement point that continuously learn the network traffic and workload behavior. The enforcement points periodically aggregate their learning into a centralized repository. Separately, Cisco Hypershield sifts through the centralized repository to establish a baseline for network traffic and workloads’ behavior. Cisco Hypershield also continuously analyzes new data from the enforcement points as the data comes in to determine if recent network traffic and workload behavior is anomalous relative to the baseline.
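
A toy example of the baseline-and-anomaly idea, kept deliberately simple: the sketch below learns the mean and standard deviation of a workload’s outbound connection rate from historical samples, then flags new observations that deviate by more than three standard deviations. Hypershield’s actual models are far more sophisticated; the numbers and threshold here are illustrative only.

```python
import statistics

# Historical per-minute outbound connection counts for one workload (the learned baseline).
history = [42, 38, 45, 40, 44, 39, 41, 43, 37, 46]
mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_anomalous(observation: float, threshold: float = 3.0) -> bool:
    """Flag observations more than `threshold` standard deviations from the baseline."""
    return abs(observation - mean) > threshold * stdev

for sample in [44, 48, 180]:
    status = "ANOMALY" if is_anomalous(sample) else "normal"
    print(f"{sample} connections/min -> {status}")
```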

Autonomous segmentation


Network segmentation has long been a mandated necessity in enterprise networks. Yet, even after decades of investment, many networks remain flat or under-segmented. Cisco Hypershield provides an elegant solution to these problems by combining the primitives mentioned above. The result is a network autonomously segmented under the security administrator’s supervision.

The autonomous segmentation journey proceeds as follows:

  • The security administrator begins with top-level business requirements (such as isolating the production environment from the development environment) to deploy basic guardrail policies.
  • After initial deployment, Cisco Hypershield collects, aggregates, and visualizes network traffic information while running in an “Allow by Default” mode of operation.
  • Once there is sufficient confidence in the functions of the application, we move to “Allow but Alert by Default” and insert the known trusted behaviors of the application as Allow rules above this. The administrator continues to monitor the network traffic information collected by Cisco Hypershield. The monitoring leads to increased familiarity with traffic patterns and the creation of additional common-sense security policies at the administrator’s initiative.
  • Even as the guardrail and common-sense policies are deployed, Cisco Hypershield continues learning the traffic patterns between workloads. As the learning matures, Hypershield makes better (and better) policy recommendations to the administrator.
  • This phased approach allows the administrator to build confidence in the recommendations over time. At the outset, the policies are deployed only to the shadow dataplane. Cisco Hypershield provides performance data on the new policies on the secondary and existing policies on the primary dataplane. If the behavior of the new policies is satisfactory, the administrator moves them in alert-only mode to the primary dataplane. The policies aren’t blocking anything yet, but the administrator can get familiar with the types of flows that would be blocked if they were in blocking mode. Finally, with conviction in the new policies, the administrator turns on blocking mode, progressing towards the enterprise’s segmentation goal.

The administrator’s faith in the security fabric — Cisco Hypershield — deepens after a few successful runs through the segmentation process. Now, the administrator can let the fabric do most of the work, from learning to monitoring to recommendations to deployment. Should there be an adverse business impact, the administrator knows that rollback to a previous set of policies can be accomplished easily via the dual dataplane.

Distributed exploit protection


Patching known vulnerabilities remains an intractable problem given the complex web of events — patch availability, patch compatibility, maintenance windows, testing cycles, and the like — that must transpire to remove the vulnerability. At the same time, new vulnerabilities continue to be discovered at a frenzied pace, and attackers continue to shrink the time between the public release of new vulnerability information and the first exploit. The result is that the attacker’s options towards a successful exploit increase with time.

Cisco Hypershield provides a neat solution to the problem of vulnerability patching. In addition to its built-in vulnerability management capabilities, Hypershield will integrate with Cisco’s and third-party commercial vulnerability management tools. When information on a new vulnerability becomes available, the vulnerability management capability and Hypershield coordinate to check for the vulnerability’s presence in the enterprise’s network.

If an application with a vulnerable workload is found, Cisco Hypershield can protect it from exploits. Cisco Hypershield already has visibility into the affected workload’s interaction with the operating system and the network. At the security administrator’s prompt, Hypershield suggests compensating controls. The controls are a combination of network security policies and operating system restrictions and derive from the learned steady-state behavior of the workload preceding the vulnerability disclosure.

The administrator installs both types of controls in alert-only mode. After a period of testing to build confidence in the controls, the operating system controls are moved to blocking mode. The network controls follow the same trajectory as those in autonomous segmentation. They are first installed on the shadow dataplane, then on the primary dataplane in alert-only mode, and finally converted to blocking mode. At that point, the vulnerable workload is protected from exploits.

During the process described above, the application and the workload continue functioning, and there is no downtime. Of course, the vulnerable workload should eventually be patched if possible. The security fabric enabled by Cisco Hypershield just happens to provide administrators with a robust yet precise tool to fend off exploits, giving the security team time to research and fix the root cause.

Conclusion

In both the examples discussed above, we see Cisco Hypershield function as an effective and efficient security fabric. The innovation powering this fabric is underscored by the several patents pending at its launch.

In the case of autonomous segmentation, Hypershield turns flat and under-segmented networks into properly segmented ones. As Hypershield learns more about traffic patterns and security administrators become comfortable with its operations, the segments become tighter, posing more significant hurdles for would-be attackers.

In the case of distributed exploit protection, Hypershield automatically finds and recommends compensating controls. It also provides a smooth and low-risk path to deploying these controls. With the compensating controls in place, the attacker’s window of opportunity between the vulnerability’s disclosure and the software patching effort disappears.

Source: cisco.com

Thursday, 18 April 2024

The Journey: Quantum’s Yellow Brick Road

The world of computing is undergoing a revolution with two powerful forces converging: Quantum Computing (QC) and Generative Artificial Intelligence (GenAI). While GenAI is generating excitement, it’s still finding its footing in real-world applications. Meanwhile, QC is rapidly maturing, offering solutions to complex problems in fields like drug discovery and material science.

This journey, however, isn’t without its challenges. Just like Dorothy and her companions in the Wizard of Oz, we face obstacles along the yellow brick road. This article aims to shed light on these challenges and illuminate a path forward.

From Bits to Qubits: A New Kind of Switch


Traditionally, computers relied on bits, simple switches that are either on (1) or off (0). Quantum computers, on the other hand, utilize qubits. These special switches can be 1, 0, or both at the same time (superposition). This unique property allows them to tackle problems that are impossible or incredibly difficult for traditional computers. Imagine simulating complex molecules for drug discovery or navigating intricate delivery routes – these are just a few examples of what QC excels at.
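
For readers who want to see superposition in symbols, here is a tiny state-vector calculation in Python with NumPy: applying a Hadamard gate to a qubit initialized to |0⟩ puts it into an equal superposition of |0⟩ and |1⟩, so each measurement outcome occurs with probability 1/2. This is standard textbook math, not tied to any particular quantum hardware.

```python
import numpy as np

ket0 = np.array([1.0, 0.0])                 # |0> as a two-element state vector
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)        # Hadamard gate

state = H @ ket0                            # equal superposition (|0> + |1>) / sqrt(2)
probabilities = np.abs(state) ** 2          # Born rule: measurement probabilities

print(state)          # [0.70710678 0.70710678]
print(probabilities)  # [0.5 0.5] -> 50/50 chance of measuring 0 or 1
```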

The Power and Peril of Quantum Supremacy


With great power comes great responsibility and potential danger. In 1994, Peter Shor developed a quantum algorithm that could break widely used public-key cryptography like RSA, the security system protecting much of our data. The algorithm leverages the unique properties of qubits, namely superposition, entanglement, and interference, to crack encryption codes. While the exact timeframe is uncertain (estimates range from 3 to 10 years), some experts believe a powerful enough quantum computer could eventually compromise this system.

This vulnerability highlights the “Steal Now, Decrypt Later” (SNDL) strategy employed by some nation-states. They can potentially intercept and store encrypted data now, decrypting it later with a powerful quantum computer. Experts believe SNDL operates like a Man in the Middle attack, where attackers secretly intercept communication and potentially alter data flowing between two parties.

The Intersection of GenAI and Quantum: A Security Challenge


The security concerns extend to GenAI, as well. GenAI models are trained on massive datasets, often containing sensitive information like code, images, or medical records. Currently, this data is secured with RSA-2048 encryption, which could be vulnerable to future quantum computers.

The Yellow Brick Road to Secure Innovation


Imagine a world where GenAI accelerates drug discovery by rapidly simulating millions of potential molecules and interactions. This could revolutionize healthcare, leading to faster cures for life-threatening illnesses. However, the sensitive nature of this data requires the highest level of security. GenAI is our powerful ally, churning out potential drug candidates at an unprecedented rate, but we can’t share this critical data with colleagues or partners for fear of intellectual property theft while it is in transit. Enter a revolutionary system that combines the power of GenAI with Post-Quantum Cryptography (PQC), encryption that is expected to withstand attacks from quantum computers. This “quantum-resistant” approach would allow researchers to collaborate globally, accelerating the path to groundbreaking discoveries.

Benefits

  • Faster Drug Discovery: GenAI acts as a powerful tool, rapidly analyzing vast chemical landscapes. It identifies potential drug candidates and minimizes potential side effects with unprecedented speed, leading to faster development of treatments.
  • Enhanced Collaboration: PQC encryption allows researchers to securely share sensitive data. This fosters global collaboration, accelerating innovation and bringing us closer to achieving medical breakthroughs.
  • Future-Proof Security: Dynamic encryption keys and PQC algorithms ensure the protection of valuable intellectual property from cyberattacks, even from future threats posed by quantum computers and advanced AI.
  • Foundational Cryptography: GenAI and Machine Learning (ML) will become the foundation of secure and adaptable communication systems, giving businesses and governments more control over their cryptography.
  • Zero-Trust Framework: The transition to the post-quantum world is creating a secure, adaptable, and identity-based communication network. This foundation paves the way for a more secure digital landscape.

Challenges

  • GenAI Maturity: While promising, GenAI models are still under development and can generate inaccurate or misleading results. Refining these models requires ongoing research and development to ensure accurate and reliable output.
  • PQC Integration: Integrating PQC algorithms into existing systems can be complex and requires careful planning and testing. This process demands expertise and a strategic approach. NIST is delivering standardized post-quantum algorithms (expected by summer 2024).
  • Standardization: As PQC technology is still evolving, standardization of algorithms and protocols is crucial for seamless adoption. This would ensure that everyone is using compatible systems.
  • Next-Generation Attacks: Previous cryptography standards did not have to contend with AI-powered attacks. Defending against them will necessitate the use of AI in encryption and key management, creating a continuously evolving landscape.
  • Orchestration: Cryptography is embedded in almost every electronic device. Managing this requires an orchestration platform that can efficiently manage, monitor, and update encryption across all endpoints (a minimal inventory sketch follows this list).
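
As a minimal, hypothetical illustration of that orchestration point, the Python sketch below inventories the TLS certificates presented by a list of endpoints and flags quantum-vulnerable public keys, the kind of baseline data an orchestration platform needs before it can plan a PQC migration. The hostnames are examples only, and a real platform would cover far more than public TLS endpoints.

    import socket
    import ssl
    from cryptography import x509
    from cryptography.hazmat.primitives.asymmetric import ec, rsa

    ENDPOINTS = ["example.com", "example.org"]    # illustrative hostnames only

    def inspect_endpoint(host: str, port: int = 443) -> str:
        # Fetch the server certificate and report its public-key algorithm.
        context = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with context.wrap_socket(sock, server_hostname=host) as tls:
                der_cert = tls.getpeercert(binary_form=True)
        cert = x509.load_der_x509_certificate(der_cert)
        key = cert.public_key()
        if isinstance(key, rsa.RSAPublicKey):
            return f"{host}: RSA-{key.key_size} (quantum-vulnerable, plan PQC migration)"
        if isinstance(key, ec.EllipticCurvePublicKey):
            return f"{host}: ECC {key.curve.name} (quantum-vulnerable, plan PQC migration)"
        return f"{host}: {type(key).__name__}"

    for endpoint in ENDPOINTS:
        print(inspect_endpoint(endpoint))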

The Journey Continues: Embrace the Opportunities

The path forward isn’t paved with yellow bricks, but with lines of code, cutting-edge algorithms, and unwavering collaboration. While the challenges may seem daunting, the potential rewards are truly transformative. Here’s how we can embrace the opportunities:

  • Investing in the Future: Continued research and development are crucial. Funding for GenAI development and PQC integration is essential to ensure the accuracy and efficiency of these technologies.
  • Building a Collaborative Ecosystem: Fostering collaboration between researchers, developers, and policymakers is vital. Open-source platforms and knowledge-sharing initiatives will accelerate progress and innovation.
  • Equipping the Workforce: Education and training programs are necessary to equip the workforce with the skills needed to harness the power of GenAI and PQC. This will ensure a smooth transition and maximize the potential of these technologies.
  • A Proactive Approach to Security: Implementing PQC algorithms before cryptographically relevant quantum computers arrive is vital. A proactive approach minimizes the risk of the “Steal Now, Decrypt Later” strategy and safeguards sensitive data.

The convergence of GenAI and QC is not just a technological revolution, it’s a human one. It’s about harnessing our collective ingenuity to solve some of humanity’s most pressing challenges. By embracing the journey, with all its complexities and possibilities, we can pave the way for a golden future that is healthier, more secure, and brimming with innovation.

Source: cisco.com

Saturday, 13 April 2024

Maximize Managed Services: Cisco ThousandEyes Drives MSPs Towards Outstanding Client Experiences

Maximize Managed Services: Cisco ThousandEyes Drives MSPs Towards Outstanding Client Experiences

IT-related outages and performance issues can inflict significant financial and operational harm on businesses, especially in critical sectors such as finance, healthcare, and e-commerce. These disruptions not only impact productivity, potentially costing enterprises billions annually, but also degrade the end-user experience through delays and inaccessibility. Business applications are vital: their availability and performance directly influence stakeholders, operational continuity, and profitability. Resolving these issues is often complex and labor-intensive, and any system downtime or lapse in application performance can ultimately lead to long-term setbacks for an organization.

Typical Troubleshooting Scenario of IT Infrastructure Without ThousandEyes


As soon as an end-user reports an IT-related issue, whether it’s a service outage or slow application performance, the formidable challenge of locating and fixing the issue begins. It’s often like searching for a “needle in a haystack.” The troubleshooting journey to uncover the underlying cause of IT infrastructure issues typically unfolds with the following challenges:

  • Prolonged Troubleshooting and Finger Pointing – Organizations frequently encounter difficulties in addressing IT outages and performance problems due to limited network visibility and siloed IT teams. This situation fosters a blame culture and hinders collaboration, as teams focus more on debating the cause of issues than on fixing them, leading to inefficient use of time and resources in Incident Response efforts.
  • Limited End-to-End Visibility – Infrastructure and operations professionals struggle to gain a complete and clear understanding of the end-user experience due to the “black box” nature of the Internet and inadequate traditional monitoring tools. These tools often fail to provide detailed performance data across devices, applications, and the Internet, complicating IT teams’ efforts to pinpoint root causes of issues.
  • Inefficient Resource Allocation – Addressing outages and performance issues consumes significant time and diverts IT resources from strategic initiatives. In-house monitoring systems frequently produce false alerts, misallocating resources and impeding the IT team’s capacity to effectively maintain and optimize infrastructure performance.

ThousandEyes tackles these prevalent challenges by presenting an integrated solution with end-to-end visibility into both network infrastructure and application performance. This level of insight and actionable intelligence enables IT teams to work together more effectively, pinpoint and rectify issues faster, and optimize the deployment of their IT operations resources. The unparalleled capabilities of ThousandEyes set it apart from other platforms by greatly enhancing visibility into, and understanding of, the components within an environment.

Enhancing Managed Network Services for Client Success


ThousandEyes, a Digital Experience Assurance (DXA) platform, equips organizations with comprehensive insights into user experiences and application performance across the Internet, cloud services, and internal IT infrastructure, thereby streamlining the optimization of essential network-dependent services and applications. This platform can significantly expedite problem resolution and reduce the resources required to address common infrastructure problems by offering the following benefits:

  • Visibility – ThousandEyes provides MSPs with a holistic view that encompasses their clients’ internal networks as well as external networks, cloud services, and SaaS platforms. This end-to-end visibility allows MSPs to oversee and address issues throughout the entire digital supply chain, from core infrastructure to the application level. With this extensive coverage, MSPs are equipped to quickly locate the source of any issue across the network spectrum, thereby shortening the time required to identify problems.
  • Troubleshooting – ThousandEyes streamlines the troubleshooting process by swiftly pinpointing the root causes of infrastructure related issues, whether they occur within the enterprise network or are due to external factors like ISPs, cloud providers, or SaaS applications. The platform fosters collaboration among IT teams by providing a unified data set, which helps eliminate finger-pointing and accelerates problem-solving, thereby significantly reducing the time required to resolve issues.
  • Digital Experience Assurance – ThousandEyes conducts comprehensive performance monitoring by tracking key network metrics like latency, packet loss, and jitter (see the sketch after this list), along with application-level metrics that shed light on the user experience and the performance of web and API transactions. Additionally, the platform enhances DXA by simulating user transactions and scrutinizing the data pathways to end-users, ensuring that both customers and employees have effective access to business applications.
  • Alerting and Reporting – ThousandEyes enhances proactive IT management by providing intelligent alerting and comprehensive reporting. Users are notified of performance degradation in real time and can access historical data and trend analysis for informed decision-making. This proactive alerting capability allows IT teams to identify and address anomalies early, reducing the frequency and severity of incidents and thereby minimizing their impact.
  • Optimization – Organizations can optimize network performance and enhance user experience by leveraging insights from both historical and real-time data on application and service delivery paths. This comprehensive understanding enables informed decision-making that not only addresses current performance issues but also helps prevent future ones, ultimately conserving time and resources.
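
To ground those metrics, here is a small, self-contained Python sketch (not ThousandEyes code) that computes latency, jitter, and packet loss from a set of round-trip-time probes; the probe values are invented for illustration. Monitoring platforms compute the same statistics continuously from real measurements taken by their agents.

    from statistics import mean

    # Simulated round-trip times in milliseconds; None represents a lost probe.
    rtt_samples = [21.4, 23.1, None, 22.8, 25.0, 21.9, None, 24.2]

    received = [s for s in rtt_samples if s is not None]

    latency_ms = mean(received)                       # average round-trip time
    packet_loss_pct = 100 * (len(rtt_samples) - len(received)) / len(rtt_samples)
    # Jitter as the mean absolute difference between consecutive received probes
    # (a simplified take on the RFC 3550 interarrival-jitter idea).
    jitter_ms = mean(abs(a - b) for a, b in zip(received, received[1:]))

    print(f"latency:     {latency_ms:.1f} ms")
    print(f"jitter:      {jitter_ms:.1f} ms")
    print(f"packet loss: {packet_loss_pct:.1f} %")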

ThousandEyes enhances organizational capability to deliver high-quality digital services through valuable insights and analytics, which strengthen network management capabilities and facilitate more effective decision-making and issue resolution. Although the extent of benefits or efficiency gains varies across different organizations, users commonly report marked improvements after implementing ThousandEyes, with some noting up to a 75% faster resolution of network problems and fewer outages and performance issues. Customers have reported substantial reductions in troubleshooting times, with tasks that previously took hours or days being cut down to mere minutes, thanks to ThousandEyes.

ThousandEyes Enhances the Service Offerings of MSPs, Greatly Improving the Overall Experience


Managed Service Providers can enhance their clients’ network management and optimization by leveraging the following benefits of Cisco ThousandEyes:

  • Improved Service Level Agreements (SLAs): With detailed insights into network performance and the ability to quickly identify and resolve issues, MSPs can better adhere to, or further enhance their SLAs. This helps in maintaining a high level of service and can distinguish the MSP’s offerings in a competitive market.
  • Proactive Problem Resolution: ThousandEyes’ alerting system can notify MSPs of potential issues before they affect end-users. This proactive approach minimizes downtime and can help MSPs address problems before clients are even aware of them.
  • Enhanced Customer Experience: By ensuring that applications and services are running smoothly, MSPs can contribute to a better end-user experience for their clients’ customers. This is particularly important for customer-facing applications where performance directly impacts revenue and brand reputation.
  • Efficient Troubleshooting: With the comprehensive network telemetry from ThousandEyes, MSPs can swiftly identify the root causes of issues, whether they stem from the client’s internal network, an ISP, or various cloud-based services. This capability decreases the average time required to resolve issues.
  • Data-Driven Decisions: The data collected by ThousandEyes can inform strategic decisions about network design, capacity planning, and performance optimization. MSPs can use this information to advise clients on how to improve their IT infrastructures.
  • Reporting and Communication: MSPs can use the detailed reports and visualizations provided by ThousandEyes to effectively communicate with clients about network health, ongoing issues, and resolved problems, enhancing transparency and trust.

ThousandEyes: Your Shortcut to Advanced Network Visibility


ThousandEyes simplifies deployment with its cloud-based SaaS model. It integrates smoothly into diverse environments using versatile agents, including the specialized Enterprise Agents (robust, dedicated monitoring nodes that provide deeper network insights). These Enterprise Agents can be deployed on-premises in data centers, within private clouds, or across public cloud platforms like AWS, Google Cloud, and Azure to monitor network and application performance. Additionally, ThousandEyes provides a browser extension for monitoring end-user experience. Its compatibility with Cisco and Meraki infrastructure streamlines integration, facilitating easy embedding into current deployments. The straightforward web management interface simplifies configuration, and the platform’s API accessibility supports automation, making ThousandEyes a highly adaptable and effortless choice for comprehensive network visibility.
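
As one illustration of that API-driven automation, the Python sketch below lists the tests configured in an account using the requests library. It assumes a v7-style REST endpoint and bearer-token authentication, and the response field names are assumptions as well; confirm paths and schemas against the current ThousandEyes API documentation before relying on them.

    import os
    import requests

    API_BASE = "https://api.thousandeyes.com/v7"      # assumed v7-style endpoint
    TOKEN = os.environ["THOUSANDEYES_TOKEN"]           # placeholder bearer token

    def list_tests():
        # Fetch the tests configured in the account (sketch, not production code).
        response = requests.get(
            f"{API_BASE}/tests",
            headers={"Authorization": f"Bearer {TOKEN}"},
            timeout=10,
        )
        response.raise_for_status()
        return response.json().get("tests", [])

    for test in list_tests():
        # Field names assumed; adjust to the documented response schema.
        print(test.get("type"), "-", test.get("testName"))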

MSPs Can Now Leverage Consumption-Based Licensing for ThousandEyes


In addition to traditional Enterprise Agreement licensing vehicles, ThousandEyes is now available through the Cisco Managed Services Licensing Agreement (MSLA), a program that was designed to meet the specific requirements of MSPs. This consumption-based licensing model is flexible and scalable, fitting the service-based business models of MSPs by allowing them to pay based on consumption. The MSLA program allows MSPs to adjust their ThousandEyes usage without complex contract changes, facilitating quick adaptation to evolving market demands.

MSPs and Their Clients Can Garner Significant Return on Investment


The integration of ThousandEyes by MSPs leads to a worthwhile ROI and an enhancement of their managed service offerings, providing benefits for both the providers and their clients. MSPs experience a marked improvement in their ability to offer advanced network visibility, comprehensive performance monitoring, and proactive issue resolution. These capabilities result in elevated service quality and increased customer satisfaction. End users reap the rewards of more reliable and efficient network services, experiencing fewer disruptions and thus less impact on their business operations. Moreover, the operational efficiencies introduced by ThousandEyes help reduce costs and free up valuable resources, enabling MSPs to focus more on business expansion and continued service improvement. In a time when digital transformation and dependency on Internet and cloud services are growing, having complete network visibility is essential. ThousandEyes is critical in this landscape, acting as a GPS for the digital world, offering insights and guidance for effective and efficient navigation.

Source: cisco.com