
Saturday, 3 August 2024

Unlock the Potential of AI/ML Workloads with Cisco Data Center Networks

Harnessing data is crucial for success in today’s data-driven world, and the surge in AI/ML workloads is accelerating the need for data centers that can deliver that data with operational simplicity. While 84% of companies think AI will have a significant impact on their business, just 14% of organizations worldwide say they are fully ready to integrate AI into their business, according to the Cisco AI Readiness Index.


The rapid adoption of large language models (LLMs) trained on huge data sets has introduced production environment management complexities. What’s needed is a data center strategy that embraces agility, elasticity, and cognitive intelligence capabilities for more performance and future sustainability.

Impact of AI on businesses and data centers


While AI continues to drive growth, reshape priorities, and accelerate operations, organizations often grapple with three key challenges:

◉ How do they modernize data center networks to handle evolving needs, particularly AI workloads?
◉ How can they scale infrastructure for AI/ML clusters with a sustainable paradigm?
◉ How can they ensure end-to-end visibility and security of the data center infrastructure?

Figure 1: Key network challenges for AI/ML requirements

While AI visibility and observability are essential for supporting AI/ML applications in production, challenges remain. There’s still no universal agreement on what metrics to monitor or what constitutes optimal monitoring practice. Furthermore, defining roles for monitoring and the best organizational models for ML deployments remain ongoing discussions for most organizations. With data and data centers everywhere, services such as IPsec are imperative in distributed data center environments, securing connectivity to colocation and edge sites and encrypting traffic between sites and clouds.

AI workloads, whether utilizing inferencing or retrieval-augmented generation (RAG), require distributed and edge data centers with robust infrastructure for processing, securing, and connectivity. For secure communications between multiple sites—whether private or public cloud—enabling encryption is key for GPU-to-GPU, application-to-application, or traditional workload to AI workload interactions. Advances in networking are warranted to meet this need.

Cisco’s AI/ML approach revolutionizes data center networking


At Cisco Live 2024, we announced several advancements in data center networking, particularly for AI/ML applications. These include the Cisco Nexus One Fabric Experience, which simplifies configuration, monitoring, and maintenance for all fabric types through a single control point, Cisco Nexus Dashboard. This solution streamlines management across diverse data center needs with unified policies, reducing complexity and improving security. Additionally, Nexus HyperFabric has expanded the Cisco Nexus portfolio with an easy-to-deploy as-a-service approach to augment our private cloud offering.

Figure 2: Why the time is now for AI/ML in enterprises

Nexus Dashboard consolidates services, creating a more user-friendly experience that streamlines software installation and upgrades while requiring fewer IT resources. It also serves as a comprehensive operations and automation platform for on-premises data center networks, offering valuable features such as network visualizations, faster deployments, switch-level energy management, and AI-powered root cause analysis for swift performance troubleshooting.

As new buildouts focused on supporting AI workloads and associated data trust domains continue to accelerate, much of the network focus has justifiably been on the physical infrastructure and the ability to build non-blocking, low-latency, lossless Ethernet. Ethernet’s ubiquity, component reliability, and superior cost economics will continue to lead the way with 800G and a roadmap to 1.6T.

Figure 3: Cisco’s AI/ML approach

By enabling the right congestion management mechanisms, telemetry capabilities, port speeds, and latency, operators can build out AI-focused clusters. Our customers are already telling us that the discussion is moving quickly towards fitting these clusters into their existing operating model to scale their management paradigm. That’s why it is essential to also innovate around simplifying the operator experience with new AIOps capabilities.
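To give a feel for the telemetry side of that buildout, the sketch below polls per-interface queuing and priority flow control counters from a Nexus switch over NX-API (JSON-RPC). It is a minimal sketch, assuming NX-API has been enabled on the switch; the management address, credentials, and interface name are placeholders, and the structure of the returned data varies by platform and NX-OS release.

```python
# Minimal sketch: poll queuing/PFC telemetry from a Nexus switch via NX-API (JSON-RPC).
# Assumes NX-API is enabled on the switch; host, credentials, and interface are placeholders.
import requests

SWITCH = "https://nexus-switch.example.com/ins"   # hypothetical management address
AUTH = ("admin", "password")                      # placeholder credentials


def run_cli(command: str) -> dict:
    """Send one show command to NX-API and return the parsed result body."""
    payload = [{
        "jsonrpc": "2.0",
        "method": "cli",
        "params": {"cmd": command, "version": 1},
        "id": 1,
    }]
    resp = requests.post(
        SWITCH,
        json=payload,
        headers={"content-type": "application/json-rpc"},
        auth=AUTH,
        verify=False,   # lab-only shortcut; use proper certificates in production
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["result"]["body"]


if __name__ == "__main__":
    # Output structure differs by platform and release; inspect the returned
    # dictionaries before alerting on specific counters (ECN marks, PFC pauses, drops).
    print(run_cli("show queuing interface ethernet1/1"))
    print(run_cli("show interface priority-flow-control"))
```

Fabric-wide tooling such as Nexus Dashboard collects this kind of telemetry at much larger scale, which is why the operational platform matters as much as the fabric itself.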

With our Cisco Validated Designs (CVDs), we offer preconfigured solutions optimized for AI/ML workloads to help ensure that the network meets the specific infrastructure requirements of AI/ML clusters, minimizing latency and packet drops for seamless dataflow and more efficient job completion.

Figure 4: Lossless network with Uniform Traffic Distribution

Cisco data center networks protect and connect both traditional workloads and new AI workloads in a single data center environment (edge, colocation, public or private cloud) that exceeds customer requirements for reliability, performance, operational simplicity, and sustainability. We are focused on delivering operational simplicity and networking innovations such as seamless local area network (LAN), storage area network (SAN), AI/ML, and Cisco IP Fabric for Media (IPFM) implementations. In turn, you can unlock new use cases and greater value creation.

Source: cisco.com

Thursday, 2 May 2024

Computing that’s purpose-built for a more energy-efficient, AI-driven future


Just as humans use patterns as mental shortcuts for solving complex problems, AI is about recognizing patterns to distill actionable insights. Now think about how this applies to the data center, where patterns have developed over decades. You have cycles where we use software to solve problems, then hardware innovations enable new software to focus on the next problem. The pendulum swings back and forth repeatedly, with each swing representing a disruptive technology that changes and redefines how we get work done with our developers and with data center infrastructure and operations teams.

AI is clearly the latest pendulum swing and disruptive technology that requires advancements in both hardware and software. GPUs are all the rage today due to the public debut of ChatGPT – but GPUs have been around for a long time. I was a GPU user back in the 1990s because these powerful chips enabled me to play 3D games that required fast processing to calculate things like where all those polygons should be in space, updating visuals fast with each frame.

In technical terms, GPUs can process many parallel floating-point operations faster than standard CPUs, and in large part that is their superpower. It’s worth noting that many AI workloads can be optimized to run on a high-performance CPU. But unlike the CPU, GPUs are free from the responsibility of making all the other subsystems within compute work with each other. Software developers and data scientists can leverage software like CUDA and its development tools to harness the power of GPUs and use all that parallel processing capability to solve some of the world’s most complex problems.

A new way to look at your AI needs


Unlike virtualization, a single use case with relatively uniform infrastructure needs, AI encompasses multiple patterns that come with different infrastructure needs in the data center. Organizations can think about AI use cases in terms of three main buckets:

1. Build the model, for large foundational training.
2. Optimize the model, for fine-tuning a pre-trained model with specific data sets.
3. Use the model, for inferencing insights from new data.

The least demanding workloads are optimize the model and use the model, because most of the work can be done in a single box with multiple GPUs. The most intensive, disruptive, and expensive workload is build the model. In general, if you’re looking to train these models at scale, you need an environment that can support many GPUs across many servers, networked together so that the individual GPUs behave as a single processing unit and solve highly complex problems faster.

This makes the network critical for training use cases and introduces all kinds of challenges to data center infrastructure and operations, especially if the underlying facility was not built for AI from inception. And most organizations today are not looking to build new data centers.
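To make that network dependency concrete, here is a minimal sketch of multi-node, data-parallel training in PyTorch, where the gradient all-reduce between GPUs on different servers rides over the data center fabric. The toy model and rendezvous settings are placeholders; in practice a launcher such as torchrun (or a scheduler like Slurm or Kubernetes) supplies the rank and master-address environment variables.

```python
# Minimal sketch: multi-node data-parallel training. The gradient all-reduce
# between GPUs on different servers traverses the data center network, which is
# why lossless, low-latency fabrics matter for "build the model" workloads.
# MASTER_ADDR/MASTER_PORT/RANK/WORLD_SIZE/LOCAL_RANK are normally set by torchrun.
import os

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP


def main() -> None:
    dist.init_process_group(backend="nccl")      # NCCL handles GPU-to-GPU collectives
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda(local_rank)   # stand-in for a real model
    ddp_model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    for _ in range(10):
        x = torch.randn(32, 1024, device=f"cuda:{local_rank}")
        loss = ddp_model(x).sum()
        optimizer.zero_grad()
        loss.backward()        # gradients are all-reduced across servers here
        optimizer.step()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Every backward pass in a sketch like this triggers collective traffic across every node in the cluster, so congestion, latency, or packet loss in the fabric shows up directly as longer job completion times.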

Therefore, organizations building out their AI data center strategies will have to answer important questions like:

  • What AI use cases do you need to support, and based on the business outcomes you need to deliver, where do they fall into the build the model, optimize the model, and use the model buckets?
  • Where is the data you need, and where is the best location to enable these use cases to optimize outcomes and minimize the costs?
  • Do you need to deliver more power? Are your facilities able to cool these types of workloads with existing methods or do you require new methods like water cooling?
  • Finally, what is the impact on your organization’s sustainability goals?

The power of Cisco Compute solutions for AI


As the general manager and senior vice president for Cisco’s compute business, I’m happy to say that Cisco UCS servers are designed for demanding use cases like AI fine-tuning and inferencing, VDI, and many others. With its future-ready, highly modular architecture, Cisco UCS empowers our customers with a blend of high-performance CPUs, optional GPU acceleration, and software-defined automation. This translates to efficient resource allocation for diverse workloads and streamlined management through Cisco Intersight. You can say that with UCS, you get the muscle to power your creativity and the brains to optimize its use for groundbreaking AI use cases.

But Cisco is one player in a wide ecosystem. Technology and solution partners have long been a key to our success, and this is certainly no different in our strategy for AI. This strategy revolves around driving maximum customer value to harness the full long-term potential behind each partnership, which enables us to combine the best of compute and networking with the best tools in AI.

This is the case in our strategic partnerships with NVIDIA, Intel, AMD, Red Hat, and others. One key deliverable has been the steady stream of Cisco Validated Designs (CVDs) that provide pre-configured solution blueprints that simplify integrating AI workloads into existing IT infrastructure. CVDs eliminate the need for our customers to build their AI infrastructure from scratch. This translates to faster deployment times and reduced risks associated with complex infrastructure configurations and deployments.


Another key pillar of our AI computing strategy is offering customers a diversity of solution options that include standalone blade and rack-based servers, converged infrastructure, and hyperconverged infrastructure (HCI). These options enable customers to address a variety of use cases and deployment domains throughout their hybrid multicloud environments – from centralized data centers to edge endpoints. Here are just a couple of examples:

  • Converged infrastructures with partners like NetApp and Pure Storage offer a strong foundation for the full lifecycle of AI development from training AI models to day-to-day operations of AI workloads in production environments. For highly demanding AI use cases like scientific research or complex financial simulations, our converged infrastructures can be customized and upgraded to provide the scalability and flexibility needed to handle these computationally intensive workloads efficiently.
  • We also offer an HCI option through our strategic partnership with Nutanix that is well-suited for hybrid and multi-cloud environments through the cloud-native designs of Nutanix solutions. This allows our customers to seamlessly extend their AI workloads across on-premises infrastructure and public cloud resources, for optimal performance and cost efficiency. This solution is also ideal for edge deployments, where real-time data processing is crucial.

AI Infrastructure with sustainability in mind 


Cisco’s engineering teams are focused on embedding energy management, software and hardware sustainability, and business model transformation into everything we do. Together with energy optimization, these new innovations will have the potential to help more customers accelerate their sustainability goals.

Working in tandem with engineering teams across Cisco, Denise Lee leads Cisco’s Engineering Sustainability Office with a mission to deliver more sustainable products and solutions to our customers and partners. With electricity usage from data centers, AI, and the cryptocurrency sector potentially doubling by 2026, according to a recent International Energy Agency report, we are at a pivotal moment where AI, data centers, and energy efficiency must come together. AI data center ecosystems must be designed with sustainability in mind. Denise outlined the systems design thinking that highlights the opportunities for data center energy efficiency across performance, cooling, and power in her recent blog, Reimagine Your Data Center for Responsible AI Deployments.

Recognition for Cisco’s efforts has already begun. Cisco’s UCS X-Series received a Sustainable Product of the Year award from the SEAL Awards and an Energy Star rating from the U.S. Environmental Protection Agency. And Cisco continues to focus on critical features in our portfolio through agreement on product sustainability requirements to address the demands on data centers in the years ahead.

Look ahead to Cisco Live


We are just a couple of months away from Cisco Live US, our premier customer event and showcase for the many different and exciting innovations from Cisco and our technology and solution partners. We will be sharing many exciting Cisco Compute solutions for AI and other use cases. Our Sustainability Zone will feature a virtual tour through a modernized Cisco data center where you can learn about Cisco compute technologies and their sustainability benefits. I’ll share more details in my next blog closer to the event.

Source: cisco.com

Saturday, 27 April 2024

Experience Eco-Friendly Data Center Efficiency with Cisco’s Unified Computing System (UCS)


In the highly dynamic and ever-evolving world of enterprise computing, data centers serve as the backbones of operations, driving the need for powerful, scalable, and energy-efficient server solutions. As businesses continuously strive to refine their IT ecosystems, recognizing and capitalizing on data center energy-saving attributes and design innovations is essential for fostering sustainable development and maximizing operational efficiency and effectiveness.

Cisco’s Unified Computing System (UCS) stands at the forefront of this technological landscape, offering a comprehensive portfolio of server options tailored to meet the most diverse of requirements. Each component of the UCS family, including the B-Series, C-Series, HyperFlex, and X-Series, is designed with energy efficiency in mind, delivering performance while mitigating energy use. Energy efficiency is a major consideration from the earliest planning and design phases of these technologies and products through every subsequent update.


The UCS Blade Servers and Chassis (B-Series) provide a harmonious blend of integration and dense computing power, while the UCS Rack-Mount Servers (C-Series) offer versatility and incremental scalability. These offerings are complemented by Cisco’s UCS HyperFlex Systems, the next-generation of hyper-converged infrastructure that brings compute, storage, and networking into a cohesive, highly efficient platform. Furthermore, the UCS X-Series takes flexibility and efficiency to new heights with its modular, future-proof architecture.

Cisco UCS B-Series Blade Chassis and Servers

The Cisco UCS B-Series Blade Chassis and Servers offer several features and design elements that contribute to greater energy efficiency compared to traditional blade server chassis. The following components and functions of UCS contribute to this efficiency:

1. Unified Design: Cisco UCS incorporates a unified system that integrates computing, networking, storage access, and virtualization resources into a single, cohesive architecture. This integration reduces the number of physical components needed, leading to lower power consumption compared to traditional setups where these elements are usually separate and require additional power.

2. Power Management: UCS includes sophisticated power management capabilities at both the hardware and software levels. This enables dynamic power allocation based on workload demands, allowing unused resources to be powered down or put into a low-power state. Adjusting power usage according to actual requirements minimizes energy waste.

3. Efficient Cooling: The blade servers and chassis are designed to optimize airflow and cooling efficiency. This reduces the need for excessive cooling, which can be a significant contributor to energy consumption in data centers. By efficiently managing airflow and cooling, Cisco UCS helps minimize the overall energy required for server operation.

4. Higher Density: UCS Blade Series Chassis typically support higher server densities compared to traditional blade server chassis. By consolidating more computing power into a smaller physical footprint, organizations can achieve greater efficiency in terms of space utilization, power consumption, and cooling requirements.

5. Virtualization Support: Cisco UCS is designed to work seamlessly with virtualization technologies such as VMware, Microsoft Hyper-V, and others. Virtualization allows for better utilization of server resources by running multiple virtual machines (VMs) on a single physical server. This consolidation reduces the total number of servers needed, thereby lowering energy consumption across the data center.

6. Power Capping and Monitoring: UCS provides features for power capping and monitoring, allowing administrators to set maximum power limits for individual servers or groups of servers. This helps prevent power spikes and ensures that power usage remains within predefined thresholds, thus optimizing energy efficiency (see the sketch after this list).

7. Efficient Hardware Components: UCS incorporates energy-efficient hardware components such as processors, memory modules, and power supplies. These components are designed to deliver high performance while minimizing power consumption, contributing to overall energy efficiency.
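As a hedged illustration of the power capping and monitoring capability in item 6, the sketch below reads the current power draw and applies a power limit through the standard DMTF Redfish interface that Cisco IMC exposes on UCS servers. The host, credentials, chassis ID, and the 300 W limit are placeholders, and the exact resource paths and writable properties should be verified against the Redfish implementation of the specific server model and firmware release.

```python
# Hedged sketch: read power draw and apply a power cap via the standard DMTF
# Redfish Power resource. Host, credentials, chassis ID, and the limit value are
# placeholders; verify the paths and writable properties on the target firmware.
import requests

BASE = "https://imc.example.com"      # hypothetical IMC/BMC address
AUTH = ("admin", "password")          # placeholder credentials
CHASSIS = "1"                         # chassis ID varies by platform


def get_power_draw() -> float:
    """Return the reported power consumption in watts (schema-dependent)."""
    r = requests.get(f"{BASE}/redfish/v1/Chassis/{CHASSIS}/Power",
                     auth=AUTH, verify=False, timeout=10)
    r.raise_for_status()
    return r.json()["PowerControl"][0]["PowerConsumedWatts"]


def set_power_limit(limit_watts: int) -> None:
    """PATCH a power limit, assuming PowerLimit.LimitInWatts is writable here."""
    body = {"PowerControl": [{"PowerLimit": {"LimitInWatts": limit_watts}}]}
    r = requests.patch(f"{BASE}/redfish/v1/Chassis/{CHASSIS}/Power",
                       json=body, auth=AUTH, verify=False, timeout=10)
    r.raise_for_status()


if __name__ == "__main__":
    print("Current draw (W):", get_power_draw())
    set_power_limit(300)   # illustrative cap only; size limits to real workload needs
```

In a UCS deployment these limits would more typically be set through UCS Manager or Intersight power policies; the Redfish view is shown here only as a vendor-neutral way to see what a cap looks like at the server level.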

Cisco UCS Blade Series Chassis and Servers facilitate greater energy efficiency through a combination of unified design, power management capabilities, efficient cooling, higher physical density, support for virtualization, and the use of energy-efficient hardware components. By leveraging these features, organizations can reduce their overall energy consumption and operational costs in the data center.

Cisco UCS C-Series Rack Servers

Cisco UCS C-Series Rack Servers are standalone servers that tend to be more flexible in terms of deployment and may be easier to cool individually. They are often more efficient in environments where fewer servers are required or when full utilization of a blade chassis is not possible. In such cases, deploying a few rack servers can be more energy-efficient than powering a partially empty blade chassis.

The Cisco UCS Rack Servers, like the Blade Series, have been designed with energy efficiency in mind. The following aspects contribute to the energy efficiency of UCS Rack Servers:

1. Modular Design: UCS Rack Servers are built with a modular design that allows for easy expansion and servicing. This means that components can be added or replaced as needed without unnecessarily wasting resources.

2. Component Efficiency: Like the Blade Series, UCS Rack Servers use high-efficiency power supplies, voltage regulators, and cooling fans. These components are chosen for their ability to deliver performance while minimizing energy consumption.

3. Thermal Design: The physical design of the UCS Rack Servers helps to optimize airflow, which can reduce the need for excessive cooling. Proper thermal management ensures that the servers maintain an optimal operating temperature, which contributes to energy savings.

4. Advanced CPUs: UCS Rack Servers are equipped with the latest processors that offer a balance between performance and power usage. These CPUs often include features that reduce power consumption when full performance is not required.

5. Energy Star Certification: Many UCS Rack Servers are Energy Star certified, meaning they meet strict energy efficiency guidelines set by the U.S. Environmental Protection Agency.

6. Management Software: Cisco’s management software allows for detailed monitoring and control of power usage across UCS Rack Servers. This software can help identify underutilized resources and optimize power settings based on the workload.

Cisco UCS Rack Servers are designed with energy efficiency as a core principle. They feature a modular design that enables easy expansion and servicing, high-efficiency components such as power supplies and cooling fans, and processors that balance performance with power consumption. The thermal design of these rack servers optimizes airflow, contributing to reduced cooling needs.

Additionally, many UCS Rack Servers have earned Energy Star certification, indicating compliance with stringent energy efficiency guidelines. Management software further enhances energy savings by allowing detailed monitoring and control over power usage, ensuring that resources are optimized according to workload demands. These factors make UCS Rack Servers a suitable choice for data centers focused on minimizing energy consumption while maintaining high performance.

Cisco UCS S-Series Storage Servers

The Cisco UCS S-Series servers are engineered to offer high-density storage solutions with scalability, which leads to considerable energy efficiency benefits when compared to the UCS B-Series blade servers and C-Series rack servers. The B-Series focuses on optimizing compute density and network integration in a blade server form factor, while the C-Series provides versatile rack-mount server solutions. In contrast, the S-Series emphasizes storage density and capacity.

Each series has its unique design optimizations; however, the S-Series can often consolidate storage and compute resources more effectively, potentially reducing the overall energy footprint by minimizing the need for additional servers and standalone storage units. This consolidation is a key factor in achieving greater energy efficiency within data centers.

The UCS S-Series servers incorporate the following features that contribute to energy efficiency:

  1. Efficient Hardware Components: Similar to other Cisco UCS servers, the UCS S-Series servers utilize energy-efficient hardware components such as processors, memory modules, and power supplies. These components are designed to provide high performance while minimizing power consumption, thereby improving energy efficiency.
  2. Scalability and Flexibility: S-Series servers are highly scalable and offer flexible configurations to meet diverse workload requirements. This scalability allows engineers to right-size their infrastructure and avoid over-provisioning, which often leads to wasteful energy consumption.
  3. Storage Optimization: UCS S-Series servers are optimized for storage-intensive workloads by offering high-density storage options within a compact form factor. With consolidated storage resources via fewer physical devices, organizations can reduce power consumption associated with managing and powering multiple storage systems.
  4. Power Management Features: S-Series servers incorporate power management features similar to other UCS servers, allowing administrators to monitor and control power usage at both the server and chassis levels. These features enable organizations to optimize power consumption based on workload demands, reducing energy waste.
  5. Unified Management: UCS S-Series servers are part of the Cisco Unified Computing System, which provides unified management capabilities for the entire infrastructure, including compute, storage, and networking components. This centralized management approach helps administrators efficiently monitor and optimize energy usage across the data center.

Cisco UCS HyperFlex HX-Series Servers

The Cisco HyperFlex HX-Series represents a fully integrated and hyperconverged infrastructure system that combines computing, storage, and networking into a simplified, scalable, and high-performance architecture designed to handle a wide array of workloads and applications.

When it comes to energy efficiency, the HyperFlex HX-Series stands out by further consolidating data center functions and streamlining resource management compared to the traditional UCS B-Series, C-Series, and S-Series. Unlike the B-Series blade servers which prioritize compute density, the C-Series rack servers which offer flexibility, or the S-Series storage servers which focus on high-density storage, the HX-Series incorporates all of these aspects into a cohesive unit. By doing so, it reduces the need for separate storage and compute layers, leading to potentially lower power and cooling requirements.

The integration inherent in hyperconverged infrastructure, such as the HX-Series, often results in higher efficiency and a smaller energy footprint as it reduces the number of physical components required, maximizes resource utilization, and optimizes workload distribution; all of this contributes to a more energy-conscious data center environment.

The HyperFlex can contribute to energy efficiency in the following ways:

  1. Consolidation of Resources: HyperFlex integrates compute, storage, and networking resources into a single platform, eliminating the need for separate hardware components such as standalone servers, storage arrays, and networking switches. By consolidating these resources, organizations can reduce overall power consumption when compared to traditional infrastructure setups that require separate instances of these components.
  2. Efficient Hardware Components: HyperFlex HX-Series Servers are designed to incorporate energy-efficient hardware components such as processors, memory modules, and power supplies. These components are optimized for performance per watt, helping to minimize power consumption while delivering robust compute and storage capabilities.
  3. Dynamic Resource Allocation: HyperFlex platforms often include features for dynamic resource allocation and optimization. This may include technologies such as VMware Distributed Resource Scheduler (DRS) or Cisco Intersight Workload Optimizer, which intelligently distribute workloads across the infrastructure to maximize resource utilization and minimize energy waste.
  4. Software-Defined Storage Efficiency: HyperFlex utilizes software-defined storage (SDS) technology, which allows for more efficient use of storage resources compared to traditional storage solutions. Features such as deduplication, compression, and thin provisioning help to reduce the overall storage footprint, resulting in lower power consumption associated with storage devices.
  5. Integrated Management and Automation: HyperFlex platforms typically include centralized management and automation capabilities that enable administrators to efficiently monitor and control the entire infrastructure from a single interface. This integrated management approach can streamline operations, optimize resource usage, and identify opportunities for energy savings.
  6. Scalability and Right-Sizing: HyperFlex allows organizations to scale resources incrementally by adding server nodes to the cluster as needed. This scalability enables organizations to right-size their infrastructure and avoid over-provisioning, which can lead to unnecessary energy consumption.
  7. Efficient Cooling Design: HyperFlex systems are designed with careful attention to cooling efficiency to maintain optimal operating temperatures for the hardware components. By optimizing airflow and cooling mechanisms within the infrastructure, HyperFlex helps minimize energy consumption associated with cooling systems.

Cisco UCS X-Series Modular System

The Cisco UCS X-Series is a versatile and innovative computing platform that elevates the concept of a modular system to new heights, offering a flexible, future-ready solution for the modern data center. It stands apart from the traditional UCS B-Series blade servers, C-Series rack servers, S-Series storage servers, and even the integrated HyperFlex HX-Series hyperconverged systems, in that it provides a unique blend of adaptability and scalability. The X-Series is designed with a composable infrastructure that allows dynamic reconfiguration of computing, storage, and I/O resources to match specific workload requirements.

In terms of energy efficiency, the UCS X-Series is engineered to streamline power usage by dynamically adapting to the demands of various applications. It achieves this through a technology that allows components to be powered on and off independently, which can lead to significant energy savings compared to the always-on nature of B-Series and C-Series servers. While the S-Series servers are optimized for high-density storage, the X-Series can reduce the need for separate high-capacity storage systems by incorporating storage elements directly into its composable framework. Furthermore, compared to the HyperFlex HX-Series, the UCS X-Series may offer even more granular control over resource allocation, potentially leading to even better energy management and waste reduction.

The UCS X-Series platform aims to set a new standard for sustainability by optimizing power consumption across diverse workloads, minimizing environmental impact, and lowering the total cost of ownership (TCO) through improved energy efficiency. By intelligently consolidating and optimizing resources, the X-Series has proven to be a forward-looking solution that responds to the growing need for eco-friendly and cost-effective data center operations.

The Cisco UCS X-Series can contribute to energy efficiency in the following ways:

  1. Integrated Architecture: Cisco UCS X-Series combines compute, storage, and networking into a unified system, reducing the need for separate components. This consolidation leads to lower overall energy consumption compared to traditional data center architectures.
  2. Energy-Efficient Components: The UCS X-Series is built with the latest energy-efficient technologies: CPUs, memory modules, and power supplies in the X-Series are selected for their performance-to-power-consumption ratio, ensuring that energy use is optimized without sacrificing performance.
  3. Intelligent Workload Placement: Cisco UCS X-Series can utilize Cisco Intersight and other intelligent resource management tools to distribute workloads intelligently and efficiently across available resources, optimizing power usage and reducing unnecessary energy expenditure.
  4. Software-Defined Storage Benefits: The X-Series can leverage software-defined storage which often includes features like deduplication, compression, and thin provisioning to make storage operations more efficient and reduce the energy needed for data storage.
  5. Automated Management: With Cisco Intersight, the X-Series provides automated management and orchestration across the infrastructure, helping to streamline operations, reduce manual intervention, and cut down on energy usage through improved allocation of resources.
  6. Scalable Infrastructure: The modular design of the UCS X-Series allows for easy scalability, thus allowing organizations to add resources only as needed. This helps prevent over-provisioning and the energy costs associated with idle equipment.
  7. Optimized Cooling: The X-Series chassis is designed with cooling efficiency in mind, using advanced airflow management and heat sinks to keep components at optimal temperatures. This reduces the amount of energy needed for cooling infrastructure.

Mindful energy consumption without compromise


Cisco’s UCS offers a robust and diverse suite of server solutions, each engineered to address the specific demands of modern-day data centers with a sharp focus on energy efficiency. The UCS B-Series and C-Series each bring distinct advantages in terms of integration, computing density, and flexible scalability, while the S-Series specializes in high-density storage capabilities. The HyperFlex HX-Series advances the convergence of compute, storage, and networking, streamlining data center operations and energy consumption. Finally, the UCS X-Series represents the pinnacle of modularity and future-proof design, delivering unparalleled flexibility to dynamically meet the shifting demands of enterprise workloads.

Across this entire portfolio, from the B-Series to the X-Series, Cisco has infused an ethos of sustainability, incorporating energy-efficient hardware, advanced power management, and intelligent cooling designs. By optimizing the use of resources, embracing virtualization, and enabling scalable, granular infrastructure deployments, Cisco’s UCS platforms are not just powerful computing solutions but are also catalysts for energy-conscious, cost-effective, and environmentally responsible data center operations.

For organizations navigating the complexities of digital transformation while balancing operational efficiency with the goal of sustainability, the Cisco UCS lineup stands ready to deliver performance that powers growth without compromising on our commitment to a greener future.


Tuesday, 9 May 2023

Disaster Recovery Solutions for the Edge with HyperFlex and Cohesity

The edge computing architecture comes with a variety of benefits. Placing compute, storage, and network resources close to the location where data is generated typically improves response times and may reduce WAN traffic between an edge site and the central data center. That said, the distributed nature of edge site architectures also introduces several challenges related to data protection and disaster recovery. One requirement is performing local backups with the ability to conduct local recovery operations. Another formidable challenge is edge site disaster recovery. Planning for the inevitable edge site outage, be it temporary, extended, or permanent, is the problem this blog takes a deeper look into.


Business continuity planning focuses on measures such as Recovery Point Objective (RPO) and Recovery Time Objective (RTO): RPO bounds how much data can be lost, while RTO bounds how long recovery can take. These measurements are generally expressed in terms of a Service Level Agreement (SLA). Under the covers sits a collection of infrastructure building blocks that makes adherence to an SLA possible. In simple terms, the building blocks include the ability to perform backups, the ability to create additional copies of backups, a methodology for transporting backup copies to remote locations (replication), an intuitive management interface, and connectivity to a preconfigured recovery infrastructure.

From an operational standpoint, an edge site disaster recovery solution includes workflows that enable the ability to:

◉ Perform workload failover from an edge site to a central site.
◉ Protect failed over workload at a central site.
◉ Reverse replicate protected workloads from a central site back to an edge site at the point where the edge site is ready to receive inbound replication traffic.
◉ Failover again such that the edge site once again hosts production workloads.
◉ Test these operations without impacting production workloads.

Should an edge site failure or outage occur, workload failover to a disaster recovery site may become necessary. (Quite obviously, disaster recovery operations should be tested on an ongoing basis rather than just hoping things will work.) Once workload failover has completed successfully, the failed-over workload requires data protection. Once the edge site has been returned to an operational state, backup copies should be replicated back to the edge site. Alternatively, a new or different edge site may replace the original edge site. At some point, the workload will transition from the central site back to the edge site.

HyperFlex with Cohesity DataProtect


Cohesity provides a number of DataProtect solutions to assist users in meeting data protection and disaster recovery business requirements. The Cohesity DataProtect product is available as a Virtual Edition and can be deployed as a single virtual machine hosted on a HyperFlex Edge cluster. A predefined small or large configuration is available for selection when the product is installed. The Cohesity DataProtect solution is also available in a ROBO Edition, running on a single Cisco UCS server.

Cohesity DataProtect edge solutions provide local protection of virtual machine workloads and can also replicate local backups to a larger centralized Cohesity cluster deployed on Cisco UCS servers.


Cohesity protection groups are configured to define the workloads to protect. Protection groups also include a policy that defines the frequency and retention period for local backups. The policy also defines a replication destination, replication frequency, and the retention period for replicated backups.

In summary, Cisco HyperFlex with Cohesity DataProtect has built-in workflows that enable easy workload failover and failover testing. At the point where reverse replication can be initiated, a simple policy modification is all that is required. Cohesity also features Helios, a centralized management facility that enables the entire solution to be managed from a single web-based console.

Source: cisco.com

Tuesday, 7 March 2023

ACI Segmentation and Migrations made easier with Endpoint Security Groups (ESG)

Let’s open with a question: “How are you handling security and segmentation requirements in your Cisco Application Centric Infrastructure (ACI) fabric?”

I expect most answers will relate to constructs of Endpoint Groups (EPGs), contracts and filters.  These concepts are the foundations of ACI. But when it comes to any infrastructure capabilities, designs and customers’ requirements are constantly evolving, often leading to new segmentation challenges. That is why I would like to introduce a relatively recent, powerful option called Endpoint Security Groups (ESGs). Although ESGs were introduced in Cisco ACI a while back (version 5.0(1) released in May 2020), there is still ample opportunity to spread this functionality to a broader audience.

For those who have not explored the topic yet, ESGs offer an alternate way of handling segmentation with the added flexibility of decoupling this from the earlier concepts of forwarding and security associated with Endpoint Groups. This is to say that ESGs handle segmentation separately from the forwarding aspects, allowing more flexibility and possibility with each.

EPG and ESG – Highlights and Differences


The easiest way to manage endpoints with common security requirements is to put them into groups and control communication between them. In ACI, these groups have traditionally been represented by EPGs. Contracts attached to EPGs are used for controlling communication and other policies between groups with different postures. Although an EPG primarily provides network security, it must be married to a single bridge domain. This is because EPGs define both forwarding policy and security segmentation simultaneously. This direct relationship between a Bridge Domain (BD) and an EPG prevents an EPG from spanning more than one bridge domain. This design constraint is alleviated by ESGs. With ESGs, networking (i.e., forwarding policy) happens at the EPG/BD level, and security enforcement is moved to the ESG level.

Operationally, the ESG concept is similar to, and more straightforward than, the original EPG approach. Just as with EPGs, communication is allowed among any endpoints within the same group, but in the case of ESGs, this is independent of the subnet or BD they are associated with. For communication between different ESGs, we need contracts. That sounds familiar, doesn’t it? ESGs use the same contract constructs we have been using in ACI since inception.

So, what are the benefits of ESGs then? In a nutshell, where EPGs are bound to a single BD, ESGs allow you to define a security policy that spans across multiple BDs. This is to say you can group and apply policy to any number of endpoints across any number of BDs under a given VRF.  At the same time, ESGs decouple the forwarding policy, which allows you to do things like VRF route leaking in a much more simple and more intuitive manner.

ESG. A Simple Use Case Example


To give an example of where ESGs could be useful, consider a brownfield ACI deployment that has been in operation for years. Over time things tend to grow organically. You might find you have created more and more EPG/BD combinations but later realize that many of these EPGs actually share the same security profile. With EPGs, you would be deploying and consuming more contract resources to achieve what you want, plus potentially adding to your management burden with more objects to keep an eye on. With ESGs, you can now simply group all these brownfield EPGs and their endpoints and apply the common security policies only once. What is important is you can do this without changing anything having to do with IP addressing or BD settings they are using to communicate.

So how do I assign an endpoint to an ESG? You do this with a series of matching criteria. In the first release of ESGs, you were limited in the kinds of matching criteria. Starting from ACI 5.2(1), we have expanded matching criteria to provide more flexibility for endpoint classification and ease for the user. Among them: Tag Selectors (based on MAC, IP, VM tag, subnet), whole EPG Selectors, and IP Subnet Selectors. All the details about different selectors can be found here: https://www.cisco.com/c/en/us/td/docs/dcn/aci/apic/6x/security-configuration/cisco-apic-security-configuration-guide-60x/endpoint-security-groups-60x.html.
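To make the selector idea concrete, here is a hedged sketch that creates an ESG with a tag selector and an IP subnet selector through the APIC REST API. The APIC address, credentials, tenant, and application profile names are placeholders, and the class and attribute names in the payload (fvESg, fvTagSelector, fvEPSelector) are assumptions written from memory; verify them against the object model for your ACI release, for example by saving a GUI-created ESG as JSON.

```python
# Hedged sketch: create an ESG with a tag selector and an IP subnet selector via
# the APIC REST API. The APIC address, credentials, and names are placeholders;
# the managed-object class and attribute names below are assumptions that should
# be verified against the object model of your ACI release.
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False   # lab-only shortcut; use valid certificates in production

# 1. Authenticate; the session keeps the returned APIC cookie for later calls.
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login, timeout=10).raise_for_status()

# 2. Post the ESG and its selectors under an existing tenant/application profile.
esg_payload = {
    "fvESg": {                                   # assumed ESG class name
        "attributes": {"name": "web-esg"},
        "children": [
            # match endpoints carrying a policy tag key/value (assumed class)
            {"fvTagSelector": {"attributes": {"matchKey": "role",
                                              "matchValue": "web"}}},
            # match endpoints by IP subnet (assumed class and expression syntax)
            {"fvEPSelector": {"attributes": {
                "matchExpression": "ip=='10.10.10.0/24'"}}},
        ],
    }
}
resp = session.post(
    f"{APIC}/api/mo/uni/tn-Example-Tenant/ap-Example-AP.json",   # placeholder DN
    json=esg_payload, timeout=10)
resp.raise_for_status()
print(resp.json())
```

Because the selectors reference tags and subnets rather than bridge domains, the same ESG can pick up endpoints spread across any number of BDs in the VRF.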

EPG to ESG Migration Simplified


In cases where your infrastructure is already diligently segmented with EPGs and contracts that reflect application tier dependencies, ESGs are designed to let you migrate your policy with just a little effort.

The first question that probably comes to mind is how to achieve this. With the EPG Selector, one of the new methods of classifying endpoints into ESGs, we enable a seamless migration to the new grouping concept by inheriting contracts from the EPG level. This is an easy way to quickly move all the endpoints within one or more EPGs into your new ESGs.

For a better understanding, let’s evaluate the example below (see Figure 1). We have a simple two-EPG setup that we will migrate to ESGs. Currently, the communication between them is achieved with contract Ctr-1.

High-level migration steps are as follows:

1. Migrate EPG 1 to ESG 1
2. Migrate EPG 2 to ESG 2
3. Replace the existing contract with the one applied between newly created ESGs.

Figure 1 – Two EPGs with the contract in place

The first step is to create a new ESG 1 in which EPG 1 is matched using the EPG Selector. This means that all endpoints that belong to this EPG become part of the newly created ESG all at once. These endpoints still communicate with the other EPG(s) because of automatic contract inheritance (note: you cannot configure an explicit contract between an ESG and an EPG).
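A hedged sketch of what this first step could look like through the APIC REST API follows; it mirrors the structure of the earlier selector example. The distinguished names are placeholders, and the fvEPgSelector class and matchEpgDn attribute are assumptions to verify against your APIC release.

```python
# Hedged sketch: migrate EPG 1 into ESG 1 with an EPG selector so that existing
# contracts are inherited automatically. All DNs are placeholders; the class and
# attribute names (fvESg, fvEPgSelector, matchEpgDn) are assumptions to verify.
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False   # lab-only shortcut; use valid certificates in production
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login, timeout=10).raise_for_status()

esg1 = {
    "fvESg": {
        "attributes": {"name": "ESG-1"},
        "children": [
            {"fvEPgSelector": {"attributes": {
                # full DN of the EPG being migrated (placeholder tenant/AP/EPG names)
                "matchEpgDn": "uni/tn-Example-Tenant/ap-Example-AP/epg-EPG-1"}}}
        ],
    }
}
session.post(f"{APIC}/api/mo/uni/tn-Example-Tenant/ap-Example-AP.json",
             json=esg1, timeout=10).raise_for_status()
```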

This state, depicted in Figure 2, is considered an intermediate step of the migration, which the APIC reports with fault F3602 until you migrate the outstanding EPG(s) and contracts. This fault is a way to encourage you to continue with the migration process so that all security configurations are maintained by ESGs. This keeps the configuration and design simple and maintainable. However, you do not have to do it all at once; you can progress according to your project schedule.

Figure 2 – Interim migration step

As the next step, with the EPG Selector, you migrate EPG 2 to ESG 2. Keep in mind that nothing stands in the way of placing other EPGs into the same ESG (even if these EPGs refer to different BDs). Communication between ESGs is still allowed through contract inheritance.

To complete the migration, as a final step, configure a new contract with the same filters as the original one – Ctr-1-1. Assign one ESG as a provider and the second as a consumer, which takes precedence over contract inheritance. Finally, remove the original Ctr-1 contract between EPG 1 and EPG 2. This step is shown in Figure 3.

Figure 3 – Final setup with ESGs and new contract

Easy Migration to ACI


The previous example is mainly applicable when segmentation at the EPG level is already applied according to the application dependencies. However, not everyone may realize that ESG also simplifies brownfield migrations from existing environments to Cisco ACI.

A starting point for many new ACI customers is how EPG designs are implemented. Typically, the most common choice is to map one subnet to one BD and one EPG, reflecting old VLAN-based segmentation designs (Figure 4). So far, moving from such a state to a more application-oriented approach, where an application is broken up into tiers based on function, has not been trivial. It has often been associated with the need to transfer workloads between EPGs or re-address servers and services, which typically leads to disruptions.

Figure 4 – EPG = BD segmentation design

Introducing application-level segmentation in such a deployment model is challenging unless you use ESGs. So how do you make this migration from pure EPG to ESG? With the new selectors available, you can start very broadly and then, when ready, begin to define additional detail and policy. It is a multi-stage process that still allows endpoints to communicate without disruption as you make the transition gracefully. In general, the steps of this process can be defined as follows:

1. Classify all endpoints into one “catch-all” ESG
2. Define new segmentation groups and seamlessly take out endpoints from “catch-all” ESG to newly created ESGs.
3. Continue until all endpoints are assigned to new security groups.

In the first step (Figure 5), you can enable free communication between EPGs by classifying all of them using EPG selectors and putting them (temporarily) into one "catch-all" ESG. This is conceptually similar to any "permit-all" solutions you may have used prior to ESGs (e.g., vzAny, Preferred Groups).

Figure 5 – All EPGs are temporarily put into one ESG

In the second step (Figure 6), you can begin to shape and refine your security policy by seamlessly taking out endpoints from the catch-all ESG and putting them into other newly created ESGs that meet your security policy and desired outcome. For that, you can use other endpoint selector methods available – in this example – tag selectors. Keep in mind that there is no need to change any networking constructs related to these endpoints. VLAN binding to interfaces with EPGs remains the same. No need for re-addressing or moving between BDs or EPGs.

Figure 6 – Gradual migration from an existing network to Cisco ACI

As you continue to refine your security policies, you will end up in a state where all of your endpoints are now using the ESG model. As your data center fabric grows, you do not have to spend any time worrying about which EPG or which BD subnet is needed because ESG frees you of that tight coupling. In addition, you will gain detailed visibility into endpoints that are part of an ESG that represent a department (like IT or Sales in the above example) or application suite. This makes management, auditing, and other operational aspects easier.

Intuitive route-leaking


It is well understood that Cisco ACI can interconnect two VRFs in the same or different tenants without any external router. However, two additional aspects must be ensured for this type of communication to happen: the first is routing reachability and the second is security permission.

In this very blog, I stated that ESG decouples forwarding from security policy. This is also clearly visible when you need to configure inter-VRF connectivity. Refer to Figure 7 for high-level, intuitive configuration steps.

Figure 7 – Simplified route-leaking configuration. Only one direction is shown for better readability

At the VRF level, configure the subnet to be leaked and its destination VRF to establish routing reachability. A leaked subnet must be equal to, or a subset of, a BD subnet. Next, attach a contract between the ESGs in different VRFs to allow the desired communication to happen. Finally, you no longer need to configure subnets under the provider EPG (instead of only under the BD) or adjust the BD subnet scope; these steps are not required anymore. The end result is a much easier way to set up route leaking, without the sometimes confusing and cumbersome steps that were necessary with the traditional EPG approach.
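For completeness, here is a hedged sketch of what these two pieces might look like through the APIC REST API: one post that leaks an internal subnet from the source VRF to the destination VRF, and a pair of posts that bind the contract to the two ESGs as provider and consumer. All tenant, VRF, ESG, and contract names are placeholders, and the leakRoutes, leakInternalSubnet, leakTo, fvRsProv, and fvRsCons class names are assumptions written from memory; confirm them against the object model for your release.

```python
# Hedged sketch: inter-VRF route leaking plus the ESG contract relationship via
# the APIC REST API. All names and DNs are placeholders; the class names used for
# route leaking and for the contract bindings are assumptions to verify per release.
import requests

APIC = "https://apic.example.com"
session = requests.Session()
session.verify = False   # lab-only shortcut; use valid certificates in production
login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login, timeout=10).raise_for_status()

# 1. Routing reachability: leak 10.20.0.0/24 from vrf-a (tenant-a) to vrf-b (tenant-b).
leak_payload = {
    "leakRoutes": {                        # assumed container object under the VRF
        "attributes": {},
        "children": [{
            "leakInternalSubnet": {        # assumed class for an internal leaked subnet
                "attributes": {"ip": "10.20.0.0/24"},
                "children": [{
                    "leakTo": {"attributes": {"tenantName": "tenant-b",
                                              "ctxName": "vrf-b"}}}],
            }
        }],
    }
}
session.post(f"{APIC}/api/mo/uni/tn-tenant-a/ctx-vrf-a.json",
             json=leak_payload, timeout=10).raise_for_status()

# 2. Security permission: bind the same contract to the two ESGs in different VRFs.
session.post(f"{APIC}/api/mo/uni/tn-tenant-a/ap-ap-a/esg-esg-a.json",   # assumed ESG DN format
             json={"fvRsProv": {"attributes": {"tnVzBrCPName": "inter-vrf-ctr"}}},
             timeout=10).raise_for_status()
session.post(f"{APIC}/api/mo/uni/tn-tenant-b/ap-ap-b/esg-esg-b.json",
             json={"fvRsCons": {"attributes": {"tnVzBrCPName": "inter-vrf-ctr"}}},
             timeout=10).raise_for_status()
```

Note that a contract shared across tenants would typically need to be exported or defined in the common tenant to be visible from both sides; that detail is omitted here for brevity.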

Source: cisco.com

Thursday, 8 December 2022

Application Resource Management in Healthcare


Four Ways Healthcare Providers Have Benefited from Intersight Workload Optimizer


IT operations teams are like doctors. Doctors practice preventive medicine to help patients keep their health on track. When a patient’s health goes off track, the doctor minimizes symptoms through medication and rest, and they perform assessments to identify the root cause of the ailment.

In a similar way, IT operations teams keep their organizations’ mission-critical applications on track by providing computing, networking, and storage resources. Sometimes an application demonstrates symptoms indicating there’s something wrong (such as sluggish performance). If the root cause is serious enough and goes unaddressed, it can lead to downtime and impact the end user experience.

Treating the symptoms of poor application performance


Too often IT teams spend most of their time addressing the symptoms of underperforming applications or resuscitating them when they go offline. They’re alerted when there’s an issue, but they can’t easily pinpoint the root cause. This means the symptoms get treated to keep applications running, but the underlying cause or causes go untreated, which can lead to recurring application performance issues and costly staff time spent addressing them.

How to stay ahead of application resource issues


Application resource management solutions like Cisco Intersight Workload Optimizer (IWO) provide vital capabilities to help IT teams prevent application resource issues from occurring while optimizing costs to control their budgets.


Here are four examples where Cisco healthcare customers used application resource management to maintain the health of their organizations’ applications in fiscally responsible ways.

1) Ensuring mission-critical application performance

A healthcare services provider was experiencing performance issues with mission-critical applications. They couldn’t identify where in the stack the issues were originating from, so they used AppDynamics and IWO to gain deep visibility from their applications through their underlying computing infrastructure, particularly into hundreds of virtual machines. The visibility showed them when application performance began to stretch VM workloads and how to optimize their virtual environment to ensure continuous resources for optimal application performance. In addition to providing continuous up-time for their mission-critical applications, the customer has used IWO to optimize workloads in the public cloud and reduce public cloud spend by 40%.

2) Maintaining application performance at a lower cost

◉ In order to provide continuous application uptime, a healthcare provider in the midwestern United States uses on-premises infrastructure and hosting services through a public cloud provider. However, the costs for on-premises infrastructure and cloud resources were rising rapidly and not sustainable. Using IWO’s “what-if” scenario planning, Cisco worked with the client’s IT group to demonstrate how they could right-size new server purchases and identify the most cost-effective cloud resources to meet their budget requirements. As a result, the healthcare provider can continue to deliver computing resources to provide experiences their application users expect while delivering tangible cost savings.

◉ A healthcare provider in the southeastern United States and Cisco UCS customer needed to improve overall infrastructure availability, specifically by getting better insight into the real-time status of VMs and other computing resources. With a restricted IT budget, they also needed to extend the life of existing systems to reduce their CapEx expenses. Using IWO, the healthcare provider identified an opportunity to reduce the number of hosts by 50% while maintaining the same levels of utilization and avoiding unnecessary CapEx investments. At the same time, the healthcare provider used IWO to ensure workload configurations comply with its policies, which has helped the customer improve its HIPAA compliance posture.

3) Conducting an EHR cloud migration analysis

This healthcare provider needed to refresh its Epic Hyperspace environment for its primary electronic health record (EHR) system. Their IT team was considering moving to the EHR provider’s cloud-based IaaS solution. The Cisco team used IWO to conduct a detailed total cost of ownership (TCO)/return on investment (ROI) analysis. The study showed the ability to maintain desired application performance with fewer servers (and less cost) than the EHR provider prescribed. The analysis revealed the healthcare provider would save $500,000 per month over three years, or $18 million, by using an on-premises UCS solution instead of the hosted solution. The healthcare provider also went on to use IWO to continue optimizing its virtual environment for ongoing application resource management and cost containment.

Keep your applications in shape through application resource management


As a healthcare provider, you have patients, caregivers, and others who rely on your applications. With solutions like IWO at your disposal, you can adopt best practices in application resource management and ensure the uptime needed to deliver the experiences your users expect while gaining cost-containment capabilities. Rise above treating the symptoms of an ailing infrastructure; exercise proactive application resource management with Cisco Intersight Workload Optimizer to keep your applications and infrastructure in outstanding shape.

Source: cisco.com

Tuesday, 8 November 2022

Introducing Cisco Cloud Network Controller on Google Cloud Platform – Part 3

Part 1 and Part 2 of this blog series covered native cloud networking and firewall rules automation on GCP, and a read-through is recommended for completeness. This final post of the series is about enabling external access for cloud resources. More specifically, it focuses on how customers can enable external connectivity to and from GCP, using either GCP cloud native routers or the Cisco Cloud Router (CCR) based on the Cisco Catalyst 8000V, depending on the use case.

Expanding on those capabilities, Cisco Cloud Network Controller (CNC) will provision routing, automate VPC peering between the infra and user VPCs, and establish BGP IPSec connectivity to external networks, all in just a few steps using the same policy model.

Scenario


This scenario leverages the existing configuration built previously, represented by the network-a and network-b VPCs. These user VPCs will be peered with the infra VPC in a hub-and-spoke architecture, where GCP cloud native routers will be provisioned to establish BGP IPSec tunnels with an external IPSec device. Each GCP cloud native router is composed of a Cloud Router and a high-availability (HA) Cloud VPN gateway.
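
As a quick cross-check, once the hub is provisioned in the next section, the native constructs that make up the cloud native routers can be listed with the gcloud CLI. This is only a sketch: the project ID below is a placeholder, and the actual resource names are generated by Cisco CNC.

gcloud compute routers list --project infra-project-id
gcloud compute vpn-gateways list --project infra-project-id
gcloud compute vpn-tunnels list --project infra-project-id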

The high-level topology below illustrates the additional connections automated by Cisco CNC.

Provisioning Cloud Native Routers


The first step is to enable external connectivity under Region Management by selecting the region in which the cloud native routers will be deployed. For this scenario, they will be provisioned in the same region as the Cisco CNC, as depicted in the high-level topology. Additionally, default values will be used for the IPSec Tunnel Subnet Pool and the BGP AS under the Hub Network, which represents the GCP Cloud Router.

The cloud native routers are purposely provisioned in a different region from the user VPCs to illustrate the ability to have a dedicated hub network with external access; however, they could just as well have been deployed in the same region as the user VPCs.
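
Because the default BGP AS is kept under the Hub Network, the ASN actually programmed on the provisioned Cloud Router can be confirmed afterward with gcloud; the router name, region, and project ID below are placeholders.

gcloud compute routers describe hub-cloud-router --region us-west1 --project infra-project-id --format="value(bgp.asn)"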

Note: a brief overview of the Cisco CNC GUI was provided in Part 1.

Enabling External Networks


The next step is to create an External Network construct within the infra tenant. This is where an external VRF is defined to represent the external networks connected to on-premises data centers or remote sites. Any cloud VRF mapped to existing VPC networks can leak routes to this external VRF or receive routes from it. In addition to the external VRF definition, this is also where the VPN settings are entered, including the remote IPSec peer details.

The configuration below illustrates the stitching of the external VRF and the VPN network within the region where the cloud native routers are provisioned in the backend. For simplicity, the VRF was named "external-vrf," but in a production environment the name should be chosen carefully and aligned with the external network it represents to simplify operations.

The VPN network settings require the public IP address of the remote IPSec device, the IKE version, and the BGP AS number. As indicated earlier, the default subnet pool is being used.

Once the external network is created, Cisco CNC generates a configuration file for the remote IPSec device to establish BGP peering and IPSec tunnels with the GCP cloud native routers. The configuration file can be downloaded directly from the Cisco CNC GUI.

Configuring External IPSec Device


Because the configuration file provides most of the configuration required for the external IPSec device, customization is needed only for the tunnel source interface and, where applicable, routing settings to match local network requirements. In this example, the remote IPSec device is a virtual router using interface GigabitEthernet1. For brevity, only one of the IPSec tunnel configurations is shown below, along with the rest of the configuration generated by Cisco CNC.

vrf definition external-vrf
    rd 100:1
    address-family ipv4
    exit-address-family

interface Loopback0
    vrf forwarding external-vrf
    ip address 41.41.41.41 255.255.255.255
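    ! Loopback0 provides the 41.41.41.41/32 test prefix advertised to GCP over BGP and used as the ping source later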

crypto ikev2 proposal ikev2-1
    encryption aes-cbc-256 aes-cbc-192 aes-cbc-128
    integrity sha512 sha384 sha256 sha1
    group 24 21 20 19 16 15 14 2

crypto ikev2 policy ikev2-1
    proposal ikev2-1

crypto ikev2 keyring keyring-ifc-3
    peer peer-ikev2-keyring
        address 34.124.13.142
        pre-shared-key 49642299083152372839266840799663038731

crypto ikev2 profile ikev-profile-ifc-3
    match address local interface GigabitEthernet1
    match identity remote address 34.124.13.142 255.255.255.255
    identity local address 20.253.155.252
    authentication remote pre-share
    authentication local pre-share
    keyring local keyring-ifc-3
    lifetime 3600
    dpd 10 5 periodic
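    ! IKEv2 (phase 1) proposal, policy, keyring, and profile above are taken from the Cisco CNC-generated configuration file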

crypto ipsec transform-set ikev-transport-ifc-3 esp-gcm 256
    mode tunnel

crypto ipsec profile ikev-profile-ifc-3
    set transform-set ikev-transport-ifc-3
    set pfs group14
    set ikev2-profile ikev-profile-ifc-3

interface Tunnel300
    vrf forwarding external-vrf
    ip address 169.254.0.2 255.255.255.252
    ip mtu 1400
    ip tcp adjust-mss 1400
    tunnel source GigabitEthernet1
    tunnel mode ipsec ipv4
    tunnel destination 34.124.13.142
    tunnel protection ipsec profile ikev-profile-ifc-3
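    ! Route-based VPN tunnel to the GCP HA Cloud VPN gateway (34.124.13.142); only the tunnel source was customized to GigabitEthernet1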

ip route 34.124.13.142 255.255.255.255 GigabitEthernet1 192.168.0.1

router bgp 65002
    bgp router-id 100
    bgp log-neighbor-changes
    address-family ipv4 vrf external-vrf
        network 41.41.41.41 mask 255.255.255.255
        neighbor 169.254.0.1 remote-as 65534
        neighbor 169.254.0.1 ebgp-multihop 255
        neighbor 169.254.0.1 activate
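        ! eBGP peering with the GCP Cloud Router (AS 65534) over the tunnel, advertising the Loopback0 prefix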

Verifying External Connectivity Status


Once the configuration is applied, there are a few ways to verify the BGP peering and IPSec tunnels between GCP and the external device: via the CLI on the IPSec device itself, via the Cisco CNC GUI on the External Connectivity dashboard, and via the GCP console.
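
On the IPSec device itself, a few standard IOS XE show commands give a quick health check; the commands below are generic examples rather than output captured from this lab.

remote-site#show crypto ikev2 sa
remote-site#show crypto session
remote-site#show bgp vpnv4 unicast vrf external-vrf summary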

In the GCP console (infra project), under Hybrid Connectivity, both the IPSec and BGP sessions show as established on the Cloud Router and HA Cloud VPN gateway that Cisco CNC automated when the external network was defined. Note that the infra VPC network is named overlay-1 by default as part of the Cisco CNC deployment from the GCP Marketplace.
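
The same state can also be pulled with the gcloud CLI against the infra project; the Cloud Router name, region, and project ID below are placeholders, since the actual names are generated by Cisco CNC.

gcloud compute vpn-tunnels list --project infra-project-id
gcloud compute routers get-status hub-cloud-router --region us-west1 --project infra-project-id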

Route Leaking Between External and VPC Networks


Now that the BGP IPSec tunnels are established, let's configure inter-VRF routing between the external network and the existing user VPC networks from the previous sections. This works by enabling VPC peering between the user VPCs and the infra VPC hosting the VPN connections, which shares those VPN connections with external sites. Routes received on the VPN connections are leaked to the user VPCs, and user VPC routes are advertised on the VPN connections.

With inter-VRF routing, routes are leaked between the external VRF of the VPN connections and the cloud local user VRFs. The configuration below illustrates route leaking from external-vrf to network-a.

The reverse route-leaking configuration, from network-a to external-vrf, is filtered by subnet IP to show the available granularity. The same steps were also performed for network-b but are not depicted, for brevity.

In addition to the existing peering between the network-a and network-b VPCs, both user VPCs are now also peered with the infra VPC (overlay-1), as depicted in the high-level topology.

By exploring the details of one of the peering connections, it is possible to see the external subnet 41.41.41.41/32 in the imported routes table.
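
The imported routes can also be checked per peering with the gcloud CLI; the peering name, region, and project ID below are placeholders, as the actual peering names are generated by Cisco CNC.

gcloud compute networks peerings list --network network-a --project user-project-id
gcloud compute networks peerings list-routes peering-to-overlay-1 --network network-a --region us-west1 --direction INCOMING --project user-project-id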

On the remote IPSec device, the subnets from network-a and network-b VPCs are learned over BGP peering as expected.

remote-site#sh bgp vpnv4 unicast vrf external-vrf
<<<output omitted for brevity>>>
     Network          Next Hop            Metric LocPrf Weight Path
Route Distinguisher: 100:1 (default for vrf external-vrf)
 *>   41.41.41.41/32   0.0.0.0                  0         32768 i
 *    172.16.1.0/24    169.254.0.5            100             0 65534 ?
 *>                    169.254.0.1            100             0 65534 ?
 *    172.16.128.0/24  169.254.0.5            100             0 65534 ?
 *>                    169.254.0.1            100             0 65534 ?
remote-site#

Defining External EPG for the External Network


Up to this point, all routing policies have been automated by Cisco CNC to allow external connectivity to and from GCP. However, firewall rules are also required for end-to-end connectivity. This is accomplished by creating an external EPG that uses subnet selection as the endpoint selector to represent the external networks. Note that this external EPG is also created within the infra tenant and associated with the external-vrf created previously.

The next step is to apply contracts between the external EPG and the previously created cloud EPGs to allow communication between endpoints in GCP and the external networks, which in this scenario are represented by 41.41.41.41/32 (Loopback0 on the remote IPSec device). Because this spans different tenants, the contract scope is set to global, and the contract is exported from the engineering tenant to the infra tenant (and vice versa if traffic is allowed to be initiated from both sides).

The contract association is configured for both to-the-cloud and from-the-cloud connectivity.

On the backend, the combination of contracts and filters translates into the appropriate GCP firewall rules, as covered in detail in Part 2 of this series. For brevity, only the outcome is shown below.
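
Before running the end-to-end tests, the rules programmed by Cisco CNC can optionally be reviewed with the gcloud CLI; the rule names are auto-generated, so the example below simply filters by network, and the project ID is a placeholder.

gcloud compute firewall-rules list --project user-project-id --filter="network:network-a"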

remote-site#ping vrf external-vrf 172.16.1.2 source lo0
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.16.1.2, timeout is 2 seconds:
Packet sent with a source address of 41.41.41.41
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 84/84/86 ms

remote-site#ping vrf external-vrf 172.16.128.2 source lo0
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 172.16.128.2, timeout is 2 seconds:
Packet sent with a source address of 41.41.41.41
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 132/133/138 ms

root@web-server:/home/marinfer# ping 41.41.41.41
PING 41.41.41.41 (41.41.41.41) 56(84) bytes of data.
64 bytes from 41.41.41.41: icmp_seq=1 ttl=254 time=87.0 ms
64 bytes from 41.41.41.41: icmp_seq=2 ttl=254 time=84.9 ms
64 bytes from 41.41.41.41: icmp_seq=3 ttl=254 time=83.7 ms
64 bytes from 41.41.41.41: icmp_seq=4 ttl=254 time=83.8 ms
root@web-server:/home/marinfer# 

root@app-server:/home/marinfer# ping 41.41.41.41
PING 41.41.41.41 (41.41.41.41) 56(84) bytes of data.
64 bytes from 41.41.41.41: icmp_seq=1 ttl=254 time=134 ms
64 bytes from 41.41.41.41: icmp_seq=2 ttl=254 time=132 ms
64 bytes from 41.41.41.41: icmp_seq=3 ttl=254 time=131 ms
64 bytes from 41.41.41.41: icmp_seq=4 ttl=254 time=136 ms
root@app-server:/home/marinfer#

Advanced Routing Capabilities with Cisco Cloud Router


Leveraging native routing capabilities as demonstrated may suffice for some use cases but prove limiting for others. For more advanced routing capabilities, Cisco Cloud Routers can be deployed instead. The provisioning process is largely the same, with the CCRs also instantiated within the infra VPC in a hub-and-spoke architecture. Besides being able to manage the complete lifecycle of the CCRs from Cisco CNC, customers can choose among different tier-based throughput options based on their requirements.

One of the main use cases for Cisco Cloud Routers is BGP EVPN support across different cloud sites running Cisco CNC, or hybrid cloud connectivity with on-premises sites when policy extension is desirable. The different inter-site use cases are documented in dedicated white papers, and below is a high-level topology illustrating the architecture.

Source: cisco.com