
Tuesday, 29 March 2022

Hyperconverged Infrastructure with Harvester: The start of the Journey


Deploying and running data center infrastructure management – compute, networking, and storage – has traditionally been manual, slow, and arduous. Data center staffers are accustomed to doing a lot of command line configuration and spending hours in front of data center terminals. Hyperconverged Infrastructure (HCI) is the way out: It solves the problem of running storage, networking, and compute in a straightforward way by combining the provisioning and management of these resources into one package, and it uses software defined data center technologies to drive automation of these resources. At least in theory.

Recently, a colleague and I have been experimenting with Harvester, an open source project to build a cloud native, Kubernetes-based Hyperconverged Infrastructure tool for running data center and edge compute workloads on bare metal servers.

Harvester brings a modern approach to legacy infrastructure by running all data center and edge compute infrastructure, virtual machines, networking, and storage, on top of Kubernetes. It is designed to run containers and virtual machine workloads side-by-side in a data center, and to lower the total cost of data center and edge infrastructure management.

Why we need hyperconverged infrastructure

Many IT professionals know about HCI concepts from using products from VMware, or by employing cloud infrastructure like AWS, Azure, and GCP to manage Virtual Machine applications, networking, and storage. The cloud providers have made HCI flexible by giving us APIs to manage these resources with less day-to-day effort, at least once the programming is done. And, of course, cloud providers handle all the hardware – we don’t need to stand up our own hardware in a physical location.

Multi-node Harvester cluster

However, most of the current products that support converged infrastructure tend to lock customers to using their company’s own technology, and they also usually come with licensing fees. Now, there is nothing wrong with paying for a technology when it helps you solve your problem. But single-vendor solutions can wall you off from knowing exactly how these technologies work, limiting your flexibility to innovate or react to issues.

If you could use a technology that combines with other technologies you are already required to know today – like Kubernetes, Linux, containers, and cloud native – then you could theoretically eliminate some of the headaches of managing edge compute / data centers, while also lowering costs.

This is what the people building Harvester are attempting to do.

Adapting to the speed of change


Cloud providers have made it easier to deploy and manage the infrastructure surrounding applications. But this has come at the expense of control, and in some cases performance.

HCI, which the cloud providers support and provide, gets us some control back. However, the recent rise of application containers over virtual machines changed again how infrastructure is managed and even thought of, by abstracting layers of application packaging, all while making that packaging lighter weight than last-generation VM application packaging. Containers also provide application environments that are faster to start up and easier to distribute because of the decreased image sizes. Kubernetes takes container technologies like Docker to the next level by adding in networking, storage, and resource management between containers, in an environment that connects everything together. Kubernetes allows us to integrate microservice applications with automation and speedy deployments.

Kubernetes offers an improvement on HCI technologies and methodologies. It provides a better way for developers to create cloud agnostic applications, and to spin up workloads in containers more quickly than traditional VM applications. Kubernetes did not aim to replace HCI, but it did make a lot of the goals of software deployment and delivery simpler, from an HCI perspective.

In a lot of environments, Kubernetes runs inside VMs. So you still need external HCI technology to manage the underlying infrastructure for the VMs that are running Kubernetes. The problem now is that if you want to run your application in Kubernetes containers on infrastructure you control, you have different layers of HCI to support. Even if you get better application management with Kubernetes, infrastructure management becomes more complex. You could try to use vanilla Kubernetes for every part of your edge-compute / data center stack and run it as your bare metal operating system instead of traditional HCI technologies, but you have to be OK with migrating all workloads to containers, and in some cases that is a high hurdle to clear, not to mention the HCI networking that you will need to migrate over to Kubernetes.

The good news is that there are IoT and Edge Compute projects that can help. The Rancher organization, for example, is creating a lightweight version of Kubernetes, k3s, for IoT compute resources like the Raspberry Pi and Intel NUC computers. It helps us push Kubernetes onto more bare metal infrastructure. Other projects, like KubeVirt, provide technologies to run virtual machines inside containers and on top of Kubernetes, which speeds up VM deployment and lets us use Kubernetes for our virtual networking layers and all application workloads (containers and VMs). And other technology projects, like Rook and Longhorn, help with persistent storage for HCI through Kubernetes.

If only these could combine into one neat package, we would be in good shape.

Hyperconverged everything


Knowing where we have come from in the world of Hyperconverged Infrastructure for our data centers and our applications, we can now move on to what combines all these technologies together. Harvester packages up k3s (lightweight Kubernetes), KubeVirt (VMs in containers), and Longhorn (persistent storage) to provide Hyperconverged Infrastructure for bare metal compute using cloud native technologies, and wraps an API / Web GUI bow on it for convenience and automation.
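Because Harvester models virtual machines as KubeVirt custom resources on Kubernetes, you can poke at them with ordinary Kubernetes tooling. Below is a minimal sketch (not part of Harvester itself) that lists the VirtualMachine objects in a namespace, assuming the official kubernetes Python client is installed and your kubeconfig points at a Harvester/k3s cluster.

# A minimal sketch, assuming the `kubernetes` Python client and a kubeconfig
# that points at a Harvester (k3s) cluster. Harvester stores VMs as KubeVirt
# custom resources, so they can be listed like any other Kubernetes object.
from kubernetes import client, config

def list_harvester_vms(namespace="default"):
    """Return the names of the KubeVirt VirtualMachine objects in a namespace."""
    config.load_kube_config()                       # reads ~/.kube/config
    custom = client.CustomObjectsApi()
    vms = custom.list_namespaced_custom_object(
        group="kubevirt.io",                        # API group KubeVirt (and Harvester) use
        version="v1",
        namespace=namespace,
        plural="virtualmachines",
    )
    return [item["metadata"]["name"] for item in vms.get("items", [])]

if __name__ == "__main__":
    print(list_harvester_vms())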

Source: cisco.com

Friday, 4 December 2020

All Tunnels Lead to GENEVE


As a global citizen, I’m sure you came here to read about Genève (French) or Geneva (English), the city situated in the western part of Switzerland. It’s a city or region famous for many reasons including the presence of a Cisco R&D Center in the heart of the Swiss Federal Institute of Technology in Lausanne (EPFL). While this is an exciting success story, the GENEVE I want to tell you about is a different one.

GENEVE stands for “Generic Network Virtualization Encapsulation” and is an Internet Engineering Task Force (IETF) standards track RFC. GENEVE is a Network Virtualization technology, also known as an Overlay Tunnel protocol. Before diving into the details of GENEVE, and why you should care, let’s recap the history of Network Virtualization protocols with a short primer.

Network Virtualization Primer

Over the course of the years, many different tunnel protocols came into existence. One of the earlier ones was Generic Routing Encapsulation (GRE), which became a handy method of abstracting routed networks from the physical topology. While GRE is still a great tool, it lacks two main capabilities, which limits its versatility:

1. The ability to signal differences in the tunneled (original) traffic to the outside—the Overlay Entropy—so the transport network can hash it across all available links.

2. The ability to provide a Layer-2 Gateway, since GRE was only able to encapsulate IP traffic. Options to encapsulate other protocols, like MPLS, were added later, but the ability to bridge never became an attribute of GRE itself.

With the limited extensibility of GRE, the network industry became more creative as new use-cases were developed. One approach was to use Ethernet over MPLS over GRE (EoMPLSoGRE) to achieve the Layer-2 Gateway use case. Cisco called it Overlay Transport Virtualization (OTV). Other vendors referred to it as Next-Generation GRE or NVGRE. While OTV was successful, NVGRE had limited adoption, mainly because it came late to Network Virtualization and at the same time as the next generation protocol, Virtual Extensible LAN (VXLAN), was already making inroads.

A Network Virtualization Tunnel Protocol

VXLAN is currently the de-facto standard for Network Virtualization Overlays. Based on the Internet Protocol (IP), VXLAN also has a UDP header and hence belongs to the IP/UDP-based encapsulations or tunnel protocols. Other members of this family are OTV, LISP, GPE, GUE, and GENEVE, among others. The importance lies in the similarities and their close relation/origin within the Internet Engineering Task Force’s (IETF) Network Virtualization Overlays (NVO3) working group.

Network Virtualization in the IETF


The NVO3 working group is chartered to develop a set of protocols that enables network virtualization for environments that assume IP-based underlays—the transport network. An NVO3 protocol will provide Layer-2 and/or Layer-3 overlay services for virtual networks. Additionally, the protocol will enable Multi-Tenancy, Workload Mobility, and address related issues with Security and Management.

Today, VXLAN acts as the de-facto standard of an NVO3 encapsulation, with RFC7348 ratified in 2014. VXLAN was submitted as an informational IETF draft and then became an informational RFC. Even with its “informational” nature, its versatility and wide adoption in Merchant and Custom Silicon made it a big success. Today, we can’t think of Network Virtualization without VXLAN. When VXLAN paired up with BGP EVPN, a powerhouse was created that became RFC8365—a Network Virtualization Overlay Solution using Ethernet VPN (EVPN) that is an IETF RFC on the standards track.

Why Do We Need GENEVE if We Already Have What We Need?


When we look at the specifics of VXLAN, it was invented as a MAC-in-IP encapsulation over IP/UDP transport, which means we always have a MAC header within the tunneled or encapsulated packets. While this is desirable for bridging cases, with routing it becomes unnecessary and could be optimized in favor of better payload byte usage. Also, with the inclusion of an inner MAC header, signaling of MAC-to-IP bindings becomes necessary, which requires either information exchanged in the control-plane or, much worse, flood-based learning.

Compare and Contrast VXLAN to GENEVE Encapsulation Format

Fast forward to 2020, GENEVE has been selected as the upcoming “standard” tunnel protocol. While the flexibility and extensibility for GENEVE incorporates the GRE, VXLAN, and GPE use-cases, new use-cases are being created on a daily basis. This is one of the most compelling but also most complex areas for GENEVE. GENEVE has a flexible option header format, which defines the length, the fields, and content depending on the instruction set given from the encapsulating node (Tunnel Endpoint, TEP). While some of the fields are simple and static, like bridging or routing, the fields and format used for telemetry or security are highly variable for hop-by-hop independence.
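To make the format differences concrete, here is a rough sketch that packs both headers by hand with Python’s struct module, following the field layouts in RFC 7348 (VXLAN) and RFC 8926 (GENEVE). The VNI and option class/type values are arbitrary placeholders; real deployments rely on the forwarding plane, not user-space code like this.

# A rough sketch of the two header formats, packed by hand with struct.
# Field layouts follow RFC 7348 (VXLAN) and RFC 8926 (GENEVE); the VNI and
# option class/type values below are arbitrary, for illustration only.
import struct

def vxlan_header(vni):
    # Flags (I bit set) | 24-bit reserved | 24-bit VNI | 8-bit reserved
    return struct.pack("!B3s3sB", 0x08, b"\x00\x00\x00", vni.to_bytes(3, "big"), 0)

def geneve_header(vni, proto=0x6558, options=b""):
    # Ver(2) + Opt Len(6, in 4-byte words) | O/C flags + reserved | protocol type
    # | 24-bit VNI | 8-bit reserved | variable-length options
    opt_len_words = len(options) // 4
    first_byte = (0 << 6) | opt_len_words          # version 0
    header = struct.pack("!BBH3sB", first_byte, 0x00, proto,
                         vni.to_bytes(3, "big"), 0)
    return header + options

def geneve_option(opt_class, opt_type, data):
    # Option class | type | 3 reserved bits + length (data length in 4-byte words) | data
    assert len(data) % 4 == 0
    return struct.pack("!HBB", opt_class, opt_type, len(data) // 4) + data

# Example: a GENEVE header carrying one 4-byte option (values are placeholders).
opt = geneve_option(opt_class=0x0103, opt_type=0x01, data=b"\x00\x00\x00\x2a")
print(len(vxlan_header(5000)), "byte VXLAN header")                  # always 8 bytes
print(len(geneve_header(5000, options=opt)), "byte GENEVE header")   # 8 bytes + options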

While GENEVE is now an RFC, GBP (Group Based Policy), INT (In-band Network Telemetry) and other option headers are not yet finalized. However, the use-case coverage is about equal to what VXLAN is able to do today. Use cases like bridging and routing for Unicast/Multicast traffic, either in IPv4 or IPv6 or Multi-Tenancy, have been available for VXLAN (with BGP EVPN) for almost a decade. With GENEVE, all of these use-cases are accessible with yet another encapsulation method.

GENEVE Variable Extension Header

With the highly variable but presently limited number of standardized and published Option Classes in GENEVE, the intended interoperability is still pending. Nevertheless, GENEVE in its extensibility as a framework and forward-looking technology has great potential. The parity of today’s existing use cases for VXLAN EVPN will need to be accommodated. This is how the IETF prepared BGP EVPN from its inception and more recently published the EVPN draft for GENEVE.

Cisco Silicon Designed with Foresight, Ready for the Future


While Network Virtualization is already mainstream, the encapsulating node or TEP (Tunnel Endpoint) can be at various locations. While a tunnel protocol was often focused on a Software Forwarder that runs on a simplified x86 instruction set, mainstream adoption is often driven by the presence of a Software as well as a Hardware forwarder, the latter built into the switch’s ASIC (Merchant or Custom Silicon). Even though integrated hybrid overlays are still in their infancy, the use of Hardware (the Network Overlay) and Software (the Host Overlay) in parallel is widespread, either in isolation or as ships in the night. Often it is simpler to upgrade the Software forwarder on an x86 server and benefit from a new encapsulation format. While this is generally true, the participating TEPs require consistency for connections to the outside world, and updating the encapsulation on such gateways is not a simple matter.

In the past, rigid Router or Switch silicon prevented fast adoption and evolution of Network Overlay technology. Today, modern ASIC silicon is more versatile and can adapt to new use cases as operations constantly change to meet new business challenges. Cisco is thinking and planning ahead to provide Data Center networks with very high performance, versatility, as well as investment protection. Flexibility for network virtualization and versatility of encapsulation was one of the cornerstones for the design of the Cisco Nexus 9000 Switches and Cloud Scale ASICs.

We designed the Cisco Cloud Scale ASICs to incorporate important capabilities, such as supporting current encapsulations like GRE, MPLS/SR and VXLAN, while ensuring hardware capability for VXLAN-GPE and, last but not least, GENEVE. With this in mind, organizations that have invested in the Cisco Nexus 9000 EX/FX/FX2/FX3/GX Switching platforms are just a software upgrade away from being able to take advantage of GENEVE.

Cisco Nexus 9000 Switch Family

While GENEVE provides encapsulation, BGP EVPN is the control-plane. As use-cases are generally driven by the control-plane, they evolve as the control-plane evolves, thus driving the encapsulation. Tenant Routed Multicast, Multi-Site (DCI) or Cloud Connectivity are use cases that are driven by the control-plane and hence ready with VXLAN and closer to being ready for GENEVE.

To ensure seamless integration into Cisco ACI, a gateway capability becomes the crucial base functionality. Beyond just enabling a new encapsulation with an existing switch, the Cisco Nexus 9000 acts as a gateway to bridge and route from VXLAN to GENEVE, GENEVE to GENEVE, GENEVE to MPLS/SR, or other permutations to facilitate integration, migration, and extension use cases.

Leading the Way to GENEVE


Cisco Nexus 9000 with a Cloud Scale ASIC (EX/FX/FX2/FX3/GX and later) has extensive hardware capabilities to support legacy, current, and future Network Virtualization technologies. With this investment protection, customers can use ACI and VXLAN EVPN today while being assured that they can leverage future encapsulations like GENEVE with the same Nexus 9000 hardware investment. Cisco thought leadership in Switching Silicon, Data Center networking, and Network Virtualization leads the way to GENEVE (available in early 2021).

If you are looking to make your way to Genève or GENEVE, Cisco is investing in both, for the past, present, and future of networking.

Saturday, 10 October 2020

Economic Benefits of Virtualizing the CCAP Core with a Microservices Based Architecture


Architecture

Service providers are going through a digitalization journey. And one aspect of that journey is the virtualization of their service delivery infrastructure. At Cisco, we are making that transition easier for our customers by creating a common virtualization platform across mobile 5G, cable vCCAP, and Telco vBNG. This helps operators reduce their cost to virtualize the infrastructure and enable them to rapidly tap into new revenue opportunities.

When it comes to virtualization for cable, we did not virtualize the legacy CCAP; we re-architected the platform from the ground up to come up with a microservices-based architecture. That is what became our Cloud-native Broadband Router (cnBR). Why? That was the only way to get to our end goal, and our customers’ end goal, which is a hybrid, multi-cloud, and Multi-Access Edge Compute (MEC) based cable broadband platform. cnBR has four major types of microservices: Data Plane (DP), Control Plane (CP), Real-Time (RT), and Management Plane (MP), which we can deploy at any location in the network or the cloud. cnBR’s microservices-based architecture enables webscale operations such as auto-healing, autoscaling, load balancing, and fault-tolerance at the infrastructure layer.

Evolution

With cnBR’s microservices-based architecture, you can start with a simple on-prem, appliance-like architecture that is familiar to your operations and IT organization. And as you gain familiarity, you can evolve into a hybrid and multi-cloud world by moving some of the microservices to public cloud platforms. Moving some of the microservices to public cloud platforms such as GCP, Azure, and AWS will reduce operational burden, extend reach, and augment capacity. Figure 1 shows the phased deployment evolution of the cnBR architecture:


Figure 1: Evolution of cnBR Architecture

Phase 1: Centralized on-prem and cloud-native appliance – In this phase, you start with all the microservices running in the hub or data center.

Phase 2: Multi-Access Edge Compute (MEC) – Here, you slowly move some of the microservices closer to the edge, onto an edge compute platform or a node that has a compute board, to enable a MEC architecture. This focuses on shifting the data plane (DP) and real-time (RT) microservices to the MEC platform.

Phase 3: Hybrid cloud – This phase moves the management plane (MP) and control plane (CP) microservices to a private or public cloud and keeps the data plane (DP) and real-time (RT) microservices at the edge.

Phase 4: Multi-cloud – This phase provides flexibility by enabling the management and control plane microservices to run in any public cloud environment with minimal friction.

Why Migrate to a Microservices-based Architecture?

With microservices-based architecture, you can improve:

– Time to market: you can get features in weeks vs months, 

– Agility:  you enable hitless and maintenance windowless upgrades, 

– Scale: you gain seamless and on-demand auto-scaling which gives you unprecedented cluster level redundancy and scale,

– Cost: you can lower TCO with reduced footprint and facilities cost.

Why Cisco’s cnBR?

The virtualization of the access infrastructure is one way to add more capacity. To better understand how operators can virtualize and reap immediate business benefits with cnBR, we looked at CAPEX and standard operational costs like space, power, and cooling while increasing the scale of the microservices-based architecture. We also did the same scaling and cost analysis of a legacy appliance-based CCAP solution so we can compare the savings. What we found is a compelling business value in going to a microservices-based architecture. These benefits are in addition to the service/feature velocity and operational efficiency enabled by the agile webscale operations of a microservices-based architecture.

The analysis included scenarios where bandwidth per service group is increased from 1 Gbps to 5 Gbps in the downstream while the upstream is increased from 100 Mbps to 500 Mbps. The average CAPEX savings was 29%, the average OPEX savings was 42%, and the average space (RU) savings was 73%. Figure 2 highlights the savings as the bandwidth scales up.


Figure 2 . Business benefits of cnBR as system bandwidth scales up

To help do your own analysis, we have created an easy-to-use vCCAP economics calculator. You can do your own analysis based on your current network configuration and long-range plan (LRP). Figure 3 highlights the type of summary output you can get from the vCCAP Economics Tool.


Figure 3. Summary of Capex, Opex comparison between cnBR and traditional CCAP

Thursday, 11 June 2020

Why 5G is Changing our Approach to Security


While earlier generations of cellular technology (such as 4G LTE) focused on ensuring connectivity, 5G takes connectivity to the next level by delivering connected experiences from the cloud to clients. 5G networks are virtualized and software-driven, and they exploit cloud technologies. New use cases will unlock countless applications, enable more robust automation, and increase workforce mobility. Incorporating 5G technology into these environments requires deeper integration between enterprise networks and 5G network components of the service provider. This exposes enterprise owners (including operators of critical information infrastructure) and 5G service providers to risks that were not present in 4G. An attack that successfully disrupts the network or steals confidential data will have a much more profound impact than in previous generations.

5G technology will introduce advances throughout the network architecture, such as RAN decomposition, API utilization, container-based 5G cloud-native functions, and network slicing, to name a few. These technological advancements, while enabling new capabilities, also expand the threat surface, opening the door to adversaries trying to infiltrate the network. Apart from the expanded threat surface, 5G also presents security teams with a steep learning curve: they must identify and mitigate threats faster without impacting latency or user experience.

What are Some of the Threats?


Virtualization and cloud-native architecture deployment for 5G is one of the key concerns for service providers. Although virtualization has been around for a while, a container-based deployment model consisting of 5G Cloud Native Functions (CNFs) is a fresh approach for service providers. Apart from the known vulnerabilities in the open-source components used to develop the 5G CNFs, most CNF threats are actually unknown, which is riskier. The deployment model of CNFs in the public and private cloud brings in another known yet widespread problem: inconsistent and improper access control permissions that put sensitive information at risk.

5G brings in network decomposition, disaggregation into software and hardware, and infrastructure convergence, which underpins the emergence of edge computing network infrastructure, or MEC (Multi-Access Edge Compute). 5G edge computing use cases are driven by the need to optimize infrastructure through offloading, better radio, and more bandwidth to fixed and mobile subscribers. The need for low-latency use cases such as Ultra-Reliable Low Latency Communication (URLLC), one of several types of use cases supported by 5G NR, requires user plane distribution. Certain 5G-specific applications and the user plane need to be deployed in the enterprise network for enterprise-level 5G services. The key threats in MEC deployments are fake/rogue MEC deployments, API-based attacks, insufficient segmentation, and improper access controls on MEC deployed on enterprise premises.

5G technology will also usher in new connected experiences for users with the help of massive IoT devices and partnerships with third-party companies, allowing services and experiences to be delivered seamlessly. For example, in the auto industry, 5G combined with Machine Learning-driven algorithms will provide information on traffic and accidents, and will process peer-to-peer traffic between pedestrian traffic lights and vehicles in use cases such as Vehicle to Everything (V2X). Distributed Denial of Service (DDoS) attacks on these use cases are a very critical part of the 5G threat surface.

What are Some of the Solutions to Mitigate Threats?


Critical infrastructure protection: Ensure your critical software, technologies, and network components such as the Home Subscriber Server (HSS), Home Location Register (HLR), and Unified Data Repository (UDR) are secured with the right controls.

Cisco Secure Development Lifecycle: Being cloud-native and completely software-driven, 5G uses open source technologies. Although this is critical for scalability and allowing cloud deployment integrations, vulnerabilities from multiple open-source applications could be exploited by attackers. To reduce the attack surface, service providers need to verify the 5G vendor-specific secure development process to ensure hardened software and hardware. We offer security built into our architectural components. Our trustworthy systems’ technology includes trust anchor, secure boot, entropy, immutable identity, image signing, common cryptography, secure storage, and run-time integrity.

Vendor Assessment (security): It’s critical to validate the vendor supply chain security, secure your organization’s development practices from end to end, and employ trustworthy products. You must also be vigilant when it comes to continuously monitoring hardware, software, and operational integrity to detect and mitigate infrastructure and service tampering. Sophisticated actors are looking to silently gain access and compromise specific behavior in the network. These attackers seek to take control of network assets to affect traffic flows or to enable surveillance by rerouting or mirroring traffic to remote receivers. Once they have control, they might launch “man-in-the-middle” attacks to compromise critical services like Domain Name System (DNS) and Transport Layer Security (TLS) certificate issuance.

Secure MEC & Backhaul: 5G edge deployments will supply virtualized, on-demand resources: an infrastructure that connects servers to mobile devices, to the internet, and to other edge resources, plus an operational control system for management and orchestration. These deployments should have the right security mechanisms in the backhaul to prevent rogue deployments, and the right security controls to prevent malicious code deployments and unauthorized access. As these MEC deployments will include dynamic virtualized environments, securing these workloads will be critical. Cisco workload protection will help service providers secure these workloads. Cisco’s Converged 5G xHaul Transport will provide service providers with the right level of features for secure 5G transport.

Cisco Ultra Cloud Core allows the user plane to support a full complement of inline services. These include Application Detection and Control (ADC), Network Address Translation (NAT), Enhanced Charging Service (ECS), and firewalls. Securing the MEC would require multiple layers of security controls based on the use case and the deployment mode. Some of the key security controls are:

• Cisco Security Gateway provides security gateway features along with inspections on GTP, SCTP, Diameter, and M3UA.

• Secure MEC applications: Securing virtualized deployments on the MEC and centralized 5GC requires a smarter security control rather than just having firewalls, be it hardware or virtualized. Cisco Tetration provides multi-layered cloud workload protection using advanced security analytics and speedy detections.

• Secure MEC access: Securing user access to MEC can be addressed by utilizing the Zero Trust methodology, which is explained in greater detail below.

Utilizing zero trust security controls during 5G deployment is critical for service providers. This is particularly important in the deployment phase, where there will be multiple employees, vendors, contractors, and sub-contractors deploying and configuring various components and devices within the network. The old method of just providing a VPN as a security control is insufficient, as the device used by the configuration engineer might already harbor malicious code that could then be deployed within the 5G infrastructure. This whitepaper gives you more insights on how zero trust security can be applied to 5G deployments.

End to End Visibility: 5G brings in distributed deployments, dynamic workloads, and encrypted interfaces like never before. This requires end-to-end visibility to ensure proper security posture. Advanced threat detection and encryption methods can identify malware in encrypted traffic without requiring decryption. And because latency is very important in 5G, we can’t use traditional methods of distributed certificates, decrypting traffic, analyzing the data for threats, and then encapsulating it again, as this adds too much latency into the network. Cisco Stealthwatch is the only solution that detects threats across the private network, public cloud, and even in encrypted traffic, without the need for decryption.

Source: Cisco.com

Thursday, 23 April 2020

Automation, Learning, and Testing Made Easier With Cisco Modeling Labs Enterprise v2.0


Cisco Modeling Labs – Enterprise v2.0 is here, sporting a complete rewrite of the software and a slew of cool, new features to better serve your education, network testing, and CI/CD automation needs. Version 2.0 still gives you a robust network simulation platform with a central store of defined and licensed Cisco IOS images, and now it also provides a streamlined HTML5 user interface with a lean backend that leaves more resources free to run your lab simulations.

CML 2.0 Workbench  

This attention to streamlining and simplification extends to installation and getting started as well. You can install and configure Cisco Modeling Labs – Enterprise v2.0 in no time. And you’ll be building labs in as little as ten minutes.

As you use Cisco Modeling Labs to virtualize more and more network testing processes, topologies can grow quite large and complex. This can strain host resources such as memory and CPU. So after the nodes start, the Cisco Modeling Labs engine uses Linux Kernel same-page merging, or KSM, to optimize the lab memory footprint. KSM essentially allows Cisco Modeling Labs to deduplicate the common memory blocks that each virtual node’s OS uses. The result? More free memory for labs.
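If you have shell access to the underlying Linux host, you can watch KSM doing its work through the counters the kernel exposes under /sys/kernel/mm/ksm. The snippet below is a small illustrative sketch, not part of the CML tooling, and it assumes a 4 KB page size.

# A small sketch for inspecting Kernel same-page merging on a Linux host
# (for example the CML server, assuming shell access). The sysfs files below
# are standard kernel interfaces, not CML-specific.
from pathlib import Path

KSM_DIR = Path("/sys/kernel/mm/ksm")

def ksm_stats():
    """Read a few of the KSM counters the kernel exposes."""
    names = ["run", "pages_shared", "pages_sharing", "pages_unshared", "full_scans"]
    return {name: int((KSM_DIR / name).read_text()) for name in names}

stats = ksm_stats()
page_size = 4096  # assumed x86 page size; adjust if the host differs
saved_mb = stats["pages_sharing"] * page_size / (1024 * 1024)
print(f"KSM enabled: {bool(stats['run'])}")
print(f"Approximate memory deduplicated by KSM: {saved_mb:.1f} MB")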

API First

The HTML5 UI only scratches the surface of what’s new. Cisco Modeling Labs – Enterprise v2.0 is an “API first” application. Every operation performed in the UI – adding labs, adding nodes, positioning nodes on a topology canvas, creating links, starting up a simulation, and so forth – is powered by a rich RESTful API. With this API, you can tie Cisco Modeling Labs into network automation workflows such as Infrastructure as Code pipelines, so you can test network configuration changes before deploying them in production.

CML API In Action

To make it even easier to integrate Cisco Modeling Labs – Enterprise v2.0 into your NetDevOps toolchains, the software includes a Python client library to handle many of the lower-level tasks transparently, allowing you to focus on the fun bits of putting network simulation right into your workflows. For example, the client library already drives an Ansible module to automate lab creation and operation.
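As a flavor of what driving CML from Python looks like, here is a minimal sketch using the virl2_client package that accompanies CML 2.x. The server address, credentials, and node definition names are placeholders, and exact method names may differ slightly between client library releases, so treat this as an outline rather than a copy-paste recipe.

# A minimal sketch with the CML 2.x Python client library (virl2_client).
# Host, credentials, and node-definition names are placeholders; exact method
# names may vary by client library release.
from virl2_client import ClientLibrary

client = ClientLibrary("https://cml-server.example.com", "admin", "password",
                       ssl_verify=False)

lab = client.create_lab("api-demo")              # same action as "Add Lab" in the UI
r1 = lab.create_node("r1", "iosv", x=0, y=0)     # node definitions must exist on the server
r2 = lab.create_node("r2", "iosv", x=200, y=0)
lab.connect_two_nodes(r1, r2)                    # creates interfaces and a link
lab.start()                                      # boots the whole simulation
print(lab.state())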

The CML Python Client Library

Flexible Network and Service Integration


Sometimes your virtual lab needs to talk to physical devices in the “real” world. Cisco Modeling Labs – Enterprise v2.0 makes it simple to connect virtual topologies with external networks in either a layer 3 network address translation (NAT) mode or a layer 2 bridged mode. In bridged mode, the connect node shares the Virtual Network Interface Card (vNIC) of the Cisco Modeling Labs VM, so nodes can participate in routing protocols like OSPF and EIGRP, and in multicast groups, with physical network elements and hosts. This lets you integrate external services and tools with your virtual labs. For example, an external network management application can monitor or configure your virtual nodes.

But you can also clone some of these services directly into your virtual labs. Cisco Modeling Labs – Enterprise v2.0 includes images for Ubuntu Linux, CoreOS, and an Alpine Linux desktop. With these, you can run network services, spin up Docker containers, and drive graphical UIs directly from Cisco Modeling Labs. Don’t want to use the web interface to access consoles and Virtual Network Computing (VNC)? Cisco Modeling Labs includes a “breakout tool” that maps ports on your local client to nodes within a lab. So you can use whatever terminal emulator or VNC client you want to connect to your nodes’ consoles and virtual monitors.

Sunday, 5 January 2020

Next Generation Data Center Design With MDS 9700 – Part III

This week was exciting; I had the opportunity to sit at a round table with some of Cisco’s largest customers for an open-ended architecture discussion and their take on the past, present, and future. More on that some other time; let’s pick up the last critical aspect of High Performance Data Center design, namely flexibility. Customers need flexibility to adapt to changing requirements over time as well as to support the diverse requirements of their users. Flexibility is not just about protocol, although protocol is a very important aspect; it is also about making sure customers have the choice to design, grow, and adapt their DC according to their needs. As an example, if customers want to utilize the time-to-market advantage and ubiquity of Ethernet, they can do so by adopting FCoE.


Moreover, flexibility has to be complemented by seamless integration, where customers can not only mix and match architectures/protocols/speeds but also evolve from one to the other over time with minimal disruption and without forklift upgrades. More than a decade of investment protection on Cisco director switches allows customers to move to higher speeds or adopt new protocols using the existing chassis and fabric cards. Finally, any solution should allow scalability over time with minimal disruption and a common management model. As an example, on the MDS 9710 or MDS 9706 customers can choose to use 2/4/8G FC, 4/8/16G FC, 10G FC, or 10G FCoE at each hop.


Let’s review each aspect of flexibility at a time.

Architecture:

The Cisco SAN product family is designed to support architectural flexibility, from the smallest to the largest customers and everything in between. Customers can grow from 12 to 48 16G ports on a single MDS 9148S. They can grow from 48 16G line-rate ports to 192 on the MDS 9706, and up to 384 line-rate ports on the MDS 9710. Finally, seamless FC and FCoE capability allows customers to use these directors as edge or core switches. With industry-leading scalability numbers, customers can scale up or scale out as their needs dictate. Two examples show how customers can use a director-class switch (9513, 9506, 9710, or 9706) based architecture for End of Row designs. Similarly, customers can orchestrate Top of Rack designs using the Nexus fixed family or the MDS 9148S.


If they want to continue with FC for the foreseeable future, or have a sizable FC infrastructure that they want to leverage (with the option to go to FCoE later), then MDS serves their needs. Similarly, they can support edge-core designs, edge-core-edge designs, or even collapsed cores if so desired.


If customers need a converged switch, then the Nexus 2K, 5K, and 6K provide the flexibility and the ability to collapse two networks and simplify management, as shown in the picture below.


Speeds

Customers can mix and match FC speeds (2G/4G/8G, 4G/8G/16G) on the latest MDS 9148S and the MDS 9700 product family. With all the major optics supported, customers can pick and choose optics from the shortest distances to long-distance CWDM and DWDM solutions, in addition to SW, LW, and ER optics choices. In addition, the MDS 9700 supports 10GE optics running 10G FC traffic for ease of implementing 10G DWDM solutions based on ubiquitous 10GE circuits.

Protocol

FC is the dominant protocol within the DC, but at the same time a lot of customers are adopting FCoE to improve ROI, simplify the network, or simply to gain higher speeds and agility. Irrespective of the needs and timeline, the MDS solution allows customers to adopt FCoE today or down the road, without forklift upgrades on the existing MDS 9700 platforms and while leveraging the existing FC install base.


The diagram above shows how customers can collapse LAN and SAN networks on the edge into one network. The advantages of FEX include reduced TCO and simplified operations (the parent switch provides a single point of management and policy enforcement, and plug-and-play management includes auto-configuration).

Another example of making the transition less disruptive for customers is Cisco’s support for BiDi optics on the Nexus product family. This allows customers to use the same OM2, OM3, and OM4 fabrics for 40G FCoE connectivity without having to rip and replace the cabling plant.


Customers who are not ready to converge their networks, but want to achieve faster time to market, higher performance, and Ethernet scale economies, can keep separate LAN and SAN networks and use FCoE for the dedicated SAN.


Coupled with the broad Cisco product portfolio, this means that customers have maximum flexibility to tune the architecture precisely to their needs. The Cisco product portfolio is tightly integrated: all the SAN switches use the same NX-OS, and DCNM provides seamless manageability across LAN, SAN, and converged infrastructure, up to the Fabric Interconnects on UCS.


From the last three blogs, let’s quickly capture the unique characteristics of the MDS 9700 that allow for High Performance Scalable Data Center design.

◉ Performance

24 Tbps switching capacity and line-rate 16G FC ports, with no oversubscription, local switching, or per-port bandwidth allocation.

◉ Reliability

Redundancy for every critical component in the chassis, including the fabric cards. Data resiliency with CRC checks and Forward Error Correction. Multiple levels of CRC checks and smaller failure domains.

Friday, 3 January 2020

Next Generation Data Center Design With MDS 9710 – Part II


EMC World was wonderful. It was gratifying to meet industry professionals, listen in on great presentations, and watch the demos of key business-enabling technologies that Cisco, EMC, and others have brought to fruition. It’s fascinating to see the transition of the DC from cost center to strategic business driver. The same story repeated itself at Cisco Live: more than 25,000 attendees, hundreds of demos and sessions, lots of interesting customer meetings, and MDS continues to resonate. We are excited about the MDS hardware that was on display on the show floor, the interesting multiprotocol demo, and a lot of interesting SAN sessions.

Outside of these events, we recently did a webinar on how the Cisco MDS 9710 is enabling High Performance DC design, with customer case studies.

So let’s continue our discussion. There is no doubt that when it comes to High Performance SAN switches, there is no comparison to the Cisco MDS 9710. Another component that is paramount to a good data center design is high availability. Massive virtualization, DC consolidation, and the ability to deploy more and more applications on powerful multi-core CPUs have increased the risk profile within the DC. These DC trends require a renewed focus on availability. MDS 9710 is leading the innovation there again. Hardware design and architecture have to guarantee high availability. At the same time, it’s not just about hardware; it’s a holistic approach with hardware, software, management, and the right architecture. Let me give you just a few examples of the first three pillars for high reliability and availability.

MDS 9710 is the only director in the industry that provides hardware redundancy on all critical components of the switch, including fabric cards. Cisco director switches provide not only CRC checks but also the ability to drop corrupted frames. Without that ability, the network infrastructure exposes the end devices to corrupted frames. Having the ability to drop CRC-errored frames and quickly isolate failing links outside as well as inside the director provides data integrity and fault resiliency. VSANs allow fault isolation, Port Channels provide smaller failure domains, and DCNM provides a rich feature set for higher availability and redundancy. All of these are but a subset of examples that provide high resiliency and reliability.

We are proud of the 9500 family and the strong foundation for reliability and availability that we stand on. We have taken that to a completely new level with the 9710. For any design within the data center, high availability has to go hand in hand with consistent performance. One without the other doesn’t make sense. The right design and architecture within the DC are as important as the components that power the connectivity. As an example, Cisco recommends that customers distribute the ISL ports of a Port Channel across multiple line cards and multiple ASICs. This spreads the failure domain such that any ASIC or even line card failure will not impact the port channel connectivity between switches, with no need to reinitiate all the host logins. As part of writing this white paper, ESG tested the fabric card redundancy in addition to other features of the platform. Remember that a chain is only as strong as its weakest link.

The most important aspect of all of this is for customers to be educated.

Ask the right questions. Have in-depth discussions to achieve higher availability and consistent performance. Most importantly, selecting the right equipment, the right architecture, and best practices means no surprises.

We will continue our discussion with the flexibility aspect of the MDS 9710.

Thursday, 2 January 2020

Next Generation Data Center Design With MDS 9710 – Part I


Data centers are undergoing a major transition to meet higher performance, scalability, and resiliency requirements with fewer resources, smaller footprint, and simplified designs. These rigorous requirements coupled with major data center trends, such as virtualization, data center consolidation and data growth, are putting a tremendous amount of strain on the existing infrastructure and adding complexity. MDS 9710 is designed to surpass these requirements without a forklift upgrade for the decade ahead.

MDS 9700 provides unprecedented

◉ Performance – 24 Tbps Switching capacity

◉ Reliability – Redundancy for every critical component in the chassis including Fabric Card

◉ Flexibility – Speed, Protocol, DC Architecture

In addition to these unique capabilities MDS 9710 provides the rich feature set and investment protection to customers.

In this series of blogs, I plan to focus on the design requirements of the next-generation DC with the MDS 9710. We will review one aspect of the DC design requirements in each post. Let us look at performance today. A lot of customers ask how the MDS 9710 delivers the highest performance today. The performance that an application delivers depends on:

◉ Throughput

◉ Latency

◉ Consistency

Switching infrastructure should provide line-rate, non-blocking, high-speed throughput to effectively power applications like VDI, High Performance Computing, High Frequency Trading, and Big Data, among others. Crutches like local switching, per-port bandwidth allocation, and oversubscription result in inflexible and complex designs that break down every few years, resulting in forklift upgrades or running the DC at sub-par performance levels.

Applications need both high throughput and consistent latency. The switching latency is usually orders of magnitude less than that of the rest of the components in the data path. Thus the performance that applications can deliver is based on the end-to-end latency of the data path.

For both throughput and latency, the most important factor that is often overlooked is consistency. Throughput and low latency should be consistent and independent of switching traffic profiles, network connectivity, and traffic load.

MDS 9700 allows for high performance DC design with

◉ 3X the performance of any director class switch

◉ Line Rate, Non Blocking Performance without limitations

◉ Consistent throughput and latency

Key Cisco innovations like the Central Arbiter, Crossbar, and Virtual Output Queues enable consistently low latency and high throughput independent of the traffic profile or load on the chassis. And performance without high availability or data reliability is not enough.

Wednesday, 1 January 2020

MDS 9700 Scale Out and Scale Up

This is the final part on High Performance Data Center design. We will look at how high performance, high availability, and flexibility allow customers to scale up or scale out over time without any disruption to the existing infrastructure. MDS 9710 capabilities are field proven, with wide adoption and a steep ramp within the first year of introduction. Furthermore, Cisco has not only established itself as a strong player in the SAN space with many industry-first innovations like VSAN, IVR, FCoE, and Unified Ports introduced over the last 12 years, but also has the leading market share in SAN.

Before we look at some architecture examples, let’s start with the basic tenets any director-class switch should support when it comes to scalability and supporting future customer needs:

◉ The design should be flexible enough to Scale Up (increase performance) or Scale Out (add more ports)

◉ The process should not be disruptive to the current installation in terms of cabling, performance impact, or downtime

◉ Design principles like oversubscription ratio, latency, and throughput predictability (as an example, from host edge to core) shouldn’t be compromised at the port level or the fabric level

Let’s take a scale-out example, where a customer wants to add more 16G ports down the road. For this example I have used a core-edge design with 4 edge MDS 9710s and 2 core MDS 9710s. There are 768 hosts at 8Gbps and 640 hosts running at 16Gbps connected to the 4 edge MDS 9710s, with a total of 16 Tbps of connectivity. With an 8:1 oversubscription ratio from edge to core, the design requires 2 Tbps of edge-to-core connectivity. The 2 core systems are connected to the edge and to targets using 128 target ports running at 16Gbps in each direction. The picture below shows the connectivity.
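The arithmetic behind that sizing is simple enough to sanity-check yourself; the short helper below (a back-of-the-envelope sketch, not a Cisco tool) reproduces the numbers in this example.

# Back-of-the-envelope helper that reproduces the oversubscription arithmetic
# used in this scale-out example. Not a Cisco tool.
def edge_to_core_bandwidth(host_ports, oversubscription):
    """host_ports maps port speed in Gbps to port count; returns required Gbps toward the core."""
    edge_gbps = sum(speed * count for speed, count in host_ports.items())
    return edge_gbps / oversubscription

edge = {8: 768, 16: 640}                                        # 768 hosts at 8G, 640 at 16G
required = edge_to_core_bandwidth(edge, oversubscription=8)
edge_tbps = sum(s * c for s, c in edge.items()) / 1000
print(f"Edge bandwidth: {edge_tbps:.1f} Tbps")                  # ~16.4 Tbps
print(f"Edge-to-core at 8:1: {required / 1000:.1f} Tbps")       # ~2.0 Tbps
print(f"16G links needed per direction: {required / 16:.0f}")   # 128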


Down the road, the data center requires 188 more ports running at 16G. These 188 ports are added to a new edge director (or to open slots in the existing directors), which is then connected to the core switches with 24 additional edge-to-core connections. This is repeated with 24 additional 16G target ports. The fact that this scale-out is not disruptive to the existing infrastructure is extremely important. In any of the scale-out or scale-up cases there is minimal impact, if any, on the existing chassis layout, data path, cabling, throughput, or latency. As an example, if a customer doesn’t want to string additional cables between the core and edge directors, they can upgrade to higher-speed cards (32G FC or 40G FCoE with BiDi) and get double the bandwidth on the existing cable plant.


Let’s look at another example, where a customer wants to scale up (i.e., increase the performance of the connections). Let’s use an edge-core-edge design for this example. There are 6144 hosts running at 8Gbps distributed over 10 edge MDS 9710s, resulting in a total of 49 Tbps of edge bandwidth. Let’s assume that this data center is using an oversubscription ratio of 16:1 from the edge into the core. To satisfy that requirement, the administrator designed the DC with 2 core switches and 192 16G ports, providing roughly 3 Tbps into the core. Let’s assume that at initial design the customer connected 768 storage ports running at 8G.


A few years down the road, the customer may want to add an additional 6,144 8G ports and keep the same oversubscription ratios. This has to be implemented in a non-disruptive manner, without any performance degradation on the existing infrastructure (either in throughput or in latency), and without any constraints regarding protocol, optics, and connectivity. In this scenario the host edge bandwidth doubles to roughly 98 Tbps, and the required edge-to-core bandwidth grows to about 6 Tbps. The data center admin has multiple options for addressing this increase in core bandwidth to 6 Tbps. The admin can choose to add more 16G ports (192 more ports to be precise), or preserve the cabling and use 32G connectivity for the host-edge-to-core and core-to-target-edge connectivity on the same chassis. The admin can just as easily use 40G FCoE at that time to meet the bandwidth needs in the core of the network without any forklift upgrade.


Or, on the other hand, the customer may want to upgrade the hosts to 16G connectivity and follow the same oversubscription ratios. With 16G connectivity the host edge bandwidth increases to roughly 98 Tbps, and the data center administrator has the same flexibility regarding protocol, cabling, and speeds.
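Plugging the numbers into the same helper from the earlier sketch shows where the 6 Tbps and 192-port figures come from.

# Reusing the edge_to_core_bandwidth() helper from the earlier sketch:
# 6,144 host ports moving from 8G to 16G with a 16:1 oversubscription ratio.
before = edge_to_core_bandwidth({8: 6144}, oversubscription=16)    # ~3.1 Tbps into the core
after = edge_to_core_bandwidth({16: 6144}, oversubscription=16)    # ~6.1 Tbps into the core
print(f"Core bandwidth: {before / 1000:.1f} Tbps before, {after / 1000:.1f} Tbps after")
print(f"Additional 16G core ports needed: {(after - before) / 16:.0f}")  # 192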


For either option the disruption is minimal. In real life there will be a mix of requirements on the same fabric, some scale-out and some scale-up. In those circumstances data center admins have the same flexibility and options. With a chassis life of more than a decade, customers can upgrade to higher speeds when they need to, without disruption and with maximum flexibility. The figure below shows how easily customers can scale up or scale out.


As these examples show, the Cisco MDS solution provides the ability for customers to scale up or scale out in a flexible, non-disruptive way.

Wednesday, 18 July 2018

Why Cisco SD-Branch is better than a ‘white box’

A typical branch office IT installation consists of multiple point products, each having a specific function, engineered into a rigid topology. Changing something in that chain, be it adding a new function or connection, increasing bandwidth, or introducing encryption, affects multiple separate products. This introduces risk, increases time to test, and increases roll-out time. If any piece of equipment requires a physical change, the time and personnel costs multiply. And that additional roll-out time may delay your business objectives and cause productivity and innovation to suffer.

To help solve these challenges, enterprises and service providers are redesigning the branch WAN network to consolidate network services from several dedicated hardware appliance types into virtualized, on-demand applications running on software at the branch office with centralized orchestration and management – the Software-Defined Branch (SD-Branch).

Choosing the right hardware platform for these applications to run on is important when deploying an SD-Branch. Customers have several choices, ranging from commercial off-the-shelf PCs or larger servers – aka ‘white boxes’ – to purpose-built SD-Branch platforms, or even a blade/module inserted into an existing router to add SD-Branch services, all of which can run x86-based applications. These white boxes may not be the best choice for running or managing networking services: they are mostly a collection of disparate applications loaded onto a device that may not have been built for a branch office environment and that lacks sufficient resources for running network services for the branch, and it is difficult to integrate and manage all the elements as a whole, including the hardware platform, network services, and applications.

The Cisco SD-Branch solution takes multiple functions that previously existed as discrete hardware appliances and instead deploys them as virtual network functions (VNFs) hosted on an x86-based compute platform. The Cisco SD-Branch delivers physical consolidation, saving space and power with fewer points of failure, and substantially improves IT agility with on-demand services and centralized orchestration and management. Changes can be made quickly, automated, and delivered without truck rolls in minutes, for what used to take weeks or months.

Hardware Hosting Platform – Pros and Cons


So how does one assemble a functioning and manageable deployment using white box hardware and various software frameworks without ending up with a wobbly stack of uncertainty? First, the hardware matters. An SD-Branch hardware platform can be any x86-based server, a server blade that runs inside your existing routing platform, or a purpose-built platform that provides options for specialized WAN interfaces (T1, xDSL, Serial, etc.) and 4G/LTE access. It should be built for the enterprise office environment: form factor, acoustics, multi-core capability, and WAN/LAN ports with the option to support PoE. Additionally, data encryption has become a mandatory requirement for providing data privacy and security.

Also, when selecting a platform for the SD-Branch it is important to ensure that performance will scale for the required VNFs and services and that the platform is built with enterprise-class components. Second, having an Operating System (OS) or hypervisor that can meet the needs for security, manageability, and orchestration is imperative. For a ‘white box’ solution this can be difficult and can only be achieved through close collaboration between the OS vendor, the hardware vendor, the CPU manufacturer, and the application vendors, and it can be problematic since none of these components has likely been purpose-built or tested for your networking applications.

In terms of physical interfaces, white boxes typically do not offer features such as Power over Ethernet (PoE). PoE is highly attractive because many IoT sensors rely on it. In addition, branches often also require WAN interfaces such as 4G LTE, essential not only for backup or load sharing, but also as a transport option for SD-WAN architectures. Some locations may require legacy TDM links too, so it is important to deploy platforms that have the flexibility to support more than simple Ethernet.


Figure 1 – Table of Pros and Cons

Deploy Cisco SD-Branch Platforms with Confidence


Cisco has developed purpose-built hardware platforms for the SD-Branch running an OS and hypervisor (NFVIS) that is custom-built for networking services and avoids the pitfalls of a generic x86-based server or “white box” solution. The NFVIS implementation is designed for high levels of uptime by adopting a hardened Linux kernel and embedding drivers and low-level accelerations that can take advantage of modern CPU features such as Single-Root Input/Output Virtualization (SR-IOV) for plumbing high-speed interfaces directly into virtual network functions. Security is also burned in, simplifying day-zero installations with plug-and-play and ensuring that only trusted applications and services will boot up and run inside your network.


Figure 2 – Cisco UCS E module for ISR 4000 Series and ENCS 5000 Series platforms

Features and advantages of ENCS 5000 Series, ISR 4000 Series with UCS E module and NFVIS are:

◈ Designed for Enterprise deployments and targeted for simplification for networking teams
◈ Optimized for the deployment and monitoring of Virtual Network Functions
◈ On-demand services with plug-and-play and zero-touch deployment
◈ Secure and trusted infrastructure software
◈ Security tested and certified

Cisco SD-Branch enables agile, on-demand services and centralized orchestration for integrating new services into existing ones. Enterprises and service providers gain the ability to choose “best of breed” VNFs to implement a particular service. By using SD-Branch, you can spawn virtual devices to scale to new feature requirements. For example, deploy the Cisco ENCS 5000 Series as a single platform and virtualize all of your SD-Branch services, or, with your existing ISR branch router, insert a server blade and spin up an SD-Branch element that provides additional security functionality or runs multiple VNFs, service-chained together for routing, security, WAN optimization, unified communications, and so on. Similarly, SD-WAN can be deployed as an integral part of the routing VNF with a centrally automated and orchestrated management system.

Cisco’s Digital Network Architecture (DNA) provides the proven and trusted SD-Branch hardware, software, and management building blocks to achieve the simplicity and flexibility required by CIOs and IT managers in today’s digital business landscape – here is a whitepaper that dives deeper into this design guidance.

Trusted Cisco Network Services


The Cisco SD-Branch solution offers an open environment for the virtualization of both network functions and applications in the enterprise branch. Both Cisco and third-party VNFs can be on-boarded onto the solution.  Applications running in a Linux or Windows environment can also be instantiated on top of NFVIS and can be supported by DNA Center and the DNA Controller.

Some network functions that Cisco offers in a virtual form factor include:

◈ Cisco Integrated Services Virtual Router (ISRv) for virtual routing
◈ Cisco vEdge Router (vEdge) for virtual SD-WAN routing
◈ Cisco Adaptive Security Virtual Appliance (ASAv) for a virtual firewall
◈ Cisco Firepower™ Next-Generation Firewall Virtual (NGFWv) for integrated firewall and intrusion detection and prevention (IPS and IDS)
◈ Cisco Virtual Wide Area Application Services (vWAAS) for virtualized WAN optimization
◈ Cisco Virtual Wireless Controller (vWLC) for a virtualized wireless LAN controller

Third Party Open Ecosystem


Cisco’s open ecosystem approach for the SD-Branch allows other vendors to submit their VNFs for certification to help ensure compatibility and interoperability with the Cisco SD-Branch infrastructure. As a customer deploying Cisco’s SD-Branch solution with certified VNFs, you can be confident that the solution will successfully deploy, run, and interoperate with Cisco’s own suite of VNFs.

Some currently certified vendors and VNFs include:

◈ ThousandEyes – network intelligence platform
◈ Fortinet – FortiGate next generation firewall
◈ Palo Alto Networks – Next generation firewall
◈ Citrix Netscaler VPX – Application delivery controller (ADC)
◈ InfoVista Ipanema – SDWAN
◈ Ctera – Enterprise NAS/file services

Many more third party VNFs are now under test for certification.