
Saturday, 23 December 2023

Cisco and Nutanix Team Up in Response to Customer Demand: Another Win for Customer-Centric Innovation

In the ever-evolving landscape of IT, organizations continually seek solutions that simplify complexity, break down silos, and enhance agility. At Cisco, we’re continually tuned into the demands and requirements of our customer base, and it’s this laser focus that has led to our most recent collaborative venture. We are thrilled to announce our new integration with Nutanix, a leader in enterprise cloud computing solutions.

Listening to You: Our Driving Force


Time and time again, our commitment to delivering top-notch, efficient solutions is fueled by the needs and feedback of our customers. You spoke, and we listened. The partnership with Nutanix is a direct reflection of this two-way dialogue, a testament to our commitment to not just hear, but actively listen and respond to what you are saying.

Bridging the Gap with ACI VMM Integration


One of the key facets of this collaboration is the integration of Cisco’s Application Centric Infrastructure (ACI) Virtual Machine Manager (VMM) with Nutanix. This marriage of technologies effectively bridges domain silos between the network and server teams. Network configurations and server deployments, historically segmented tasks, can now be coordinated more efficiently, fostering a more agile and responsive infrastructure. This integration is designed to simplify operational complexities, promoting a more streamlined and efficient operational workflow.


Cisco ACI: Beyond Traditional Networking


Before we jump into the integration, let’s re-familiarize ourselves with Cisco ACI:

◉ APIC (Application Policy Infrastructure Controller): It’s not just a management tool; think of it as the brain behind the orchestration of network policies.
◉ Spine and Leaf Architecture: This ensures a swift and efficient flow of data, connecting all aspects of the data center seamlessly.
◉ Policies: The linchpin of ACI, these pre-defined functionalities ensure the network is adaptive and responsive to specific needs.

Why Nutanix?


Nutanix is a frontrunner when it comes to hyperconverged infrastructure, bringing together compute, storage, and virtualization under one roof. Their solution, which focuses on simplicity and scalability, offers an ideal playground for ACI’s capabilities. Integrating with Nutanix’s VMM functions ensures that ACI’s policy-driven approach aligns perfectly with the agility and dynamism of virtualized workloads.


The Power of Integration


Holistic Visibility: ACI’s already granular insight extends into Nutanix environments. Network administrators can track activities from the physical infrastructure up to individual VMs in the Nutanix cluster.

Elastic Networking: As virtual machines and workloads shift within the Nutanix ecosystem, ACI adapts, ensuring network policies remain consistent and effective.

Enhanced Security Posture: ACI’s renowned micro-segmentation, when combined with Nutanix’s security features, offers a formidable defense against malicious activities and breaches.

Unified Management: APIC interfaces directly with Nutanix's Prism management, consolidating the administrative experience and simplifying operations.

Getting Started with ACI and Nutanix


Integration at a glance:

  • Kickstart with a robust ACI environment and an operational Nutanix cluster.
  • Through APIC, navigate to VM Networking, and add a VMM domain specific to Nutanix.
  • Detail out the Nutanix cluster specifications and correlate with ACI’s bridge domain.
  • Watch as ACI seamlessly integrates its policies with Nutanix, creating a cohesive networking environment.
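For teams that drive APIC through its REST API rather than the GUI, the steps above could be scripted along the following lines. This is a minimal, hypothetical sketch in Python: the login endpoint is standard APIC, but the VMM domain class, the Nutanix vendor path, and the controller attributes shown are assumptions for illustration and should be checked against the APIC object model and the Cisco/Nutanix integration documentation.

import requests

APIC = "https://apic.example.com"      # assumed APIC address
USER, PASSWORD = "admin", "password"   # use real credentials or a vault in practice

session = requests.Session()
session.verify = False  # lab convenience only; use proper certificates in production

# 1. Authenticate; the session cookie then carries the token for later calls.
login = {"aaaUser": {"attributes": {"name": USER, "pwd": PASSWORD}}}
session.post(f"{APIC}/api/aaaLogin.json", json=login).raise_for_status()

# 2. Create a VMM domain for the Nutanix cluster. "vmmDomP" is the generic ACI
#    VMM domain object; the Nutanix vendor path and the controller attributes
#    below are illustrative assumptions, not verified values.
vmm_domain = {
    "vmmDomP": {
        "attributes": {"name": "nutanix-cluster-1"},
        "children": [
            {"vmmCtrlrP": {"attributes": {"name": "prism-central",
                                          "hostOrIp": "prism.example.com"}}}
        ],
    }
}
resp = session.post(
    f"{APIC}/api/node/mo/uni/vmmp-Nutanix/dom-nutanix-cluster-1.json",
    json=vmm_domain,
)
print(resp.status_code, resp.text)

From there, associating bridge domains and EPGs with the new domain follows the usual ACI policy workflow described above.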

Joint Commitment to Customer Success


Both Cisco and Nutanix are firmly committed to jointly supporting our customers. Our shared goal is to deliver the best infrastructure automation experience possible. By harmonizing the strengths of ACI’s policy-driven architecture with Nutanix’s prowess in hyperconverged infrastructure, we aim to offer a solution that epitomizes efficiency, simplicity, and most importantly, customer satisfaction.

In Conclusion

The integration of Cisco’s ACI with Nutanix marks a pivotal moment in data center networking. It signifies a future where the physical and virtual, the network and the application, are in perfect harmony. For enterprises looking for agility, security, and simplicity, this integration opens up new vistas of possibilities.

Source: cisco.com

Tuesday, 1 November 2022

Introducing Cisco Cloud Network Controller on Google Cloud Platform – Part 1

This year has been quite significant for Cisco's multicloud networking software evolution. Earlier in the year, along with other exciting software feature announcements, Cisco introduced Google Cloud Platform (GCP) support for Cisco Cloud Network Controller (CNC), formerly known as Cisco Cloud APIC. This blog series introduces the GCP support capabilities, subdivided into three parts:

Part 1: Native Cloud Networking Automation
Part 2: Contract-based Routing and Firewall Rules Automation
Part 3: External Cloud Connectivity Automation

The Need for Multicloud Networking Software


While organizations have become increasingly mature in their "to the cloud" strategies, the focus has lately shifted to "in the cloud" networking, as also observed by Gartner in its first Market Guide for Cloud Networking Software and subsequent releases. This series will show how a cloud-like policy model can help address "in the cloud" challenges, with the aim of continually improving operations in public cloud environments and augmenting native cloud networking capabilities as needed.

High Level Architecture


Google Cloud resources are organized hierarchically, and the Project level is the most relevant from the Cisco CNC perspective, as a tenant is mapped one-to-one to a GCP project. Cisco CNC is deployed from the Google Cloud Marketplace into a dedicated infra VPC (Virtual Private Cloud) contained within a project mapped to the infra tenant, while user VPCs are provisioned in dedicated or shared projects associated with their own tenants within the Cisco CNC.

The Cisco CNC architecture on GCP is similar to that of AWS and Azure, as it also supports BGP IPv4 or BGP EVPN to on-premises or other cloud sites using the Cisco Cloud Router (CCR) based on the Cisco Catalyst 8000v. It also supports the native GCP Cloud Router with Cloud VPN gateway for external connectivity. For internal cloud connectivity, it leverages VPC Network Peering between user VPCs within the same region or across regions, as illustrated in the diagram below.


Native Cloud Networking Automation


Before proceeding, here is a brief overview of the Cisco CNC GUI. The left side contains the navigation pane, which can be expanded to visualize cloud resources and configuration. Configuration is done under the Application Management tab or, alternatively, through the blue intent icon at the top right, which provides quick access to the various configuration options.


To demonstrate how Cisco CNC automates inter-region routing across VPCs, let's build a simple scenario with two VPCs in different regions contained within the same user-tenant project, called engineering. Note that the same scenario could exist with these two VPCs in the same region, as VPC networks in GCP are global resources not associated with any region, unlike subnets, which are regional resources.

Provisioning VPC Networks and Regional Subnets

The first step is to create a Tenant and map it to a GCP Project as depicted below. The access type is set to Managed Identity, which allows Cisco CNC to make changes to user-tenant projects by means of a pre-provisioned service account during the initial deployment.


The configuration below illustrates the creation of two Cloud Context Profiles, each of which serves as the mapping tool for a VPC. A Cloud Context Profile is contained within a Tenant and provides the region association that determines which region(s) a VPC is deployed to, along with its regional subnets. Additionally, a Cloud Context Profile is always associated with a logical VRF.

Profile for vpc-1

Profile for vpc-2

By creating these two profiles and mapping them to VPCs in different regions, each with its respective CIDR and subnet(s), Cisco CNC translates them into native constructs visible in the Google Cloud console under VPC networks, as seen below. Note that the VRF name defines the name of the VPC, in this example network-a and network-b.
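For readers who prefer automation, the same intent can be pushed through the Cloud Network Controller's REST API, which follows the familiar ACI object model. The sketch below is illustrative only: the login endpoint mirrors APIC, while the tenant and cloud context profile payloads use class names from the ACI cloud model (fvTenant, cloudCtxProfile, cloudCidr, cloudSubnet) with placeholder attribute values; confirm the exact attributes, including the GCP project mapping, against the CNC API reference.

import requests

CNC = "https://cnc.example.com"  # assumed Cloud Network Controller address

session = requests.Session()
session.verify = False  # lab convenience only

login = {"aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}}
session.post(f"{CNC}/api/aaaLogin.json", json=login).raise_for_status()

# Tenant "engineering"; the GCP project association (Managed Identity access in
# this example) is configured separately and omitted here for brevity.
tenant = {"fvTenant": {"attributes": {"name": "engineering"}}}
session.post(f"{CNC}/api/node/mo/uni/tn-engineering.json", json=tenant)

# Cloud context profile for vpc-1: tied to VRF "network-a", with an example
# CIDR and one regional subnet. Attribute names are assumptions.
ctx_profile = {
    "cloudCtxProfile": {
        "attributes": {"name": "vpc-1"},
        "children": [
            {"cloudRsToCtx": {"attributes": {"tnFvCtxName": "network-a"}}},
            {"cloudCidr": {"attributes": {"addr": "10.1.0.0/16", "primary": "yes"},
                           "children": [
                               {"cloudSubnet": {"attributes": {"ip": "10.1.1.0/24"}}}
                           ]}},
        ],
    }
}
session.post(f"{CNC}/api/node/mo/uni/tn-engineering/ctxprofile-vpc-1.json",
             json=ctx_profile)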


The Cisco CNC GUI provides the same level of visibility under Application Management, where additional VPCs can be created, or under Cloud Resources.


Route Leaking Between VPCs

For this scenario, a route leak policy is configured to allow inter-VRF routing, which is done independently of the contract-based routing and security policies to be reviewed in Part 2 of this blog series. As seen previously, the VRF association with a particular VPC is made within the Cloud Context Profile.


While the “Add Reverse Leak Route” option is not depicted for brevity, it is also enabled to allow for bi-directional connectivity. In this scenario, since it is only inter-VPC route leaking, VRFs are labeled as internal and all routes are leaked.


In the GCP console, Cisco CNC automates VPC Network Peering between network-a and network-b, with the proper imported and exported routes.
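Outside of the console, one way to confirm the peering that CNC automated is to query the network programmatically. Below is a small sketch using the google-cloud-compute Python client; the project ID is a placeholder and the network names come from this example.

from google.cloud import compute_v1

project_id = "engineering-project-id"  # placeholder GCP project ID

# Requires `pip install google-cloud-compute` and application default credentials.
client = compute_v1.NetworksClient()
network = client.get(project=project_id, network="network-a")

for peering in network.peerings:
    # Each entry shows the peer network URL (network-b here) and the peering state.
    print(peering.name, peering.network, peering.state)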


Peering routes are auto-generated for both VPCs, along with the default routes created during VPC setup.


Monday, 16 March 2020

Setting a simple standard: Using MQTT at the edge


I shared examples of how organizations can benefit from edge computing – from enabling autonomous vehicles in transportation and preventive maintenance in manufacturing to streamlining compliance for utilities. I also recently shared examples on where the edge really is in edge computing. For operational leaders, edge compute use cases offer compelling business advantages. For IT leaders, such use cases require reliable protocols for enabling processing and transfer of data between applications and a host of IoT sensors and other devices. In this post, I’d like to explore MQ Telemetry Transport (MQTT) and why it has emerged as the best protocol for IoT communications in edge computing.

What is MQTT?


MQTT is the dominant standard used in IoT communications. It allows assets and sensors to publish data; for example, a weather sensor can publish the current temperature, wind metrics, and so on. MQTT also defines how consumers can receive that data. For example, an application can listen to the published weather information and take local actions, like starting a watering system.

Why is MQTT ideal for edge computing?


There are three primary reasons for using this lightweight, open-source protocol at the edge. Because of its simplicity, MQTT doesn’t require much processing or battery power from devices. With the ability to use very small message headers, MQTT doesn’t demand much bandwidth, either. MQTT also makes it possible to define different quality of service levels for messages – enabling control over how many times messages are sent and what kind of handshakes are required to complete them.

How does MQTT work?


At its core, the MQTT protocol consists of clients and servers that enable many-to-many communication between multiple clients using the following (a minimal Python example follows the list):

◉ Topics provide a way of categorizing the types of messages that may be sent. As one example, if a sensor measures temperature, the topic might be defined as "TEMP" and the sensor sends messages labeled "TEMP."

◉ Publishers include the sensors that are configured to send out messages containing data. In the “TEMP” example, the sensor would be considered the publisher.

◉ In addition to transmitting data, IoT devices can be configured as subscribers that receive data related to pre-defined topics. Devices can subscribe to multiple topics.

◉ The broker is the server at the center of it all, transmitting published messages to servers or clients that have subscribed to specific topics.
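To make those roles concrete, here is a minimal sketch using the paho-mqtt Python client, with a placeholder broker hostname; the "TEMP" topic and QoS setting follow the examples above.

import json
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"  # placeholder broker, e.g. Mosquitto at the edge
TOPIC = "TEMP"

def on_message(client, userdata, msg):
    # Subscriber side: react to each reading, e.g. decide whether to start watering.
    reading = json.loads(msg.payload)
    print(f"received on {msg.topic}: {reading}")

# Subscriber: connect, subscribe with QoS 1, and listen in a background loop.
subscriber = mqtt.Client()
subscriber.on_message = on_message
subscriber.connect(BROKER, 1883)
subscriber.subscribe(TOPIC, qos=1)
subscriber.loop_start()

# Publisher: the weather sensor publishes its current reading to the same topic.
publisher = mqtt.Client()
publisher.connect(BROKER, 1883)
publisher.publish(TOPIC, json.dumps({"temperature_c": 21.5, "wind_kph": 12}), qos=1)
publisher.disconnect()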

Why choose MQTT over other protocols?


HTTP, Advanced Message Queuing Protocol (AMQP) and Constrained Application Protocol (CoAP) are other potential options at the edge. Although I could write extensively on each, for the purposes of this blog, I would like to share some comparative highlights.

A decade ago, HTTP would have seemed the obvious choice. However, it is not well suited to IoT use cases, which are driven by trigger events or statuses. HTTP would need to poll a device continuously to check for those triggers – an approach that is inefficient and requires extra processing and battery power. With MQTT, the subscribed device merely “listens” for the message without the need for continuous polling.

The choice between AMQP and MQTT boils down to the requirements of a specific environment or implementation. AMQP offers greater scalability and flexibility but is more verbose: where MQTT provides simplicity, AMQP requires multiple steps to publish a message to a node. There are some cases where it will make sense to use AMQP at the edge. Even then, however, MQTT will likely be needed for areas demanding a lightweight, low-footprint option.

Finally, like MQTT, CoAP offers a low footprint. But unlike the many-to-many communication of MQTT, CoAP is a one-to-one protocol. What’s more, it’s best suited to a state transfer model – not the event-based model commonly required for IoT edge compute.

These are among the reasons Cisco has adopted MQTT as the standard protocol for one of our imminent product launches. Stay tuned for more information about the product – and the ways it enables effective computing at the IoT edge.

Sunday, 5 January 2020

Next Generation Data Center Design With MDS 9700 – Part III

This week was exciting: I had the opportunity to sit at a round table with some of Cisco's largest customers for an open-ended discussion of architecture and their take on the past, present, and future. More on that some other time; let's pick up the last critical aspect of high-performance data center design, namely flexibility. Customers need flexibility to adapt to changing requirements over time as well as to support the diverse requirements of their users. Flexibility is not just about protocol, although protocol is a very important aspect; it is also about making sure customers have the choice to design, grow, and adapt their DC according to their needs. As an example, customers who want to exploit the time-to-market advantage and ubiquity of Ethernet can do so by adopting FCoE.


Moreover, flexibility has to be complemented by seamless integration, where customers can not only mix and match architectures, protocols, and speeds but also evolve from one to the other over time with minimal disruption and without forklift upgrades. More than a decade of investment protection on Cisco director switches allows customers to move to higher speeds or adopt new protocols using the existing chassis and fabric cards. Finally, any solution should allow scalability over time with minimal disruption and a common management model. As an example, on the MDS 9710 or MDS 9706, customers can choose to use 2/4/8G FC, 4/8/16G FC, 10G FC, or 10G FCoE at each hop.


Let's review each aspect of flexibility one at a time.

Architecture:

The Cisco SAN product family is designed to support architectural flexibility, from the smallest to the largest customers and everything in between. Customers can grow from 12 to 48 16G ports on a single MDS 9148S. They can grow from 48 16G line-rate ports to 192 16G line-rate ports with the MDS 9706 and up to 384 ports on the MDS 9710. Finally, seamless FC and FCoE capability allows customers to use these directors as edge or core switches. With industry-leading scalability numbers, customers can scale up or scale out as per their needs. Two examples show how customers can use a director-class switch (9513, 9506, 9710, or 9706) based architecture for end-of-row designs. Similarly, customers can orchestrate top-of-rack designs using the Nexus fixed family or the MDS 9148S.


If customers want to continue with FC for the foreseeable future, or have a sizable FC infrastructure that they want to leverage (with the option to move to FCoE), then MDS serves their needs. Similarly, they can support edge-core designs, edge-core-edge designs, or even collapsed cores if so desired.


If customers need a converged switch, then the Nexus 2K, 5K, and 6K provide the flexibility to collapse two networks and simplify management, as shown in the picture below.


Speeds

Customers can mix and match FC speeds (2G/4G/8G and 4G/8G/16G) on the latest MDS 9148S and the MDS 9700 product family. With all the major optics supported, customers can pick and choose optics from the shortest distances to long-distance CWDM and DWDM solutions, in addition to SW, LW, and ER optics choices. In addition, the MDS 9700 supports 10GE optics carrying 10G FC traffic, easing the implementation of 10G DWDM solutions based on ubiquitous 10GE circuits.

Protocol

FC is the dominant protocol within the DC, but at the same time a lot of customers are adopting FCoE to improve ROI, simplify the network, or simply to get higher speeds and agility. Irrespective of the needs and timeline, the MDS solution allows customers to adopt FCoE today or down the road, without forklift upgrades on existing MDS 9700 platforms, while leveraging the existing FC install base.


The diagram above shows how customers can collapse LAN and SAN networks at the edge into one network. The advantages of FEX include reduced TCO and simplified operations (the parent switch provides a single point of management and policy enforcement, and plug-and-play management includes auto-configuration).

As another example of making the transition less disruptive for customers, Cisco supports BiDi optics on the Nexus product family. This allows customers to use the same OM2, OM3, and OM4 fiber for 40G FCoE connectivity without having to rip and replace the cabling plant.


Customers who are not ready to converge their networks but want to achieve faster time to market, higher performance, and Ethernet economies of scale can keep separate LAN and SAN networks and use FCoE for the dedicated SAN.


Coupled with the broad Cisco product portfolio, this means customers have maximum flexibility to tune the architecture precisely to their needs. The Cisco portfolio is tightly integrated: all the SAN switches use the same NX-OS, and DCNM provides seamless manageability across LAN, SAN, and converged infrastructure, up to the Fabric Interconnects on UCS.


From the last three blogs, let's quickly capture the unique characteristics of the MDS 9700 that allow for high-performance, scalable data center design.

◉ Performance

24 Tbps switching capacity and line-rate 16G FC ports, with no oversubscription, local switching, or bandwidth allocation.

◉ Reliability

Redundancy for every critical component in the chassis, including the fabric cards. Data resiliency with CRC checks and Forward Error Correction. Multiple levels of CRC checks and smaller failure domains.

Friday, 3 January 2020

Next Generation Data Center Design With MDS 9710 – Part II


EMC World was wonderful. It was gratifying to meet industry professionals, listen in on great presentations, and watch demos of the key business-enabling technologies that Cisco, EMC, and others have brought to fruition. It's fascinating to see the DC transition from cost center to strategic business driver. The same repeated all over again at Cisco Live: more than 25,000 attendees, hundreds of demos and sessions, lots of interesting customer meetings, and MDS continues to resonate. We were excited about the MDS hardware on display on the show floor, the multiprotocol demo, and a lot of interesting SAN sessions.

Beyond these events, we recently delivered a webinar on how the Cisco MDS 9710 enables high-performance DC design, with customer case studies.

So let's continue our discussion. There is no doubt that when it comes to high-performance SAN switches, there is nothing comparable to the Cisco MDS 9710. Another component that is paramount to a good data center design is high availability. Massive virtualization, DC consolidation, and the ability to deploy more and more applications on powerful multi-core CPUs have increased the risk profile within the DC. These trends require a renewed focus on availability, and the MDS 9710 is leading the innovation there again. Hardware design and architecture have to guarantee high availability; at the same time, it's not just about hardware but a holistic approach spanning hardware, software, management, and the right architecture. Let me give you just a few examples of the first three pillars for high reliability and availability.

The MDS 9710 is the only director in the industry that provides hardware redundancy on all critical components of the switch, including the fabric cards. Cisco director switches provide not only CRC checks but also the ability to drop corrupted frames; without that ability, the network infrastructure exposes end devices to corrupted frames. The ability to drop frames that fail CRC and to quickly isolate failing links, both outside and inside the director, provides data integrity and fault resiliency. VSANs allow fault isolation, port channels provide smaller failure domains, and DCNM provides a rich feature set for higher availability and redundancy. All of these are but a subset of examples that provide high resiliency and reliability.

We are proud of the 9500 family and the strong foundation for reliability and availability that we stand on, and we have taken that to a completely new level with the 9710. For any design within the data center, high availability has to go hand in hand with consistent performance; one without the other doesn't make sense. The right design and architecture within the DC is as important as the components that power the connectivity. As an example, Cisco recommends that customers distribute the ISL ports of a port channel across multiple line cards and multiple ASICs. This spreads the failure domain so that any ASIC or even line card failure will not impact port channel connectivity between switches, and there is no need to reinitiate all the host logins. As part of writing this white paper, ESG tested the fabric card redundancy in addition to other features of the platform. Remember that a chain is only as strong as its weakest link.

The most important aspect of all of this is for the customer to be educated.

Ask the right questions. Have in-depth discussions to achieve higher availability and consistent performance. Most importantly, selecting the right equipment, the right architecture, and best practices means no surprises.

We will continue our discussion with the flexibility aspect of the MDS 9710.

Thursday, 2 January 2020

Next Generation Data Center Design With MDS 9710 – Part I


Data centers are undergoing a major transition to meet higher performance, scalability, and resiliency requirements with fewer resources, smaller footprint, and simplified designs. These rigorous requirements coupled with major data center trends, such as virtualization, data center consolidation and data growth, are putting a tremendous amount of strain on the existing infrastructure and adding complexity. MDS 9710 is designed to surpass these requirements without a forklift upgrade for the decade ahead.

MDS 9700 provides unprecedented

◉ Performance – 24 Tbps Switching capacity

◉ Reliability – Redundancy for every critical component in the chassis including Fabric Card

◉ Flexibility – Speed, Protocol, DC Architecture

In addition to these unique capabilities MDS 9710 provides the rich feature set and investment protection to customers.

In this series of blogs I plan to focus on the design requirements of the next-generation DC with the MDS 9710, reviewing one aspect of the DC design requirements in each post. Let us look at performance today. A lot of customers ask how the MDS 9710 delivers the highest performance today. The performance that an application delivers depends on:

◉ Throughput

◉ Latency

◉ Consistency

Switching infrastructure should provide line-rate, non-blocking, high-speed throughput to effectively power applications like VDI, high-performance computing, high-frequency trading, and big data, among others. Crutches like local switching, per-port bandwidth allocation, and oversubscription result in inflexible and complex designs that break down every few years, leading to forklift upgrades or running the DC at sub-par performance levels.

Applications need both high throughput and consistent latency. Switching latency is usually orders of magnitude less than that of the rest of the components in the data path; thus the performance that applications can deliver is based on the end-to-end latency of the data path.

For both throughput and latency, the most important and often overlooked factor is consistency. Throughput and low latency should be consistent and independent of switching traffic profiles, network connectivity, and traffic load.

MDS 9700 allows for high performance DC design with

◉ 3X the performance of any director class switch

◉ Line Rate, Non Blocking Performance without limitations

◉ Consistent throughput and latency

Key Cisco innovations like the central arbiter, crossbar fabric, and virtual output queues enable consistently low latency and high throughput independent of the traffic profile or load on the chassis. Performance without high availability or data reliability, however, is of little use.

Wednesday, 1 January 2020

MDS 9700 Scale Out and Scale Up

This is the final part on high-performance data center design. We will look at how high performance, high availability, and flexibility allow customers to scale up or scale out over time without any disruption to the existing infrastructure. MDS 9710 capabilities are field-proven, with wide adoption and a steep ramp within the first year of introduction. Furthermore, Cisco has not only established itself as a strong player in the SAN space with many industry-first innovations introduced over the last 12 years, such as VSAN, IVR, FCoE, and Unified Ports, but also holds the leading market share in SAN.

Before we look at some architecture examples, let's start with the basic tenets any director-class switch should support when it comes to scalability and future customer needs:

◉ Design should be flexible to Scale Up (increase performance) or Scale Out (add more ports)

◉ The process should not be disruptive to the current installation in terms of cabling, performance impact, or downtime

◉ The design principles, like oversubscription ratio, latency, and throughput predictability (as an example, from host edge to core), shouldn't be compromised at the port level or the fabric level

Let's take a scale-out example, where a customer wants to add more 16G ports down the road. For this example I have used a core-edge design with 4 edge MDS 9710s and 2 core MDS 9710s. There are 768 hosts at 8 Gbps and 640 hosts running at 16 Gbps connected to the 4 edge MDS 9710s, for a total of roughly 16 Tbps of edge connectivity. With an 8:1 oversubscription ratio from edge to core, the design requires about 2 Tbps of edge-to-core connectivity. The 2 core systems are connected to the edge and to targets using 128 target ports running at 16 Gbps in each direction. The picture below shows the connectivity.


Down the road, the data center requires 188 more ports running at 16G. These 188 ports are added to a new edge director (or to open slots in the existing directors), which is then connected to the core switches with 24 additional edge-to-core connections. This is repeated with 24 additional 16G target ports. The fact that this scale-out is not disruptive to existing infrastructure is extremely important. In any of the scale-out or scale-up cases there is minimal impact, if any, on the existing chassis layout, data path, cabling, throughput, or latency. As an example, if the customer doesn't want to string additional cables between the core and edge directors, they can upgrade to higher-speed cards (32G FC or 40G FCoE with BiDi) and get double the bandwidth on the existing cable plant.
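The numbers above are easy to verify with a few lines of arithmetic. The sketch below recomputes the initial 8:1 design and the 188-port expansion using the figures quoted in this example; ISL counts are rounded up to whole 16G links.

import math

def edge_to_core(ports_by_speed_gbps, oversub, isl_speed_gbps=16):
    """Return (edge bandwidth, required edge-to-core bandwidth, 16G link count)."""
    edge_bw = sum(speed * count for speed, count in ports_by_speed_gbps.items())
    core_bw = edge_bw / oversub
    return edge_bw, core_bw, math.ceil(core_bw / isl_speed_gbps)

# Initial design: 768 hosts at 8G plus 640 hosts at 16G, 8:1 oversubscription.
edge_bw, core_bw, links = edge_to_core({8: 768, 16: 640}, oversub=8)
print(edge_bw / 1000, "Tbps edge")          # ~16.4 Tbps, the "16 Tbps" in the text
print(core_bw / 1000, "Tbps edge-to-core")  # ~2.0 Tbps
print(links, "x 16G links")                 # 128, matching the 128 x 16G target ports

# Expansion: 188 more 16G ports on a new edge director, same 8:1 ratio.
_, add_core_bw, add_links = edge_to_core({16: 188}, oversub=8)
print(add_links, "additional 16G ISLs")     # 24, matching the text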


Let's look at another example, where the customer wants to scale up (i.e., increase the performance of the connections). Let's use an edge-core-edge design for this example. There are 6,144 hosts running at 8 Gbps distributed over 10 edge MDS 9710s, resulting in a total of roughly 49 Tbps of edge bandwidth. Let's assume that this data center is using an oversubscription ratio of 16:1 from the edge into the core. To satisfy that requirement, the administrator designed the DC with 2 core switches and a total of 192 16G ports between them, providing roughly 3 Tbps. Let's assume that in the initial design the customer connected 768 storage ports running at 8G.


A few years down the road, the customer may want to add an additional 6,144 8G ports and keep the same oversubscription ratios. This has to be implemented in a non-disruptive manner, without any performance degradation on the existing infrastructure (either in throughput or in latency), and without any constraints regarding protocol, optics, and connectivity. In this scenario the host edge connectivity doubles, and the host edge bandwidth increases to roughly 98 Tbps. The data center admin has multiple options for addressing the resulting increase in core bandwidth to about 6 Tbps: add more 16G ports (192 more, to be precise), or preserve the cabling and use 32G connectivity for host-edge-to-core and core-to-target-edge connectivity on the same chassis. The admin can just as easily use 40G FCoE at that time to meet the bandwidth needs in the core of the network without any forklift upgrade.


Or, on the other hand, the customer may want to upgrade the hosts to 16G connectivity and follow the same oversubscription ratios. With 16G connectivity the host edge bandwidth likewise increases to roughly 98 Tbps, and the data center administrator has the same flexibility regarding protocol, cabling, and speeds.


For either option the disruption is minimal. In real life there will be a mix of requirements on the same fabric, some scale-out and some scale-up; in those circumstances data center admins have the same flexibility and options. With a chassis life of more than a decade, customers can upgrade to higher speeds when they need to, without disruption and with maximum flexibility. The figure below shows how easily customers can scale up or scale out.


As these examples show, the Cisco MDS solution gives customers the ability to scale up or scale out in a flexible, non-disruptive way.

Saturday, 21 December 2019

Why Upgrade to MDS 9700

The MDS 9500 family has supported customers for more than a decade, helping them through FC speed transitions from 1G, 2G, 4G, and 8G to 8G Advanced without forklift upgrades. But as we look to the future, the MDS 9700 makes more sense for a lot of data center designs. The top four reasons for customers to upgrade are:

1. End of Support Milestones
2. Storage Consolidation
3. Improved Capabilities
4. Foundation for Future Growth

So let's look at each in some detail.

1. End of Support Milestones


MDS 4G parts reach End of Support on Feb 28th, 2015. Impacted part numbers are DS-X9112, DS-X9124, and DS-X9148. You can use the MDS 9500 Advanced 8G cards or an MDS 9700-based design. A few advantages the MDS 9700 offers over the other existing options are:

a. Investment Protection – Any new data center design based on the MDS 9700 will have a much longer life than the MDS 9500 product family, avoiding EOL concerns or upgrades in the near future. Thus any MDS 9700-based design provides strong investment protection and ensures that the architecture remains relevant for evolving data center needs for more than a decade.

b. EOL Planning – With an MDS 9700-based design, you control when you need to add any additional blades; with the MDS 9500, you will have to either fill up the chassis within 6 months (End-of-Life announcement to End-of-Sale) or leave the slots empty forever after the End-of-Sale date.

c. Simplified Design – The MDS 9700 allows a single SKU, a single software version, and a consistent design across the whole fabric, which simplifies management. The MDS 9700's massive performance allows for consolidation, reducing footprint and management burden.

d. Rich Feature Set – Finally, as we will see later, the MDS 9700 provides a host of features and capabilities above and beyond the MDS 9500, and that enhancement list will continue to grow.


2. Storage Consolidation


The MDS 9700 provides unprecedented consolidation compared to existing solutions in the industry. As an example, with the MDS 9710 customers can use the 16G line-rate ports to support massively virtualized workloads and consolidate the server install base. Secondly, with the 9148S as a top-of-rack switch and the MDS 9700 at the core, you can design massively scalable networks that deliver consistent latency and 16G throughput independent of the number of links and the traffic profile, and that allow customers to scale up or scale out much more easily than legacy designs or any other architecture in the industry.

Moreover, as shown in the figure above, for customers with MDS 9500-based designs the MDS 9710 offers a higher number of line-rate ports in a smaller footprint and a much more economical way to design SANs. It also enables consolidation with higher performance as well as much higher availability.


3. Improved Capabilities


The MDS 9700 design provides enhanced capabilities above and beyond the MDS 9500, and many more will be added in the future. Some examples that are top of mind are detailed below.

Availability: An MDS 9700-based design improves reliability through enhancements on many fronts, as well as by simplifying the overall architecture and management.

◉ The MDS 9710 introduced a host of features to improve reliability, like the industry's first N+1 fabric redundancy, smaller failure domains, and hardware-based slow-drain detection and recovery.

◉ It's well understood that the reliability of any network comes from proper design, regular maintenance, and support. It is imperative that the data center is on the recommended releases and supported hardware. As an example, a data center outage involving unsupported hardware or a failed, unsupported software version is exponentially more catastrophic, as fixing those issues means new procurement and live insertion with no change-management window. The cost of an outage in a data center is extremely high, so it is important to keep the fabric upgraded and on the latest release with all supported components. Thus it makes sense to base new designs on the latest MDS 9700 directors rather than, for example, on MDS 9513 Gen-2 line cards, which fall out of support on Feb 28, 2015. Also, having different hardware and software versions often adds complexity to maintenance and upkeep, and thus has a direct impact on the availability of the network as well as on operational complexity.

Throughput:

With massive amounts of virtualization, the user impact of any downtime or even performance degradation is much higher. Similarly, with data center consolidation and higher speeds available in edge-to-core connectivity, more and more host edge ports are connected through the same core switches, so a larger number of apps depend on consistent end-to-end performance for a reliable user experience. The MDS 9700 provides the industry's highest performance, with 24 Tbps of switching capability. The director-class switch is based on a crossbar architecture with central arbitration and virtual output queuing, which ensures consistent line-rate 16G throughput independent of the traffic profile, with all 384 ports operating at 16G, and without using crutches like local switching (much akin to emulating independent fixed fabric switches within a director), oversubscription (which can cause intermittent performance issues), or bandwidth allocation.

Latency:

MDS directors are store-and-forward switches. This is needed because it ensures corrupted frames do not traverse the network, so end devices don't waste precious CPU cycles dealing with corrupted traffic; the additional latency hit is acceptable because it protects end devices and preserves the integrity of the whole fabric. Since all ports are line rate, customers don't have to use local switching; this again adds a small amount of latency but results in a flexible, scalable design that is resilient and doesn't break down in the future. These two basic design requirements result in a latency number that is slightly higher, but they yield a scalable design, guarantee predictable performance under any traffic profile, and provide much higher fabric resiliency.

Consistent Latency: For MDS directors, latency is the same for a single 16G flow as it is when 384 16G flows are going through the system; the crossbar-based switch design, central arbitration, and virtual output queuing guarantee that. Latency that varies from a few microseconds to a much higher number is extremely dangerous, so the first thing you need to verify is that the director provides consistent and predictable latency.

End-to-End Latency: The performance of any application or solution depends on end-to-end latency. Focusing on the SAN fabric alone is myopic, as the major portion of the latency is contributed by the end devices. As an example, spinning-disk target latency is on the order of milliseconds; against that, a few microseconds in the fabric is orders of magnitude less and hence not even observable. With SSDs the latency is on the order of 100 to 200 microseconds; assuming 150 microseconds, the contribution of the SAN fabric for an edge-core design is still less than 10%. The majority (roughly 90%) of the latency comes from the end devices, so saving a couple of microseconds in the SAN fabric will hardly impact overall application performance, while the architectural advantages of CRC-based error drops and a scalable fabric design provide reliable operations and a scalable design.
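A quick back-of-the-envelope calculation makes the point. The sketch below uses the approximate device latencies quoted above and an assumed fabric contribution of around 10 microseconds for an edge-core path; these are illustrative figures, not measurements.

fabric_us = 10        # assumed edge-core fabric contribution, a few microseconds
devices_us = {"SSD target": 150, "spinning-disk target": 5000}  # figures cited above

for name, device_us in devices_us.items():
    total_us = device_us + fabric_us
    share = 100 * fabric_us / total_us
    print(f"{name}: fabric is {share:.1f}% of {total_us} us end to end")

# SSD target: fabric is ~6% of the path; spinning disk: well under 1%.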

Scalability:

For larger enterprises, scalability has been a challenge due to the massive amount of host virtualization. As more and more VMs log into the fabric, the requirement for the fabric to support more FLOGIs, zones, and domains keeps increasing. The MDS 9700 has the industry's highest scalability numbers, as it is powered by a supervisor with 4 times the memory and compute capability of its predecessor. This translates to support for higher scalability while providing room for future growth.


4. Foundation for Future Growth:


The MDS 9700 provides a strong foundation to meet data center performance and scalability needs, and its massive switching capacity, compute, and memory will cover your needs for more than a decade.

◉ It will allow you to go to 32G FC speeds without a forklift upgrade or a fabric card change; rather, you will need 3 more of the same fabric cards to get line-rate throughput through all 384 ports on the MDS 9710 (and 192 on the MDS 9706).

◉ The MDS 9700 allows customers to deploy a 10G FCoE solution today and later move to 40G FCoE, again without a forklift upgrade.

◉ The MDS 9700 is again unique in that customers can mix and match FC and FCoE line cards any way they want, without limitations or constraints.

Most importantly, customers don't have to make an FC-versus-FCoE decision. Whether you want to continue with FC, with plans for 32G FC or beyond, or you are looking to converge two networks into a single network tomorrow or a few years down the road, the MDS 9700 provides consistent capabilities in both architectures.


In summary, SAN directors are a critical element of any data center. Going back in time, the basic reason for having a separate SAN was to provide unprecedented performance, reliability, and high availability. Data center design has to keep up with the requirements of a new generation of applications, virtualization of even the highest-performance apps like databases, new design requirements introduced by solutions like VDI, ever-increasing solid-state drive usage, and device proliferation. At the same time, as networks get increasingly complex, the basic necessity is to simplify configuration, provisioning, resource management, and upkeep. These are exactly the design paradigms that the MDS 9700 is designed to solve more elegantly than any existing solution.

Thursday, 18 April 2019

Serverless in the Datacenter: FaaS on K8s at DevNet Create

If you’ve ever wanted to learn the fundamentals of serverless and get your hands dirty building a LAMP-like application, DevNet Create has a session for you. On both April 24 and April 25 from 11:45a to 12:30p, I’ll be running an exercise entitled “FONK: FaaS on K8s working examples.” During our 45 minutes together, we’ll build a serverless version of the Guestbook application, which is the “Hello World” of the Kubernetes (K8s) community. Only instead of using containers directly the way that the original Guestbook does, we’ll use a Function-as-a-Service (FaaS) runtime, an Object Storage service, and a NoSQL server all running on top of K8s.

What is FaaS on K8s?


Developers need a platform that provides them with compute resources in digestible bites when designing applications. Simply put, a Function-as-a-Service (FaaS) runtime (such as AWS Lambda or Azure Functions) is to serverless application architecture as a container runtime is to a microservices architecture. A container runtime takes care of things like autoscaling, rolling updates, and name resolution of the different services running within it. A FaaS runtime obscures the details of the underlying container runtime that most of them use under the hood and provides developers with a cleaner experience that lets them focus on their own business logic.

During the session, we’ll discuss the six most popular FaaS runtimes that run on top of K8s so that you can run serverless applications in your own datacenter instead of in the public cloud. The featured labs will let you get your hands on two of them: OpenFaaS and OpenWhisk.

The Environment We’ll Be Using


I’ll be spending my evening on April 23 using a DevNet Sandbox to set up the following environment for you:


Each student will get a K8s cluster pre-configured with not only OpenWhisk and OpenFaaS runtimes but also an Object Storage service via Minio and a NoSQL server via MongoDB. An additional VM will be provided and preloaded with all the command line tools we’ll need to build an application as well. What does a web application look like when using this FONK design pattern?

Our End Goal: The FONK Guestbook


Instead of the traditional K8s Guestbook that uses three services and six persistent containers:


we’ll instead use the FONK design pattern to build its serverless equivalent:


Minio will host our static HTML and Javascript files. Upon being loaded into a browser, the Javascript will make REST API calls to the API gateway provided by our FaaS runtime to launch functions on demand. When loaded into memory as needed, those functions will perform read and write operations from and to our MongoDB. The Javascript will then alter our HTML in the browser to reflect the changes to our user.
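As a taste of what one of those functions might look like, here is a hedged sketch of an OpenFaaS-style Python handler that stores a guestbook entry in MongoDB and returns the latest entries for the browser to render. The handler signature follows the OpenFaaS python3 template; the MONGO_URL environment variable, database, and collection names are placeholders, not the actual session code.

import json
import os

from pymongo import MongoClient

# In the lab, MongoDB runs as a service inside the K8s cluster; the URL is illustrative.
client = MongoClient(os.environ.get("MONGO_URL", "mongodb://mongodb.default:27017"))
entries = client["guestbook"]["entries"]

def handle(req):
    """Accept a JSON body like {"name": "...", "message": "..."} and store it."""
    entry = json.loads(req)
    entries.insert_one({"name": entry["name"], "message": entry["message"]})
    # Return the latest entries so the browser-side JavaScript can redraw the page.
    latest = list(entries.find({}, {"_id": 0}).sort("_id", -1).limit(10))
    return json.dumps(latest)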

Friday, 2 June 2017

Transformation and the New Role of Managed Services

If you are a part of or even peripherally connected to an IT organization or managed services provider, you probably hear the word “transformation” daily, perhaps even more frequently.  Like other well-worn terms such as “Digitization,” “DevOps,” or “Agile,” transformation can mean a lot of different things depending on the specific organization, the team, or the individual person saying it or hearing it.  It’s a word that’s used so often that it can sometimes confuse rather than clarify discussions.  While not quite there yet, transformation threatens to enter the pantheon of over-used and “buzzy” corporate-speak that serves only as filler content to obfuscate specificity.