Sunday 29 November 2020

Cisco NX-OS VXLAN Innovations Part 2: Seamless Integration of EVPN(TRM) with MVPN

In today’s world, multicast senders and receivers are not limited to a single network. They can be spread across enterprise and data center locations. Multicast can be generated or consumed anywhere and can be present in various security contexts, be it a tenant of a VXLAN EVPN-based data center or a traditional IP multicast network.

Applications expect transparency to the underlying transport architecture, while security compliance demands segmentation. Networks should enable seamless connectivity without compromising security or performance. Border devices that interconnect multicast network domains are the focus of this innovation: the seamless integration of VXLAN EVPN with TRM (Tenant Routed Multicast) and MVPN (Multicast VPN), two flavors of the same kind.

The Two-Node Approach

An integration in which each node acts as a border to its own domain requires a two-node approach. This incurs both CapEx cost and an operational burden for customers, who must manage two devices. The complexity is multiplied if the integration needs to happen between traditional multicast networks, VXLAN EVPN (multicast) networks, and MVPN networks.


To keep OpEx and CapEx costs to a minimum, we need a simpler, single-node approach.  

We followed a step-by-step approach to provide a solution addressing all these challenges.

◉ Cisco innovated Tenant Routed Multicast (TRM) as a first-shipped solution delivering Layer-3 multicast overlay forwarding in VXLAN EVPN networks, with an Anycast Designated Router (DR) for end-points.

◉ Cisco introduced Multicast VPN (Draft Rosen PIM/GRE) support on the Cisco Nexus 3600-R and 9500-R as a stepping stone.

The Cisco NX-OS 9.3(5) release delivered seamless integration between EVPN (TRM) and MVPN (Draft Rosen). Since these edge devices have functions for both TRM and MVPN, they act as seamless hand-off nodes for forwarding multicast between VXLAN EVPN and MVPN networks.

Tenant Routed Multicast 

Cisco Tenant Routed Multicast (TRM) efficiently delivers overlay Layer-3 multicast traffic in a multi-tenant VXLAN BGP EVPN data center network. Cisco TRM is based on the standards-based next-gen multicast VPN control plane (ngMVPN) described in IETF RFC 6513 and RFC 6514, plus the extensions posted as part of the IETF draft "draft-bess-evpn-mvpn-seamless-interop". In a VXLAN EVPN fabric, every edge device acts as a Distributed IP Anycast Gateway for unicast traffic as well as a Designated Router (DR) for multicast. On top of scalable unicast and multicast routing, multicast forwarding is optimized by leveraging IGMP snooping on every edge device, sending traffic only to interested receivers.

TRM leverages Multicast Distribution Trees (MDT) in the underlying transport network and provides multi-tenancy through VXLAN encapsulation. A default MDT is built per-VRF, and individual multicast group addresses in the overlay are mapped to respective underlay multicast groups for efficient replication and transport. TRM can leverage the same multicast infrastructure as VXLAN BUM (Broadcast, Unknown Unicast, and Multicast) traffic; even so, the Rendezvous Point (RP) and the multicast groups for BUM and for the MDT remain separate. The combination of TRM and Ingress Replication is also supported. In the overlay, TRM operates as a fully distributed overlay Rendezvous Point (RP), with seamless RP presence on every edge device. The whole TRM-enabled VXLAN EVPN fabric acts as a single multicast router.
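The per-VRF default MDT described above can be sketched as a simple mapping: every overlay (tenant) group in a VRF rides the same underlay tree, kept separate from the BUM group. This is an illustrative model with hypothetical group addresses, not device configuration:

```python
# Illustrative model (hypothetical values): a per-VRF default MDT keeps
# overlay multicast groups separate from the VXLAN BUM underlay group.

# Underlay group used for BUM flooding (kept separate from the TRM MDTs).
BUM_UNDERLAY_GROUP = "239.1.1.1"

# Per-VRF default MDT: every overlay group in a VRF maps to the same
# underlay group, so the fabric builds one tree per VRF.
VRF_DEFAULT_MDT = {
    "Tenant-A": "239.2.2.1",
    "Tenant-B": "239.2.2.2",
}

def underlay_group(vrf: str, overlay_group: str) -> str:
    """Return the underlay multicast group that transports an overlay flow."""
    return VRF_DEFAULT_MDT[vrf]

# Two different tenant groups in Tenant-A ride the same per-VRF tree...
assert underlay_group("Tenant-A", "225.1.1.1") == underlay_group("Tenant-A", "225.9.9.9")
# ...while remaining separate from both Tenant-B's MDT and the BUM group.
assert underlay_group("Tenant-A", "225.1.1.1") != underlay_group("Tenant-B", "225.1.1.1")
assert underlay_group("Tenant-A", "225.1.1.1") != BUM_UNDERLAY_GROUP
```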

In multicast networks, sources, receivers, and the Rendezvous Point (RP) may reside within the fabric, across sites, inside campus locations, or over the WAN. TRM allows seamless integration with existing multicast networks regardless of where the sources, receivers, and RP are located. TRM allows tenant-aware external connectivity using Layer-3 physical interfaces or sub-interfaces.

TRM Multi-Site – DCI with Multicast 

Multi-site architecture

Data and application growth compelled customers to look for scale-out data center architectures, as one large fabric per location brought challenges in operation and fault isolation. To improve fault and operational domains, customers started building smaller compartments of fabrics with Multi-Pod and Multi-Fabric architectures, interconnected with Data Center Interconnect (DCI) technologies. The complexity of interconnecting these various compartments with Layer-2 and Layer-3 extensions, however, hindered the rollout of such concepts. With a single overlay domain (end-to-end encapsulation), Multi-Pod introduced challenges with scale, fate sharing, and operational restrictions. Although Multi-Fabric provided improvements over Multi-Pod by isolating both the control and the data plane, it introduced additional challenges and operational complexity by mixing different DCI technologies to extend and interconnect the overlay domains.

TRM Multi-site

For unicast traffic, the VXLAN EVPN Multi-Site architecture was introduced to address the above concerns. It allows the interconnection of multiple distinct VXLAN BGP EVPN fabrics or overlay domains and enables new approaches to fabric scaling, compartmentalization, and DCI. At the DCI, Border Gateways (BGWs) were introduced to retain network control points for overlay traffic, giving organizations a control point to steer and enforce network extension within and beyond a single data center.

Further, the Multi-Site architecture was extended with TRM in NX-OS 9.3(1) for seamless communication between sources and receivers spread across multiple VXLAN EVPN networks. This enables them to leverage benefits similar to those of the VXLAN EVPN Multi-Site architecture.


Tenant Routed Multicast to MVPN  

Multicast VPN (Draft Rosen – PIM/GRE)

MVPN (PIM/GRE), defined in the IETF draft "draft-rosen-vpn-mcast-10", is an extension of BGP/MPLS IP VPN [RFC 4364] and specifies the protocols and procedures needed to support IPv4 multicast. Like unicast IP VPN, MVPN allows enterprises to transparently interconnect their private networks across the provider backbone, streaming multicast data without any change to enterprise network connectivity or administration.

The NX-OS 9.3(3) release introduced MVPN (PIM/GRE) support on Cisco Nexus 9000 R-Series and Nexus 3000 R-Series switches.

Seamless integration between EVPN (TRM) and MVPN (Draft Rosen) 

Brand new in Cisco NX-OS 9.3(5), we introduced seamless integration between TRM-capable edge devices and Multicast VPN networks. The functionality of VXLAN VTEP and MVPN PE is brought together on the Nexus 9500-R and Nexus 3600-R Series. The Border PE (a combination of VXLAN border and MPLS PE) plays a VTEP role in the VXLAN EVPN (TRM) network and a PE role in the MVPN network. The gateway node enables packets to be handed off between a VXLAN network (TRM or TRM Multi-Site) and an MVPN network, acting as a central node that performs the necessary packet forwarding, encapsulation, and decapsulation to deliver multicast traffic to the respective receivers. The Rendezvous Point (RP) for the customer (overlay) network can be in any of the three networks: VXLAN, MVPN, or IP multicast.
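Conceptually, the hand-off at the gateway amounts to decapsulating traffic from the ingress domain and re-encapsulating it for each egress domain. The sketch below is a hypothetical model of that decision, not the actual forwarding-plane implementation; domain and encapsulation names are illustrative:

```python
# Hypothetical sketch of the Border PE hand-off: the same node holds both
# roles, so forwarding multicast means decapsulating from the ingress
# domain and re-encapsulating for each egress domain.

def hand_off(ingress_domain: str, egress_domains: list) -> list:
    """Return (domain, encapsulation) pairs for forwarding out of the gateway."""
    encap = {"vxlan-evpn": "VXLAN", "mvpn": "GRE", "ip-multicast": "native IP"}
    # Skip the ingress side; re-encapsulate per egress domain.
    return [(d, encap[d]) for d in egress_domains if d != ingress_domain]

# Traffic from a TRM-enabled VXLAN fabric toward MVPN receivers is handed
# off with GRE encapsulation; IP-multicast receivers get native forwarding.
print(hand_off("vxlan-evpn", ["mvpn", "ip-multicast"]))
```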

Customers reap the benefits of lower OpEx and CapEx costs with a single-node approach at the border for hand-off functionality.   


Customers achieve the benefits of standards-based data center fabric deployments using VXLAN EVPN technology: scalability, performance, agility, workload mobility, and security. As data crosses multiple domains or boundaries, it becomes critical for customers to achieve similar benefits without increasing costs and operational complexity. Customers are looking for a simple, flexible, manageable approach to data center operations, and Cisco’s single-box solution (both the VXLAN EVPN (TRM) and MVPN functions on the same device) offers that operational flexibility.

Saturday 28 November 2020

Cisco NX-OS VXLAN Innovations Part 1: Inter-VNI Communication Using Downstream VNI


In this blog, we’ll look closely at VXLAN EVPN Downstream VNI for intra-site and inter-site (Inter-VNI communication using Downstream VNI).

Segmentation is one of the basic needs for multi-tenancy. There are many different ways to segment, be it with VLANs in Ethernet or VRFs in IP routing use-cases. With Virtual Extensible LAN (VXLAN), segmentation becomes more scalable, with over 16 million assignable identifiers called VNIs (Virtual Network Identifiers). Traditionally, VXLAN segments are assigned in a symmetrical fashion, meaning the VNI must be the same on both ends to allow communication. While this symmetric assignment is generally fine, there are use-cases that could benefit from a more flexible assignment and communication across VNIs, for example mergers and acquisitions or shared-services offerings.
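The "over 16 million" figure follows directly from the 24-bit VNI field in the VXLAN header, versus the 12-bit VLAN ID in Ethernet; a quick check:

```python
# The VXLAN header carries a 24-bit VNI field, giving 2^24 possible
# segment identifiers versus the 12-bit VLAN ID's 4096.
VNI_BITS, VLAN_BITS = 24, 12
assert 2 ** VNI_BITS == 16_777_216   # "over 16 million" VNIs
assert 2 ** VLAN_BITS == 4_096       # traditional VLAN limit
```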

During mergers and acquisitions, it is important to achieve a fast and seamless integration both for the business and for the IT infrastructure. In the specific case of the IT infrastructure, we aim to integrate without any renumbering. Broken down to VXLAN, this means we want to provide inter-VNI communication.

In the case of Shared Services, many deployed segments are required to reach a common service like DNS, Active Directory or similar. These shared, or extranet, services are often front-ended with a firewall which avoids the need for inter-VNI communication. Nevertheless, there are cases where specific needs dictate transparent access to this extranet service and inter-VNI communication becomes critical.

There are different methods by which inter-VNI communication is achieved. The most common case is known as VRF Route Leaking, where the goal is to take an IP route from one VRF and transport, or leak, it into a different VRF. Different needs arise in translation cases, for example when you want to represent a segment with a different identifier than what was assigned (think VLAN translation).

Downstream VNI assignment for VXLAN EVPN addresses these inter-VNI communication needs, be it for communication between VRFs or for translating VNIs between sites.

Use Case Scenarios

Downstream VNI for shared services provides the ability to selectively leak routes between VRFs. By adjusting the configuration of the VRF Route-Targets (RT), you have the option to import IP prefixes into a different VRF. Downstream VNI assignment allows the egress VTEP (Downstream) to dictate the VNI used by the ingress VTEP (Upstream) to reach the network it advertises; the ingress VTEP would otherwise honor its locally configured VNI. Downstream VNI complements and completes the need for asymmetric VNI assignment and simplifies communication between different VRFs with different VNIs. Consider the Extranet/Shared Services scenario, where a service (say, a DNS server) sitting in a services VRF needs to be reachable from hosts in multiple other VRFs. The shared-services VRF needs to a) import routes from those VRFs into its local VRF and b) support the disparate downstream VNI values they advertise.
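The route-target matching that drives this leaking can be sketched in a few lines. This is a minimal illustrative model with hypothetical RT values, not the EVPN import machinery itself:

```python
# Minimal sketch (hypothetical RT values): route-target driven leaking into
# a shared-services VRF. A route is imported when any of its export RTs
# matches an import RT configured on the receiving VRF.

def imports(route_export_rts, vrf_import_rts):
    return bool(set(route_export_rts) & set(vrf_import_rts))

# The shared-services VRF imports the RTs of every tenant VRF...
shared_vrf_import = {"65000:1001", "65000:1002"}   # Tenant-A, Tenant-B
tenant_a_route = {"rts": {"65000:1001"}, "vni": 50001}

assert imports(tenant_a_route["rts"], shared_vrf_import)
# ...and, with Downstream VNI, also honors the VNI the tenant VRF
# advertised (50001 here) rather than its own locally configured L3VNI.
```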

As in the shared-services use-case, Downstream VNI provides a method of translating or normalizing VNI assignments in a VXLAN EVPN Multi-Site deployment. Where traditionally the same VNIs have to be assigned across all sites, with Downstream VNI we can allow inter-VNI communication on the Border Gateway (BGW). By aligning the Route-Target configuration between the BGWs, sites with different VNIs are able to communicate. Exactly as in the prior use-case, the egress VTEP (Downstream) dictates the VNI to be used by the ingress VTEP (Upstream). For example, in a Normalization/Asymmetric VNI deployment, when adding new sites to a VXLAN EVPN Multi-Site deployment, it may be desirable to use and stitch completely disparate VNI values on the new Border Gateways (BGWs).


Seamless Integration and Flexible Deployments

With Downstream VNI, we have the opportunity for more seamless integration of disjoint networks with the same intent. As a result, a much more agile and time-saving approach is available. For Extranet/Shared Service use-cases, a more flexible deployment option exists with Downstream VNI.

How it works

1. The egress VTEP (Downstream) advertises the VNI for a prefix via the BGP EVPN control-plane protocol.

2. Upon receiving the route update, the ingress VTEP (Upstream) installs the prefix with the VNI advertised by the egress VTEP. In short, the prefix is installed with the Downstream VNI.

3. When forwarding traffic to that prefix, the ingress VTEP uses this downstream-assigned VNI in its encapsulation, overriding the otherwise-honored locally configured VNI.

4. As a result, the egress VTEP dictates the VNI used by the ingress VTEP to reach the networks it advertises.


In the above example, the VTEPs have disparate VNIs, i.e. 50001 and 50002. If VLAN 20 in VRF-B needs to communicate with VLAN 10 in VRF-A, VTEP-1 (L3VNI 50001) acts as the downstream VTEP and dictates that VTEP-4 use VNI 50001 to encapsulate the packets destined for VLAN 10, and vice-versa.
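The route-install behavior in that example can be modeled as follows. This is a toy model; prefixes are hypothetical, and only the VNIs 50001/50002 and VTEP names come from the example above:

```python
# Toy model of VTEP-4's forwarding state: VTEP-1 advertises VRF-A prefixes
# with its L3VNI 50001, and VTEP-4 installs them with that downstream VNI
# instead of its own configured L3VNI 50002.

LOCAL_L3VNI = 50002  # VTEP-4's locally configured VNI

route_table = {}

def install_route(prefix: str, next_hop: str, advertised_vni: int = None):
    # The downstream-advertised VNI, when present, overrides the local VNI.
    route_table[prefix] = {
        "next_hop": next_hop,
        "vni": advertised_vni if advertised_vni is not None else LOCAL_L3VNI,
    }

# Route to VLAN 10's subnet, advertised by VTEP-1 with downstream VNI 50001.
install_route("10.1.10.0/24", "VTEP-1", advertised_vni=50001)
# A local route with no downstream assignment keeps the configured VNI.
install_route("10.2.20.0/24", "VTEP-4")

assert route_table["10.1.10.0/24"]["vni"] == 50001
assert route_table["10.2.20.0/24"]["vni"] == 50002
```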

What’s Next?

Stay tuned for our next blogs, which cover features and benefits for VXLAN EVPN-based data center fabrics such as loop detection and mitigation in VXLAN EVPN fabrics, secure packet delivery across VXLAN EVPN sites using CloudSec, and seamless integration of multicast (TRM) with MVPN (Draft-Rosen).

Friday 27 November 2020

Bolstering Cyber Resilience in the Financial Services Industry: Part Two


As you read in part one of this blog, cybersecurity threats have never been greater. It is imperative that your financial services organization is prepared to detect and combat even the most sophisticated cyber-attacks. Cybersecurity month brought this issue top of mind for so many in the financial services world, and now it is time to put the information into action.

Last week we started discussing the five-point strategy to bolster cyber resilience. We walked through the first two points: Secure by Design and Zero Trust. Now let’s jump into the final three elements of this strategy.


#3) Third Party Cyber Risk Assessment

As financial services firms continue to strengthen their cyber resilience, cyber threat actors have been working hard to identify vulnerabilities both internal and external to the firm to gain access to financial data. Most financial services firms have a large ecosystem of partners (customer service, software development, equipment providers, media and internet marketing, etc.) external to the firm, who augment the firm’s products and services with their own and/or play a critical role in developing, deploying, or maintaining the firm’s products and services. These ecosystem partners are all connected to the firm’s network, have access to critical financial data, and are expected to comply with the firm’s risk and compliance policies. Our research has identified that “70% of Financial Third-Party Vendors have Unacceptable Compliance to Regulations” and “do not have a focus on Insider Threats and Patching”.

Cisco’s Third-Party Security Assessment Program provides financial services firms with proactive services to validate security posture within the firm’s third-party vendors and provides direction for improvement of systems, processes to each vendor, including relevant training and certification support.

#4) Security Awareness Training (Employee Training)

It’s become evident that, often, the weakest link in many cybersecurity defenses is people. In fact, according to the 2019 Gartner Magic Quadrant for Security Awareness Computer-Based Training, “People influence security more than technology or policy and cybercriminals know how to exploit human behaviors.”

So, while technology continues to evolve, the human element will always be the most unpredictable variable to secure. In order to fortify against people-enabled losses, financial services firms are turning to security awareness and training programs. Recent events have highlighted an increased need for security awareness, as the transition to a remote workforce has unveiled new, targeted threats that require employees to detect on their own.

Cisco Security Awareness is designed to help promote and apply effective cybersecurity common sense by modifying end-user behavior. Using engaging and relevant computer-based content with various simulated attack methods, this cloud-delivered product provides comprehensive simulation, training, and reporting so employee progress can be continually monitored and tracked; an important part of compliance standards such as HIPAA and GDPR.

#5) Cyber Insurance

Financial services firms are at huge financial risk when a data breach occurs. To protect themselves from such an eventuality and in light of the emerging advancement in data theft and manipulation threats, it is imperative that they protect themselves with cyber insurance. Aside from providing financial cover, these cyber insurance providers also provide their customers with advanced notification of threats. Cisco is part of an industry-first offering partnering with Apple, Aon, and Allianz to bring together the key pieces needed to manage cyber risk: security technology, secure devices, cybersecurity domain expertise, and enhanced cyber insurance (select markets only).

Now What?

It is evident that there has never been a more pressing time to evaluate your cybersecurity strategy. Once you walk through the five-points above, here is one final checklist to ensure you are maximizing your cybersecurity strategy.

For a financial services firm to have a robust cyber resilient strategy:

1. The cybersecurity practices of their third-party partners, as well as their own, must be regularly reviewed, audited, and continuously enhanced.

2. There must be a security-first mindset from the CEO down to every employee and partner in the organization.

3. Employee awareness and training sessions on cyber hygiene best practices must be held regularly to prevent exploitable vulnerabilities and help minimize the impact of any data breach.

4. Firms must collaborate with other financial services industry participants to share learnings and best practices, and to develop industry-wide cyber resilience strategies.

Take these tips and the (above) five-point cyber resilience strategy to ensure that you are doing everything you can to secure your financial services organization.

Thursday 26 November 2020

Enabling Integration via Webex Teams – All Together Now


Enabling Integration via Webex Teams and Cisco DNA, SD-WAN, Intersight, and ThousandEyes via a Cloud API Gateway

I was really excited to have a unique opportunity to put together a team of my fellow engineers to work on a Collaboration hacking contest within Cisco. This annual event is usually held in person for a day or two in San Jose, making it out of reach for my nomadic desert comrades located in Arizona. This year, however, remote is the new normal, which made it possible for my ragtag band of misfits to participate regardless of geography. So we embarked on a mission to enable webhook integration for Webex Teams, so that our products can send notifications into Teams just as they can into email.


A cloud native yet cloud agnostic solution

In order to do this, we decided to make sure the solution could support not only diverse products but also diverse clouds: a cloud-native yet cloud-agnostic solution based upon serverless infrastructure, supporting standard webhooks and HTTPS POST messages. We decided on Google Cloud Platform and Amazon Web Services for our multi-cloud endeavor.

The initial idea was actually for a separate use case: I have ESP8266 modules integrated with Teams to be notified when my garage door is opened or closed, when my bearded dragon’s cage is hot, and so on. As these scale in number, if I were ever to change my security bot token or room ID, I would have to re-flash all of my IoT sensors to match. So it creates an operational problem for leveraging Teams as an IoT device receiver or third-party integrator.

Enable cloud as an API gateway

The idea was to enable the cloud as an API gateway to accept requests, perform advanced security checks, and decouple the Webex Teams security and context information from what is flashed onto the sensors, to better manage the lifecycle. But extending this to webhooks was a natural evolution that seemed to have the most immediate impact for customers. When demoing some of our cloud technologies (Intersight, Meraki), customers saw that notifications can go to a webhook or email, and naturally inquired about Webex Teams integration.


By enabling the webhook capability, we immediately added support for all of our product sets that support webhooks to integrate with Webex Teams, without requiring any change on either the product or Webex Teams. We did want native “handlers” in the code to handle differences in webhook formatting between different products. For our project we created handlers for Cisco DNA Center and Meraki. We had started work on ThousandEyes but didn’t have the lab instance able to send webhooks at the time we finished the project. Creating or modifying a handler takes as little as 20 minutes of effort, ensuring that the JSON fields you care about are included in what is sent to Teams.
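A handler of the kind described above boils down to picking fields out of the product's webhook JSON and formatting a Teams message. The sketch below is hypothetical: the field names (`alertType`, `deviceName`, `occurredAt`) are illustrative, not the actual Meraki or DNA Center schemas:

```python
# Hypothetical handler sketch: normalize product-specific webhook JSON into
# a Webex Teams markdown message. Field names are illustrative only.

import json

def meraki_handler(payload: dict) -> str:
    """Pick out the fields we care about and format a Teams message."""
    return (f"**{payload.get('alertType', 'Alert')}** on "
            f"{payload.get('deviceName', 'unknown device')} "
            f"{payload.get('occurredAt', '')}")

HANDLERS = {"meraki": meraki_handler}

def to_teams_message(source: str, body: str) -> dict:
    """Entry point the cloud function would call for an incoming webhook."""
    text = HANDLERS[source](json.loads(body))
    # The gateway holds the bot token and room ID, so senders never need them.
    return {"roomId": "<room-id-kept-in-the-gateway>", "markdown": text}

msg = to_teams_message("meraki", '{"alertType": "motion", "deviceName": "MV12"}')
print(msg["markdown"])
```

Adding a handler for another product is then a matter of writing one more small function and registering it in the dispatch table.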


The code is available on Github

Of note: while the code should be very consistent between solutions, there is a difference in how Google integrates its API gateway with its cloud functions compared to AWS. The API gateway on GCP has been out for a while, but integration of the API gateway with Cloud Functions is currently in beta and requires a bit more lift to set up. I expect this will normalize as it is brought to market. I should also caveat that I was seeking a functional product; closer integration with GCP teams probably would have helped with how I managed some error handling in Cloud Functions to make it integrate with the API gateway.

Wednesday 25 November 2020

Retail network segmentation landscape


For as long as I can remember, retailers have recognized the importance of segmentation. The perils of mixing transactional data with other types of network traffic are significant. Yet, many retailers have found that a lack of attention in this area results in the compromise of transactional or Personally Identifiable Information (PII).

The challenge becomes exponentially more complex as the use of technology expands:

The long-predicted explosion of Internet of Things (IoT) devices is finally here. As many businesses respond to unpredictable business circumstances, it has become increasingly important that they have near real-time operational data on their stores and distribution centers. What is the current occupancy of my store? Are my chillers, freezers and hot tables working properly? Where are my associates and customers? What is my current inventory-on-hand (and what’s on the inbound truck, and when will it be here)? These questions can all be answered using IoT sensors. It is worth noting, though, that IoT sensors are limited or single-function devices, and therefore not always able to defend themselves. If left unprotected, these devices can present a tempting attack surface for threat actors.

Point of Sale may not always be a static location. We are seeing more retailers shun the traditional fixed point of sale and adopt mobile devices. In some cases the POS may still be at a lane or cash wrap, but it may also be used for line busting, curbside pickup, home delivery, and for omni-channel returns. These additional use cases shift the emphasis from dedicated payment terminals that communicate directly with a payment processor, to multifunction devices sitting on the wireless network.

Guest wireless is now table stakes – customers expect to be able to send and receive text and email, access their shopping lists, or showroom their impending purchase to ensure they are getting the best price. A robust wireless network will not only be an expectation going forward, but a necessity to support associate efficiency and customer needs. With the advent of 5G networks, any communication that happens in the store via a mobile device needs to happen over the store wireless network, because 5G signals are unlikely to penetrate the structure of the building. Voice and data will cease when customers enter the store, unless the device can seamlessly roam onto the store network. That network will need the resilience and capacity to handle that traffic. Customers who cannot continue their conversations or access their data while in the store are likely to “vote with their feet” and shop elsewhere. In much the same way as guests now judge hotels by how fast and reliable the internet service is in their rooms, connectivity will be paramount for consumers and guests alike.


The inexorable move to the cloud has accelerated recently for multiple reasons: a need to

◉ reduce the physical IT footprint in the store
◉ stand up and configure new or pop-up stores quickly
◉ capitalize on the elastic capacity that cloud processing provides for busy periods
◉ leverage Software as a Service offerings for business systems such as supply chain and customer relationship management.

This shift to public, private, and hybrid cloud can present new complexities and create a reliance on external parties, resulting in limited visibility and control for the retailer.


Many systems that are considered non-essential to the core retail mission (such as mechanical maintenance and physical security) are increasingly being outsourced. These moves result in third-party managed (or unmanaged) devices and sensors residing on the store or distribution center network.

These changes in the day-to-day operations of retailers can significantly increase the attack surface, and consequently the risk profile, for the retailer if not appropriately mitigated. The key is having a well-planned and well-executed segmentation and access-control policy to ensure that devices and users can only access the systems and data appropriate for their role. Traditionally, this has been a somewhat manual process, which may be perfectly feasible for smaller organizations but much more complex for larger retailers.
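The role-based access control policy described above can be sketched as a simple deny-by-default matrix. The role and segment names below are hypothetical, chosen only to mirror the device classes discussed in this post:

```python
# Illustrative sketch of a role-based segmentation policy: each role may
# reach only the segments appropriate for it. Names are hypothetical.

POLICY = {
    "pos-terminal":   {"payment"},
    "iot-sensor":     {"telemetry"},
    "guest-device":   {"internet"},
    "vendor-managed": {"facilities"},
}

def allowed(role: str, segment: str) -> bool:
    """Deny by default; permit only the segments listed for the role."""
    return segment in POLICY.get(role, set())

# An IoT sensor can reach its telemetry collectors, but a compromised
# sensor or a guest device can never reach the payment segment.
assert allowed("iot-sensor", "telemetry")
assert not allowed("iot-sensor", "payment")
assert not allowed("guest-device", "payment")
```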

Tuesday 24 November 2020

Going Multicloud? Can you relate to one of these six use cases?


In 1996, Cisco and HCL began working together by setting up an offshore development center in India. Over the past 24 years, we have strategically joined forces to bring to market a broad portfolio of products and services that we deliver to more than 100 shared customers, encompassing data center, networks, collaboration, IoT, security, and application monitoring. We continue to invest in resource development through training, centers of excellence, labs, joint go-to-market activities, and collaborating as 360-degree partners developing Cisco products.

One of our joint development efforts created HCL VelocITy: A multicloud framework powered by Cisco. VelocITy goes beyond the world of software-defined solutions to offer a reusable, repeatable, reference architecture delivered with a consumable, flexible, commercial construct. This framework leverages components from Cisco and other ecosystem partners, along with HCL’s position as a market leader in data center outsourcing and hybrid infrastructure managed services.

Many enterprises now operate multiple cloud environments, deploying a blend of private on-premises and public cloud infrastructures that best meet their application and business requirements. Executing a successful multicloud strategy can be extremely challenging, however.

A need to make multicloud easier

As enterprises start planning their multicloud environments, they need to answer a number of questions that can be crucial for achieving their digitization goals, such as:

◉ Do they have the required and adequately skilled resources in-house for the new technology landscape?

◉ Do they have a standard reference architecture that will apply across their on-premises and cloud environments?

◉ How will they secure the entire environment?

◉ What will be the cost impact of migrating to a multicloud environment? Do they have visibility into the potential cost differences when migrating workloads to the cloud?

In addition, once they’ve made key planning decisions, the process of execution can become a real nightmare. A customer with its own IT team may find it extremely difficult to migrate to a multicloud environment due to the high level of complexity.

To help customers respond to these challenges, HCL built VelocITy. As you’ll see, it offers substantial, measurable benefits.

Figure 1  The VelocITy multicloud architecture


A pre-integrated, certified, multicloud reference architecture

The HCL VelocITy framework, as shown in Figure 1, provides a pre-integrated, certified reference architecture with pre-engineered components, incorporating the people, processes, and technology infrastructure required to provide end-to-end service delivery for multicloud deployments. Simply put, it removes the pain a customer can experience when migrating to a multicloud environment. They don’t need to choose different products from different vendors and then attempt to integrate them all into an optimal and secure solution that meets their specific use case needs.

The top six highly sought after use cases

Leveraging its experience helping many Fortune 500 customers design and implement their data center migration strategies, HCL has identified its top six validated use cases for multicloud. Aligned with major industry trends in deploying multicloud environments and summarized in Figure 2, these represent use cases that can be implemented with HCL VelocITy. These six use cases also expand to over 30 detailed use cases for day 0, day 1, and day 2.

Figure 2  Six validated multicloud use cases


Built on Cisco technology

In developing the VelocITy stack, HCL evaluated many potential technology partners. A clear success criterion was the potential partner’s ability to deliver an end-to-end stack with flexible consumption model options, along with having a high level of ecosystem partner integrations and single-call, day 2 break/fix support.

Cisco met these requirements, leveraging its in-depth hardware and software product portfolio. In addition, many of Cisco’s products are now offered through a flexible consumption model, leveraging the Cisco Capital® Open Pay® solution. A few of the Cisco products incorporated into the VelocITy stack include Cisco UCS®, HyperFlex™, Cisco ACI®, Cisco Intersight™, Cisco Workload Optimization Manager, AppDynamics®, Cisco Container Platform, Cisco CloudCenter™, and Cisco Tetration Analytics™.

Adding to this list, Cisco has more than 65 integrated and certified ecosystem partner offerings available to fill out the VelocITy solution.

Migrate to a hybrid, multicloud environment with confidence

Cisco’s market-leading technology, available through a modern OpEx-based business model, in combination with HCL’s vast experience in multicloud deployments, brings to market a truly differentiated offering.

The numbers speak for themselves. For example, by deploying HCL VelocITy, both a large hospitality chain and a European telco reduced their TCO by 40 percent, while seeing a 50-60 percent improvement in IT automation.

If you’re migrating to a multicloud environment to meet your specific use case needs, explore HCL VelocITy powered by Cisco.

Monday 23 November 2020

A Look to 2021 with Cisco Meraki


As we look back on 2020, the pandemic has proven to be a clear digitization accelerator—especially in areas critical to COVID-19 response. Companies within the financial sector are going beyond the lobby and into the cloud.

Some of the key benefits of embracing a cloud-managed IT solution include a shorter time to market thanks to automation and zero-touch provisioning, simplified visibility and troubleshooting that help IT teams get ahead of issues, and the ability to focus extra time and budget on resources and business-critical projects.

Managing safe migration to the cloud has never been more important. At Cisco Meraki, our cloud-based platform enables agility, scale, and simplification for financial institutions big and small. As we look ahead to 2021, a few key areas of focus stand out.

Meet the secure branch of the future

Providing reliable, secure connectivity while turning data into intelligent insights about how your branch infrastructure is operating—and how it can operate better—is critical. This includes improving your customer experience and engaging with customers in new ways from the moment they enter the branch. For example, consider personalizing customer engagements through digital and in-branch resources, then leveraging collected insights to inform and improve the customer experience.

Manage video analytics intelligently

IoT cameras and sensors combined with Meraki Insight are a powerful tool for managing security effectively while ensuring a safe customer experience. They allow you to:

- Manage video analytics intelligently

- Maintain social distancing protocols

- Eliminate outdated hardware


Cloud-based team support

Today’s new normal has required businesses to rethink how to help their employees collaborate safely while working from remote locations. These cloud-based solutions are helping companies support their off-site workforce. Some examples include safe remote access for payments, insurance claims, and loan approvals—all while maintaining policy compliance. Teams are able to connect to a secure network from any location for mission-critical and sensitive data.

All in all, the shift to cloud-managed IT solutions has proven to be beneficial for companies within the financial sector. As we look ahead to 2021, security and analytics will continue to be key considerations for change.

Friday 20 November 2020

Fast Track to Success in Cisco 300-420 ENSLD Certification

Cisco ENSLD Exam Description:

This exam certifies a candidate's knowledge of enterprise design including advanced addressing and routing solutions, advanced enterprise campus networks, WAN, security services, network services, and SDA. The course, Designing Cisco Enterprise Networks, helps candidates to prepare for this exam.

Cisco 300-420 Exam Overview:

  • Exam Name: Designing Cisco Enterprise Networks
  • Exam Number: 300-420 ENSLD
  • Exam Price: $300 USD
  • Duration: 90 minutes
  • Number of Questions: 55-65
  • Passing Score: Variable (750-850 / 1000 Approx.)

Thursday 19 November 2020

Introduction to Programmability – Part 3

This is Part 3 of the “Introduction to Programmability” series. If you haven’t already done so, I strongly urge you to check out Parts 1 & 2 before you proceed. You will be missing out on a lot of interesting information if you don’t.

Part 1 of this series defined and explained the terms Network Management, Automation, Orchestration, Data Modeling, Programmability and API. It also introduced the Programmability Stack and explained how an application at the top layer of the stack, wishing to consume an API exposed by a device at the bottom of the stack, does that.

Part 2 then introduced and contrasted two types of APIs: RPC-based APIs and RESTful APIs. It also introduced the NETCONF protocol, which is an RPC-based protocol/API, along with the (only) encoding that it supports and uses: XML.

Note: You will notice that I use the words API and protocol interchangeably. As mentioned in Part 2, a NETCONF API means that the client and server will use the NETCONF protocol to communicate together. The same applies to a RESTCONF API. Therefore, both NETCONF and RESTCONF may be labelled as protocols, or APIs.

In this part of the series, you will see the other type of API in action, namely RESTful APIs. You will first see how vanilla HTTP works. Then we will build on what we introduced in Part 2, dig a little deeper into REST, and explain the relationship between HTTP, REST and RESTful APIs. I like to classify RESTful APIs into two types: industry-standard and native (aka vendor/platform-specific). We will briefly cover RESTCONF, an industry-standard API, as well as NX-API REST, a native API exposed by programmable Nexus switches. Finally, you will see how to consume a RESTful API using Python.

On a side note, you may be wondering how so much information will be covered in one blog post. Well, the challenge has always existed between depth and breadth with respect to topic coverage. In this series, I attempt to familiarize you with as many topics as possible and answer as many common questions related to programmability as feasible. The intention is not for you to come out of this 15-minute read an expert, but to be able to identify concepts and technologies that thus far have sounded foreign to you as a network engineer.


As a network engineer, before I got into network programmability many years ago, I knew that HTTP was the protocol on which the Internet was based. I knew, as required by my work, that HTTP was a client-server protocol that used TCP port 80 (and 443 in the case of HTTPS). I also knew it had something to do with the URIs I entered into my web browser to navigate to a web page. That was it.

But what really is HTTP?

HTTP stands for HyperText Transfer Protocol. Hypertext is text that contains one or more hyperlinks. A hyperlink is a reference or pointer to data known as the resource or the target of the hyperlink. The text of the hyperlink itself is called the anchor text. That target may be a number of things such as a webpage on the Internet, a section in a Word document or a location on your local storage system.

A little piece of trivia: In 1965 an American scientist called Ted Nelson coined the term hypertext to describe non-linear text. Non-linear refers to a lack of hierarchy for the links between the documents. Then in 1989, Sir Timothy Berners-Lee, wrote the first web client and server implementation that utilized hypertext. That protocol would be used to fetch the data that a hyperlink pointed to and eventually became HTTP. Today, Sir Timothy is best known as the inventor of the World Wide Web.

Therefore, clicking the anchor text will send a request to the server to fetch the resource at /developer/intro-to-programmability-2, which is the HTML content of the webpage at that URI. This content will be parsed and rendered by the web browser and displayed in the browser window for you to view.

So an HTTP workflow involves a client establishing a TCP connection to an HTTP server. This connection is done over port 80 by default, but the port is usually configurable. Once the TCP session is up, the client sends a number of HTTP request messages. The server responds to each request message with a response message. Once the HTTP transactions are completed, the TCP session is torn down by either of the endpoints.

A client HTTP request message includes a Uniform Resource Identifier (URI), a hierarchical address composed of segments separated by a slash (/). This URI identifies the resource on the server that the client is targeting with this request. In the realm of network programmability, the resource identified by a URI may be the interface configuration on a switch or the neighbors in the OSPF neighbor table on a router.

The client request message will also include an HTTP method that indicates what the client wishes to do with the resource targeted by the URI in the same request. An example of a method is GET, which is used to retrieve the resource identified by the target URI. For example, a GET request to the URI identifying the interface configuration of interface Loopback 100 will return the configuration of that interface. A POST method, on the other hand, is used to edit the data at the target URI. You would use the POST method to edit the configuration of interface Loopback 100.

In addition to the URI and method, an HTTP request includes a number of header fields whose values hold the metadata for the request. Header fields are used to attach information related to the HTTP connection, server, client, message and the data in the message body.

Figure 1 shows the anatomy of an HTTP request. At the top of the request is the start line composed of the HTTP method, URI and HTTP version. Then comes the headers section. Each header is a key-value pair, separated by a colon. Each header is on a separate line. In more technical terms, each header is delimited by a Carriage Return Line Feed (CRLF). The headers section is separated from the message body with an empty line (two CRLFs). In the figure, the message body is empty, since this is a GET request: the client is requesting a resource from the server, in this case, the operational data of interface GigabitEthernet2, so there is no data to send in the request, and hence, no message body.


Figure 1 – Anatomy of an HTTP request
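The anatomy described above can be sketched in a few lines of Python. Note that the host, path and header values below are illustrative placeholders, not taken from the original figure:

```python
# Sketch of the HTTP request anatomy: a start line (method, URI, version),
# headers as "key: value" pairs each delimited by a CRLF, then an empty
# line. For a GET request the message body is empty.
CRLF = "\r\n"

start_line = "GET /restconf/data/ietf-interfaces:interfaces HTTP/1.1"
headers = {
    "Host": "router.example.com",          # placeholder device address
    "Accept": "application/yang-data+json",
}

request = (
    start_line + CRLF
    + CRLF.join(f"{key}: {value}" for key, value in headers.items())
    + CRLF + CRLF  # empty line (two CRLFs) separates headers from the body
)

print(request)
```

Running this prints exactly the kind of message shown in Figure 1, minus a body.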

When a server receives a request from the client, it processes the request, and sends back an HTTP response message. An HTTP response message will include a status code that indicates the result of processing the request, and a status text that describes the code. For example, the status code and text 200 OK indicate that the request was successfully processed, while the (notorious) code and text 404 Not Found indicate that the resource targeted by the client was not found on the server.

The format of a response message is very similar to a request, except that the start line is composed of the HTTP version, followed by a status code and text. Also, the body is usually not empty. Figure 2 shows the anatomy of an HTTP response message.


Figure 2 – Anatomy of an HTTP response message
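As a quick sketch, the start line of a response can be pulled apart with a simple split; the status line below is an illustrative example, not taken from the figure:

```python
# The response start line carries the HTTP version, then the status code,
# then the status text, separated by spaces.
status_line = "HTTP/1.1 200 OK"

# Split into at most three parts; the status text may itself contain spaces
# (e.g. "404 Not Found"), so limit the split to two separators.
version, code, text = status_line.split(" ", 2)

print(version)  # HTTP/1.1
print(code)     # 200
print(text)     # OK
```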

Studying and hence understanding and using HTTP revolves around the following points:

– Target URI: You need to know the correct syntax rules of a URI, such as which characters are allowed and which are not, and what each segment of the URI should contain. URI segments are called scheme, authority, path, query and fragment. You also need to understand the correct semantics rules of a URI, that is, to be able to construct URIs to correctly target the resources that you want to operate on. URI syntax rules are universal. Semantics rules, on the other hand, depend on which protocol you are working with. In other words, a syntactically correct URI that targets a specific resource using RESTCONF will not be the same URI to target that same resource on that same device using another RESTful API, such as NX-API REST.

– Request method: You need to know the different request methods and understand the effect that each would have on a resource. GET fetches a resource (such as a web page or interface configuration) while POST edits a resource (such as adding a record to a database, or changing the IP address on a router interface). Commonly used methods are GET, HEAD, OPTIONS, POST, PATCH, PUT and DELETE. The first three are used to retrieve a resource while the other four are used to edit, replace or delete a resource.

– Server status codes: Status codes returned by servers in their HTTP response messages are classified into the following sub-categories:

◉ 1xx: Informational messages to the client. The purpose of these response messages is to convey the current status of the connection or transaction in an interim response, before the final response is sent to the client.

◉ 2xx: The request was successfully processed by the server. Most common codes in this category are 200 (OK) and 201 (Created). The latter is used when a new resource is created by the server as a result of the request sent from the client.

◉ 3xx: Used to redirect the client, such as when the client requests a web page and the server attempts to redirect the client to a different web page (common use-case is when a web page owner changes the location of the web page and wishes to redirect clients attempting to browse to the old URI).

◉ 4xx: Signals that there is something wrong with the request received from the client. Common codes in this category are 400 (Bad Request), 403 (Forbidden), and 404 (Not Found).

◉ 5xx: Signals an error on the server side. Common status codes in this category are 500 (Internal Error), 503 (Service Unavailable), and 504 (Gateway Timeout).

– Message body: Understanding how to construct the message body. If model-driven programmability is used, the message body will depend on two things:

◉ Syntax rules governed by the encoding used: a message encoded in XML will have different syntax rules than a message encoded in JSON, even if both are intended to accomplish the same task

◉ Semantics rules governed by the data model used: You may target the same resource and accomplish the same end result using two (or more) different message bodies, each depending on the hierarchy of elements defined by the referenced data model.

– Headers: Understanding which headers to include in your request message is very important to get the results you want. For example, in Figure 1 the first header right after the start line Accept: application/yang-data+json is the client’s way of telling the server (the DevNet IOS-XE router/sandbox in this case) that it will only accept the requested resource (the interface operational data) encoded in JSON. If this field was application/yang-data+xml, the server’s response body would have been encoded in XML instead. Header values in response messages also provide valuable information related to the server on which the resource resides (called origin server), any cache servers in the path, the resources returned, as well as information that will assist to troubleshoot error conditions in case the transaction did not go as intended.
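The status-code classes listed above map cleanly onto the first digit of the code, which can be captured in a small helper function; this is just an illustrative sketch:

```python
def status_category(code: int) -> str:
    """Map an HTTP status code to its class, per the 1xx-5xx scheme above."""
    categories = {
        1: "informational",
        2: "success",
        3: "redirection",
        4: "client error",
        5: "server error",
    }
    # Integer division by 100 isolates the leading digit of the code.
    return categories.get(code // 100, "unknown")

print(status_category(200))  # success
print(status_category(404))  # client error
print(status_category(503))  # server error
```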

HTTP started off at version 0.9, then version 1.0. The current version is 1.1 and is referred to as HTTP/1.1. Most of HTTP/1.1 is defined in the six RFCs 7230 – 7235, each RFC covering a different functional part of the protocol.

HTTP/2 is also in use today, however, RFC 7540 states that “This specification [HTTP/2.0] is an alternative to, but does not obsolete, the HTTP/1.1 message syntax. HTTP’s existing semantics remain unchanged.” This means that HTTP/2.0 does not change the message format of HTTP/1.1. It simply introduces some enhancements to HTTP/1.1. Therefore, everything you have read so far in this blog post remains valid for HTTP/2.

HTTP/2 is based on a protocol called SPDY, developed by Google. HTTP/2 introduces a new framing format that breaks up an HTTP message into a stream of frames and allows multiplexing frames from different streams on the same TCP connection. This, along with several other enhancements and features, promises far superior performance over HTTP/1.1. The gRPC protocol is based on HTTP/2.

It may come as a surprise to some, but HTTP/3 is also under active development; however, it is not based on TCP at all. HTTP/3 is based on another protocol called QUIC, initially developed by, as you may have guessed, Google, then later adopted by the IETF and described in draft-ietf-quic-transport. HTTP/3 takes performance to a whole new level. However, HTTP/3 is still in its infancy.

HTTP uses the Authorization, WWW-Authenticate, Proxy-Authorization and Proxy-Authenticate headers for authentication. However, in order to provide data confidentiality and integrity, HTTP is coupled with Transport Layer Security (TLS 1.3). HTTP over TLS is called HTTPS for HTTP Secure.
But what does HTTP have to do with REST and RESTful APIs?

As you have read in Part 2 of this series, REST is a framework for developing APIs. It lays down 6 constraints, 5 mandatory and 1 optional. As a reminder, here are the constraints:

◉ Client-Server based
◉ Stateless
◉ Cacheable
◉ Have a uniform interface
◉ Based on a layered system
◉ Utilize code-on-demand (Optional)

In a nutshell, HTTP is the protocol that is leveraged to implement an API that complies with these constraints. But again, what does all this mean?

As you already know by now, HTTP is a client-server protocol. That’s the first REST constraint.

HTTP is a stateless protocol, as required by the second constraint, because when a server sends back a response to a client request, the transaction is completed and no state information pertaining to this specific transaction is maintained on the server. Any single client request contains all the information required to fully understand and process this request, independent of any previous requests.

Ever heard of cache servers? An HTTP resource may be cached at intermediate cache servers along the path between the client and server if labeled as cacheable by the sending endpoint. Moreover, HTTP defines a number of header fields to support this functionality. Therefore, the third REST constraint is satisfied.

HTTP actually does not deal with resources, but rather with representations of these resources. Data retrieved from a server may be encoded in JSON or XML. Each of these is a different representation of the resource. A client may send a POST request message to edit the configuration of an interface on a router, and in the process, communicates a desired state for a resource, which, in this case, is the interface configuration. Therefore, a representation is used to express a past, current or desired state of a resource in a format that can be transported by HTTP, such as JSON, XML or YAML. This is actually where the name REpresentational State Transfer (REST) comes from.

The concept of representations takes us directly to the fourth constraint: regardless of the type of resource or the characteristics of the resource representation expressed in a message, HTTP provides the same interface to all resources. HTTP adheres to the fourth constraint by providing a uniform interface for clients to address resources on servers.

The fifth constraint dictates that a system leveraging RESTful APIs should be able to support a layered architecture. A layered architecture segregates the functional components into a number of hierarchical layers, where each layer is only aware of the existence of the adjacent layers and communicates only with those adjacent layers. For example, a client may interact with a proxy server, not the actual HTTP server, while not being aware of this fact. On the other end of the connection, a server processing and responding to client requests in the frontend may rely on an authentication server to authenticate those clients.

The final constraint, which is an optional constraint, is support for Code on Demand (CoD). CoD is the capability of downloading software from the server to the client, to be executed by the client, such as Java applets or JavaScript code downloaded from a web site and run by the client web browser.

Therefore, by providing appropriate, REST-compliant transport to a protocol in order to expose an API to the external world, HTTP makes that protocol or API RESTful.

Are you still wondering what HTTP, REST and RESTful APIs are?

JSON – JavaScript Object Notation

Similar to XML, JSON is used to encode the data in the body of HTTP messages. As a matter of fact, the supported encoding is decided by the protocol used, not by HTTP. NETCONF only supports XML while RESTCONF supports both XML and JSON. Other APIs may only support JSON. Since XML was covered in Part 2 of this series, we will cover JSON in this part.

Unlike XML, which was developed to be primarily machine-readable, JSON was developed to be a human-friendly form of encoding. JSON is standardized in RFC 8259. JSON is much simpler than XML and is based on four simple rules:

1. Represent your objects as key-value pairs where the key and value are separated with a colon
2. Enclose each object in curly braces
3. Enclose arrays in square brackets (more on arrays in a minute)
4. Separate objects or array values with commas

Let’s start with a very simple example – an IP address:

{"ip": ""}

The object here is enclosed in curly braces as per rule #2. The key (ip) and the value are separated by a colon as per rule #1. Keep in mind that the key must be a string and therefore will always be enclosed in double quotes. The value is also a string in the example since it is enclosed in double quotes. Generally, a value may be any of the following types:

◉ String: such as "Khaled" – always enclosed in double quotes
◉ Number: A positive, negative, fraction, or exponential number, not enclosed in quotes
◉ Another JSON object: shown in the next example
◉ Array: An ordered list of values (of any type) such as ["Khaled", "Mohammed", "Abuelenain"]
◉ Boolean: true or false
◉ null: single value of null
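A quick way to get a feel for these value types is to parse a small JSON document with Python’s standard json module; the keys and values below are made-up examples:

```python
import json

# One key of each value type listed above. Note how JSON's true and null
# map to Python's True and None after parsing.
doc = '''
{
  "name": "Khaled",
  "count": 3,
  "ratio": 1.5,
  "tags": ["a", "b"],
  "nested": {"enabled": true},
  "missing": null
}
'''

data = json.loads(doc)

print(type(data["tags"]).__name__)    # list
print(data["nested"]["enabled"])      # True
print(data["missing"])                # None
```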

A very interesting visual description of value types is given on the JSON website.

Now assume that there is an object named address that has two child JSON objects, ip and netmask. That will be represented as follows:

{
  "address": {
    "ip": "",
    "netmask": ""
  }
}

Notice that the objects ip and netmask are separated by a comma as per rule #4.
What if the address object needs to hold more than one IP address (primary and secondary) ? Then it can be represented as follows:

{
  "address": [
    {
      "ip": "",
      "netmask": ""
    },
    {
      "ip": "",
      "netmask": ""
    }
  ]
}

In this example, address is a JSON object whose value is an array, therefore everything after the colon following the key is enclosed in square brackets. This array has two values, each a JSON object in itself. So this is an array of objects. Notice that, in addition to the comma separating the ip and netmask objects inside each object, there is also a comma after the closing curly brace around the middle of the example. This comma separates the two values of the array.
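As a sketch, the same structure can be built in Python and serialized with the standard json module, which applies rules 1 through 4 automatically. The IP addresses below are illustrative placeholders:

```python
import json

# An "address" object whose value is an array of two objects, mirroring
# the example above. The addresses are documentation-range placeholders.
address = {
    "address": [
        {"ip": "", "netmask": ""},
        {"ip": "", "netmask": ""},
    ]
}

# json.dumps emits the colons, braces, brackets and commas for us.
text = json.dumps(address, indent=2)
print(text)

# And json.loads round-trips the text back to the same Python structure.
assert json.loads(text) == address
```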
And that’s about all you need to know about JSON !

Standards-based vs. Native RESTful APIs: RESTCONF & NX-API REST

As you have seen in the previous section, any RESTful protocol/API employing HTTP at the Transport Layer (of the programmability stack – NOT the OSI 7-layer model) will need to define three things:

1. What encoding(s) does it support (XML, JSON, YAML, others)?

2. How to construct a URI to target a specific resource? A URI is a hierarchical way of addressing resources, and in its absolute form, a URI will uniquely identify a specific resource. Each protocol defines a different URI hierarchy to achieve that.

3. Which data models are supported and, combined with point number 1 above, will decide what the message body will look like.

RESTCONF is a standards-based RESTful API defined in RFC 8040. RESTCONF is compatible with NETCONF and is sometimes referred to as the RESTful version of NETCONF. This means that they can both coexist on the same platform without conflict. Although RESTCONF supports a single “conceptual” datastore, there are a set of rules that govern the interaction of RESTCONF with NETCONF with respect to datastores and configuration. While NETCONF supports XML only, RESTCONF supports both XML and JSON. RESTCONF supports the same YANG data models supported by NETCONF. Therefore, a message body in RESTCONF will be model-based just as you have seen with NETCONF, with a few caveats. However, RESTCONF only implements a subset of the functions of NETCONF.

The architectural components of RESTCONF can be summarized by the 4-layer model in Figure 3. The 4 layers are Transport, Messages, Operations and Content. Just like NETCONF.


Figure 3 – The RESTCONF architectural 4-Layer model

Now to the RESTful part of RESTCONF. RESTCONF supports all HTTP methods discussed so far. The key to understanding RESTCONF then is to understand how to construct a URI to target a resource. While it is out of scope of this (very) brief introductory post to get into the fine details of the protocol, it is important to get at least a glimpse of RESTCONF URI construction, as it is the single most important factor differentiating the protocol right after its compatibility with NETCONF. The resource hierarchy for RESTCONF is illustrated in Figure 4.


Figure 4 – Resource hierarchy in RESTCONF

The branch of this hierarchy that relates to configuration management and datastores is
API -> Datastore -> Data. A URI in RESTCONF therefore has the general format of the device address, followed by the API resource, followed by the datastore resource, followed by the path to the targeted data.

Without getting into too much detail, the Cisco implementation of RESTCONF uses the string restconf as the value of the API Resource and the string data as the value of the Datastore Resource. So on the DevNet IOS-XE Sandbox, for example, all RESTCONF URIs will start with the device address followed by /restconf/data. In the next section you will see how to configure a loopback address using a RESTCONF URI and a YANG data model.
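As an illustrative sketch, a RESTCONF URI can be assembled from this hierarchy; the host name and resource path below are placeholders, not the actual sandbox values:

```python
# Compose a RESTCONF URI from its hierarchical segments:
# API resource ("restconf"), datastore resource ("data"), then the
# YANG-modeled path to the targeted resource.
host = "router.example.com"  # placeholder device address
api_root = "restconf"        # API resource (Cisco implementation)
datastore = "data"           # Datastore resource
path = "ietf-interfaces:interfaces/interface=Loopback123"

uri = f"https://{host}/{api_root}/{datastore}/{path}"
print(uri)
```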

Now on the other side of the spectrum are native RESTful APIs. Native RESTful APIs are vendor-specific and are usually platform-specific as well. One example of a RESTful API that is widely used by the programmability community is NX-API REST, which is exposed by programmable Nexus switches. NX-API REST is a RESTful API that uses HTTP request and response messages composed of methods, URIs, data models and status codes, like all other RESTful APIs. However, this API uses a Cisco-specific data model called the Management Information Tree (MIT). The MIT is composed of Managed Objects (MO). Each MO represents a feature or element on the switch that can be uniquely targeted by a URI.

When the switch receives an HTTP request to an NX-API REST URI, an internal Data Management Engine (DME) running on the switch validates the URI, substitutes missing values with default values where applicable, and, if the client is authorized to perform the method stated in the client request, updates the MIT accordingly.

Similar to RESTCONF, NX-API REST supports payload bodies in both XML and JSON.

RESTful APIs and Python

The requests package has been developed to abstract the implementation of an HTTP client in Python. The Python Software Foundation recommends using the requests package whenever a “higher-level” HTTP client interface is needed.

The requests package is not part of the standard Python library, therefore it has to be manually installed using pip. Example 1 shows the installation of requests using pip3.7.

Example 1 Installing the requests package using pip

[kabuelenain@server1 ~]$ sudo pip3.7 install requests

Collecting requests

  Using cached

Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.7/site-packages (from requests) (2.7)

Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/site-packages (from requests) (2018.10.15)

Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.7/site-packages (from requests) (3.0.4)

Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/site-packages (from requests) (1.24.1)

Installing collected packages: requests

Successfully installed requests-2.22.0

[kabuelenain@server1 ~]$

After the requests package is installed, you are ready to import it into your code. Using the requests package is primarily based on creating a response object, and then extracting the required information from that response object.

The simplified syntax for creating a response object is:

Response_Object = requests.method(uri, headers=headers, data=message_body)

To create an object for a GET request, in place of requests.method, use requests.get. For a POST request, use, and so forth.

Replace the uri parameter in the syntax with the target URI. The headers parameter will hold the headers and the data parameter will hold the request message body. The uri should be a string, the headers parameter should be a dictionary and the data parameter may be provided as a dictionary, string, or list. The parameter data=payload may be replaced by json=payload, in which case the payload will be encoded into JSON automatically.
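To see what requests will actually put on the wire without sending anything, you can build and prepare a request object; the URL and header values below are illustrative placeholders, not the sandbox’s real address:

```python
import requests

# Build (but do not send) a GET request so we can inspect the method,
# absolute URL and headers that requests would transmit.
req = requests.Request(
    "GET",
    "https://router.example.com/restconf/data/ietf-interfaces:interfaces",
    headers={"Accept": "application/yang-data+json"},
)
prepared = req.prepare()

print(prepared.method)             # GET
print(prepared.url)
print(prepared.headers["Accept"])  # application/yang-data+json
```

Preparing a request this way is handy for debugging: it shows exactly what a `requests.get(...)` call would send, before any network traffic happens.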

Some of the information that you can extract from the Response Object is:

◉ Response_Object.content: The response message body (data) from the server as a byte object (not decoded).
◉ Response_Object.text: The decoded response message body from the server. The encoding is chosen automatically based on an “educated guess”.
◉ Response_Object.encoding: The encoding used to convert Response_Object.content to Response_Object.text. You can manually set this to a specific encoding of your choice.
◉ Response_Object.json(): The decoded response message body (data) from the server encoded in json, if the response resembles a json object (otherwise an error is returned).
◉ Response_Object.url: The full (absolute) target uri used in the request.
◉ Response_Object.status_code: The response status code.
◉ Response_Object.request.headers: The request headers.
◉ Response_Object.headers: The response headers.

In Example 2, a POST request is sent to the DevNet IOS-XE Sandbox to configure interface Loopback123. Looking at the URI used, you can guess that the Python script is consuming the RESTCONF API exposed by the router. Also, from the URI as well as the message body, it is evident that the YANG model used in this example is ietf-interfaces.yang.

Example 2 POST request using the requests package to configure interface Loopback123

#!/usr/bin/env python3

import requests

url = ''

headers = {'Content-Type': 'application/yang-data+json',
    'Authorization': 'Basic ZGV2ZWxvcGVyOkMxc2NvMTIzNDU='}
payload = '''
{
  "ietf-interfaces:interface": {
    "name": "Loopback123",
    "description": "Creating a Loopback interface using Python",
    "type": "iana-if-type:softwareLoopback",
    "enabled": true,
    "ietf-ip:ipv4": {
      "address": [
        {
          "ip": "",
          "netmask": ""
        }
      ]
    }
  }
}
'''

Response_Object = requests.post(url, headers=headers, data=payload, verify=False)

print('The server response (data) as a byte object: ','\n\n',Response_Object.content,'\n')

print('The decoded server response (data) from the server: ','\n\n',Response_Object.text,'\n')

print('The encoding used to convert Response_Object.content to Response_Object.text: ','\n\n', Response_Object.encoding,'\n')

print('The full (absolute) URI used in the request: ','\n\n',Response_Object.url,'\n')

print('The response status code: ','\n\n',Response_Object.status_code,'\n')

print('The request headers: ','\n\n',Response_Object.request.headers,'\n')

print('The response headers :','\n\n',Response_Object.headers,'\n')

Example 3 shows the result from running the previous script.

Example 3 Running the script and creating interface Loopback123

[kabuelenain@server1 Python-Scripts]$ ./

/usr/lib/python3.6/site-packages/urllib3/ InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See:


The server response (data) as a byte object: 


The decoded server response (data) from the server: 

The encoding used to convert Response_Object.content to Response_Object.text: 


The full (absolute) URI used in the request:

The response status code: 


The request headers: 

 {'User-Agent': 'python-requests/2.20.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Type': 'application/yang-data+json', 'Authorization': 'Basic ZGV2ZWxvcGVyOkMxc2NvMTIzNDU=', 'Content-Length': '364'}

The response headers :

 {'Server': 'nginx/1.13.12', 'Date': 'Fri, 13 Nov 2020 11:00:28 GMT', 'Content-Type': 'text/html', 'Content-Length': '0', 'Connection': 'keep-alive', 'Location': '', 'Last-Modified': 'Fri, 13 Nov 2020 11:00:14 GMT', 'Cache-Control': 'private, no-cache, must-revalidate, proxy-revalidate', 'Etag': '"1605-265214-914179"', 'Pragma': 'no-cache'}

[kabuelenain@server1 Python-Scripts]$

As you can see, the status code returned in the server response message is 201 (Created), which means that the Loopback interface was successfully created. You may have noticed that the message body (the actual data in the message) is empty, since there is nothing to return to the client. However, the Location header in the response headers (highlighted in the example) carries a new URI that points to the newly created resource.
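In a script, this pattern translates to checking the status code and then reading the Location header. The sketch below simulates the 201 response locally instead of sending a live request, and the Location URI shown is a hypothetical placeholder:

```python
import requests

# Simulate the 201 (Created) response from the example; in a real run this
# object would come from requests.post(...). The Location URI is made up.
resp = requests.models.Response()
resp.status_code = 201
resp._content = b''  # empty body, as in the example output
resp.headers['Location'] = ('https://sandbox.example/restconf/data/'
                            'ietf-interfaces:interfaces/interface=Loopback123')

if resp.status_code == 201:
    # The new resource can now be fetched directly with a GET on this URI.
    print('Resource created at:', resp.headers['Location'])
else:
    print('Unexpected status:', resp.status_code)
```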