Thursday 19 November 2020

Introduction to Programmability – Part 3

This is Part 3 of the “Introduction to Programmability” series. If you haven’t already done so, I strongly urge you to check out Parts 1 & 2 before you proceed. You will be missing out on a lot of interesting information if you don’t.

Part 1 of this series defined and explained the terms Network Management, Automation, Orchestration, Data Modeling, Programmability and API. It also introduced the Programmability Stack and explained how an application at the top layer of the stack, wishing to consume an API exposed by a device at the bottom of the stack, does that.

Part 2 then introduced and contrasted two types of APIs: RPC-based APIs and RESTful APIs. It also introduced the NETCONF protocol, which is an RPC-based protocol/API, along with the (only) encoding that it supports and uses: XML.

Note: You will notice that I use the words API and protocol interchangeably. As mentioned in Part 2, a NETCONF API means that the client and server will use the NETCONF protocol to communicate together. The same applies to a RESTCONF API. Therefore, both NETCONF and RESTCONF may be labelled as protocols, or APIs.

In this part of the series, you will see the other type of API in action, namely RESTful APIs. You will first see how vanilla HTTP works. Then we will build on what we introduced in Part 2, dig just a little deeper into REST, and explain the relationship between HTTP, REST and RESTful APIs. I like to classify RESTful APIs into two types: industry-standard and native (aka vendor/platform-specific). We will briefly cover RESTCONF, an industry-standard API, as well as NX-API REST, a native API exposed by programmable Nexus switches. Finally, you will see how to consume a RESTful API using Python.

On a side note, you may be wondering how so much information will be covered in one blog post. Well, there has always been a tradeoff between depth and breadth with respect to topic coverage. In this series, I attempt to familiarize you with as many topics as possible and answer as many common questions related to programmability as feasible. The intention is not for you to come out of this 15-minute read an expert, but to be able to identify concepts and technologies that have thus far sounded foreign to you as a network engineer.

HTTP

As a network engineer, before I got into network programmability many many years ago, I knew that HTTP was the protocol on which the Internet was based. I knew, as required by my work, that HTTP was a client-server protocol that used TCP port 80 (and 443 in the case of HTTPS). I also knew it had something to do with the URIs I entered into my web browser to navigate to a web page. That was it.

But what really is HTTP?

HTTP stands for HyperText Transfer Protocol. Hypertext is text that contains one or more hyperlinks. A hyperlink is a reference or pointer to data known as the resource or the target of the hyperlink. The text of the hyperlink itself is called the anchor text. That target may be a number of things such as a webpage on the Internet, a section in a Word document or a location on your local storage system.

A little piece of trivia: In 1965 an American scientist called Ted Nelson coined the term hypertext to describe non-linear text. Non-linear refers to a lack of hierarchy for the links between the documents. Then in 1989, Sir Timothy Berners-Lee wrote the first web client and server implementation that utilized hypertext. That protocol would be used to fetch the data that a hyperlink pointed to and eventually became HTTP. Today, Sir Timothy is best known as the inventor of the World Wide Web.

Therefore, clicking the anchor text https://blogs.cisco.com/developer/intro-to-programmability-2 will send a request to the blogs.cisco.com server to fetch the resource at /developer/intro-to-programmability-2, which is the HTML content of the webpage at that URI. This content will be parsed and rendered by the web browser and displayed in the browser window for you to view.

So an HTTP workflow involves a client establishing a TCP connection to an HTTP server. This connection is done over port 80 by default, but the port is usually configurable. Once the TCP session is up, the client sends a number of HTTP request messages. The server responds to each request message with a response message. Once the HTTP transactions are completed, the TCP session is torn down by either of the endpoints.

A client HTTP request message includes a Uniform Resource Identifier (URI) that is a hierarchical address composed of segments separated by a slash (/). This URI identifies the resource on the server that the client is targeting with this request. In the realm of network programmability, the resource identified by a URI may be the interface configuration on a switch or the neighbors in the OSPF neighbor table on a router.

The client request message will also include an HTTP method that indicates what the client wishes to do with the resource targeted by the URI in the same request. An example of a method is GET, which is used to retrieve the resource identified by the target URI. For example, a GET request to the URI identifying the interface configuration of interface Loopback 100 will return the configuration of that interface. A POST method, on the other hand, is used to send data to the server, typically to create a new resource under the target URI, while methods such as PUT and PATCH replace or edit existing data. You would use one of these methods to change the configuration of interface Loopback 100.

In addition to the URI and method, an HTTP request includes a number of header fields whose values hold the metadata for the request. Header fields are used to attach information related to the HTTP connection, server, client, message and the data in the message body.

Figure 1 shows the anatomy of an HTTP request. At the top of the request is the start line composed of the HTTP method, URI and HTTP version. Then comes the headers section. Each header is a key-value pair, separated by a colon. Each header is on a separate line. In more technical terms, each header is delimited by a Carriage Return Line Feed (CRLF). The headers section is separated from the message body with an empty line (two CRLFs). In the figure, the message body is empty, since this is a GET request: the client is requesting a resource from the server, in this case, the operational data of interface GigabitEthernet2, so there is no data to send in the request, and hence, no message body.


Figure 1 – Anatomy of an HTTP request
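The anatomy shown in Figure 1 can be reproduced in a few lines of Python. This is only an illustrative sketch: it assembles the text of a GET request by hand (no bytes are sent to any server) so that the start line, the CRLF-delimited headers and the empty line that ends the headers section are all visible:

```python
# Assemble the text of an HTTP GET request by hand to expose its
# anatomy. Nothing is sent on the wire; this only builds the string.
CRLF = "\r\n"  # Carriage Return Line Feed - the header delimiter

start_line = "GET /restconf/data/ietf-interfaces:interfaces HTTP/1.1"
headers = {
    "Host": "sandbox-iosxe-latest-1.cisco.com",
    "Accept": "application/yang-data+json",
}

# Start line, one "Key: Value" header per line, then an empty line
# (two consecutive CRLFs). A GET request carries no message body.
request_text = (
    start_line + CRLF
    + CRLF.join(f"{key}: {value}" for key, value in headers.items())
    + CRLF + CRLF
)

print(request_text)
```

Printing the string shows exactly the layout of Figure 1: method, URI and version on the first line, one header per line, and a blank line with no body after it.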

When a server receives a request from the client, it processes the request, and sends back an HTTP response message. An HTTP response message will include a status code that indicates the result of processing the request, and a status text that describes the code. For example, the status code and text 200 OK indicate that the request was successfully processed, while the (notorious) code and text 404 Not Found indicate that the resource targeted by the client was not found on the server.

The format of a response message is very similar to a request, except that the start line is composed of the HTTP version, followed by a status code and text. Also, the body is usually not empty. Figure 2 shows the anatomy of an HTTP response message.


Figure 2 – Anatomy of an HTTP response message
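The response side can be sketched the same way. The snippet below splits a canned response message (the JSON body is made up for illustration) into its status line, headers and body, mirroring Figure 2:

```python
# Split a canned HTTP response message into its three parts:
# status line, headers and body. The JSON body is illustrative.
CRLF = "\r\n"
raw_response = CRLF.join([
    "HTTP/1.1 200 OK",
    "Content-Type: application/yang-data+json",
    "",                                    # empty line ends the headers
    '{"ietf-interfaces:interfaces": {}}',  # message body
])

# The first empty line (two consecutive CRLFs) separates the
# headers section from the message body.
head, _, body = raw_response.partition(CRLF + CRLF)
status_line, *header_lines = head.split(CRLF)
version, status_code, status_text = status_line.split(" ", 2)

print(version, status_code, status_text)  # HTTP/1.1 200 OK
print(body)
```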

Studying, and hence understanding and using, HTTP revolves around the following points:

– Target URI: You need to know the correct syntax rules of a URI, such as which characters are allowed and which are not, and what each segment of the URI should contain. URI segments are called scheme, authority, path, query and fragment. You also need to understand the correct semantics rules of a URI, that is, to be able to construct URIs to correctly target the resources that you want to operate on. URI syntax rules are universal. Semantics rules, on the other hand, depend on which protocol you are working with. In other words, a syntactically correct URI that targets a specific resource using RESTCONF will not be the same URI to target that same resource on that same device using another RESTful API, such as NX-API REST.
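To experiment with URI syntax, Python's standard urllib.parse module splits a URI into exactly these named segments. The URI below is illustrative (the depth query parameter is a RESTCONF example), not a live endpoint:

```python
from urllib.parse import urlsplit

# Split an illustrative URI into the five segments named above.
uri = ("https://sandbox-iosxe-latest-1.cisco.com:443"
       "/restconf/data/ietf-interfaces:interfaces?depth=1#top")

parts = urlsplit(uri)
print("scheme:   ", parts.scheme)    # https
print("authority:", parts.netloc)    # host and port
print("path:     ", parts.path)      # /restconf/data/ietf-interfaces:interfaces
print("query:    ", parts.query)     # depth=1
print("fragment: ", parts.fragment)  # top
```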

– Request method: You need to know the different request methods and understand the effect each has on a resource. GET fetches a resource (such as a web page or interface configuration) while POST creates a resource (such as adding a record to a database, or a new Loopback interface on a router). Commonly used methods are GET, HEAD, OPTIONS, POST, PATCH, PUT and DELETE. The first three are used to retrieve a resource while the other four are used to create, edit, replace or delete a resource.

– Server status codes: Status codes returned by servers in their HTTP response messages are classified into the following sub-categories:

◉ 1xx: Informational messages to the client. The purpose of these response messages is to convey the current status of the connection or transaction in an interim response, before the final response is sent to the client.

◉ 2xx: The request was successfully processed by the server. Most common codes in this category are 200 (OK) and 201 (Created). The latter is used when a new resource is created by the server as a result of the request sent from the client.

◉ 3xx: Used to redirect the client, such as when the client requests a web page and the server attempts to redirect the client to a different web page (common use-case is when a web page owner changes the location of the web page and wishes to redirect clients attempting to browse to the old URI).

◉ 4xx: Signals that there is something wrong with the request received from the client. Common codes in this category are 400 (Bad Request), 401 (Unauthorized), 403 (Forbidden), and 404 (Not Found).

◉ 5xx: Signals an error on the server side. Common status codes in this category are 500 (Internal Error), 503 (Service Unavailable), and 504 (Gateway Timeout).
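As a quick reference for these classes, the class of any status code can be derived from its hundreds digit. The sketch below does just that, borrowing the standard reason phrases from Python's http.HTTPStatus:

```python
from http import HTTPStatus

# Derive a status code's class from its hundreds digit; reason
# phrases come from the standard library's HTTPStatus enumeration.
CLASSES = {
    1: "Informational",
    2: "Success",
    3: "Redirection",
    4: "Client error",
    5: "Server error",
}

def classify(code: int) -> str:
    return CLASSES[code // 100]

for code in (100, 200, 301, 404, 503):
    print(code, HTTPStatus(code).phrase, "-", classify(code))
```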

– Message body: Understanding how to construct the message body. If model-driven programmability is used, the message body will depend on two things:

◉ Syntax rules governed by the encoding used: a message encoded in XML will have different syntax rules than a message encoded in JSON, even if both are intended to accomplish the same task

◉ Semantics rules governed by the data model used: You may target the same resource and accomplish the same end result using two (or more) different message bodies, each depending on the hierarchy of elements defined by the referenced data model.

– Headers: Understanding which headers to include in your request message is very important to get the results you want. For example, in Figure 1 the first header right after the start line Accept: application/yang-data+json is the client’s way of telling the server (the DevNet IOS-XE router/sandbox in this case) that it will only accept the requested resource (the interface operational data) encoded in JSON. If this field was application/yang-data+xml, the server’s response body would have been encoded in XML instead. Header values in response messages also provide valuable information related to the server on which the resource resides (called origin server), any cache servers in the path, the resources returned, as well as information that will assist to troubleshoot error conditions in case the transaction did not go as intended.

HTTP started off at version 0.9, then version 1.0. The current version is 1.1 and is referred to as HTTP/1.1. Most of HTTP/1.1 is defined in the six RFCs 7230 – 7235, each RFC covering a different functional part of the protocol.

HTTP/2 is also in use today; however, RFC 7540 states that “This specification [HTTP/2.0] is an alternative to, but does not obsolete, the HTTP/1.1 message syntax. HTTP’s existing semantics remain unchanged.” This means that HTTP/2.0 does not change the message format of HTTP/1.1. It simply introduces some enhancements to HTTP/1.1. Therefore, everything you have read so far in this blog post remains valid for HTTP/2.

HTTP/2 is based on a protocol called SPDY, developed by Google. HTTP/2 introduces a new framing format that breaks up an HTTP message into a stream of frames and allows multiplexing frames from different streams on the same TCP connection. This, along with several other enhancements and features, promises far superior performance over HTTP/1.1. The gRPC protocol is based on HTTP/2.

It may come as a surprise to some, but HTTP/3 is also under active development; however, it is not based on TCP at all. HTTP/3 is based on another protocol called QUIC, initially developed by, as you may have guessed, Google, then later adopted by the IETF and described in draft-ietf-quic-transport. HTTP/3 takes performance to a whole new level. However, HTTP/3 is still in its infancy.

HTTP uses the Authorization, WWW-Authenticate, Proxy-Authorization and Proxy-Authenticate headers for authentication. However, in order to provide data confidentiality and integrity, HTTP is coupled with Transport Layer Security (TLS 1.3). HTTP over TLS is called HTTPS, for HTTP Secure.

But what does HTTP have to do with REST and RESTful APIs?

As you have read in Part 2 of this series, REST is a framework for developing APIs. It lays down 6 constraints, 5 mandatory and 1 optional. As a reminder, here are the constraints:

◉ Client-Server based
◉ Stateless
◉ Cacheable
◉ Have a uniform interface
◉ Based on a layered system
◉ Utilize code-on-demand (Optional)

In a nutshell, HTTP is the protocol that is leveraged to implement an API that complies with these constraints. But again, what does all this mean?

As you already know by now, HTTP is a client-server protocol. That’s the first REST constraint.

HTTP is a stateless protocol, as required by the second constraint, because when a server sends back a response to a client request, the transaction is completed and no state information pertaining to this specific transaction is maintained on the server. Any single client request contains all the information required to fully understand and process this request, independent of any previous requests.

Ever heard of cache servers? An HTTP resource may be cached at intermediate cache servers along the path between the client and server if labeled as cacheable by the sending endpoint. Moreover, HTTP defines a number of header fields to support this functionality. Therefore, the third REST constraint is satisfied.

HTTP actually does not deal with resources, but rather with representations of these resources. Data retrieved from a server may be encoded in JSON or XML. Each of these is a different representation of the resource. A client may send a POST request message to edit the configuration of an interface on a router, and in the process, communicates a desired state for a resource, which, in this case, is the interface configuration. Therefore, a representation is used to express a past, current or desired state of a resource in a format that can be transported by HTTP, such as JSON, XML or YAML. This is actually where the name REpresentational State Transfer (REST) comes from.

The concept of representations takes us directly to the fourth constraint: regardless of the type of resource or the characteristics of the resource representation expressed in a message, HTTP provides the same interface to all resources. HTTP adheres to the fourth constraint by providing a uniform interface for clients to address resources on servers.

The fifth constraint dictates that a system leveraging RESTful APIs should be able to support a layered architecture. A layered architecture segregates the functional components into a number of hierarchical layers, where each layer is only aware of the existence of the adjacent layers and communicates only with those adjacent layers. For example, a client may interact with a proxy server, not the actual HTTP server, while not being aware of this fact. On the other end of the connection, a server processing and responding to client requests in the frontend may rely on an authentication server to authenticate those clients.

The final constraint, which is an optional constraint, is support for Code on Demand (CoD). CoD is the capability of downloading software from the server to the client, to be executed by the client, such as Java applets or JavaScript code downloaded from a web site and run by the client web browser.

Therefore, by providing appropriate, REST-compliant transport to a protocol in order to expose an API to the external world, HTTP makes that protocol or API RESTful.

Are you still wondering what HTTP, REST and RESTful APIs are?

JSON – JavaScript Object Notation


Similar to XML, JSON is used to encode the data in the body of HTTP messages. As a matter of fact, the supported encoding is decided by the protocol used, not by HTTP. NETCONF only supports XML while RESTCONF supports both XML and JSON. Other APIs may only support JSON. Since XML was covered in Part 2 of this series, we will cover JSON in this part.

Unlike XML, which was developed to be primarily machine-readable, JSON was developed to be a human-friendly form of encoding. JSON is standardized in RFC 8259. JSON is much simpler than XML and is based on four simple rules:

1. Represent your objects as key-value pairs where the key and value are separated with a colon
2. Enclose each object in curly braces
3. Enclose arrays in square brackets (more on arrays in a minute)
4. Separate objects or array values with commas

Let’s start with a very simple example – an IP address:

{"ip": "10.20.30.40"}

The object here is enclosed in curly braces as per rule #2. The key (ip) and value (10.20.30.40) are separated by a colon as per rule #1. Keep in mind that the key must be a string and therefore will always be enclosed in double quotes. The value is also a string in the example since it is enclosed in double quotes. Generally, a value may be any of the following types:

◉ String: such as "Khaled" – always enclosed in double quotes
◉ Number: A positive, negative, fraction, or exponential number, not enclosed in quotes
◉ Another JSON object: shown in the next example
◉ Array: An ordered list of values (of any type) such as ["Khaled", "Mohammed", "Abuelenain"]
◉ Boolean: true or false
◉ null: single value of null

A very interesting visual description of value types is given here: https://www.json.org/.
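Python's standard json module maps each of these value types to a native Python type, which makes it easy to experiment with the rules above. The keys and values in this sketch are made up for illustration:

```python
import json

# A single JSON object exercising every value type listed above;
# the keys and values are made up for illustration.
doc = '''
{
  "hostname": "router1",
  "mtu": 1500,
  "dns": ["8.8.8.8", "8.8.4.4"],
  "address": {"ip": "10.20.30.40"},
  "enabled": true,
  "description": null
}
'''

data = json.loads(doc)
for key, value in data.items():
    print(key, "->", type(value).__name__)
```

Strings become str, numbers become int or float, arrays become lists, nested objects become dictionaries, true/false become booleans, and null becomes None.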

Now assume that there is an object named address that has two child JSON objects, ip and netmask. That will be represented as follows:

{
  "address": {
    "ip": "100.100.100.100",
    "netmask": "255.255.255.255"
  }
}

Notice that the objects ip and netmask are separated by a comma as per rule #4.

What if the address object needs to hold more than one IP address (primary and secondary)? Then it can be represented as follows:

{
  "address": [
    {
      "ip": "100.100.100.100",
      "netmask": "255.255.255.255"
    },
    {
      "ip": "200.200.200.200",
      "netmask": "255.255.255.255"
    }
  ]
}

In this example, address is a JSON object whose value is an array; therefore, everything after the colon following the key is enclosed in square brackets. This array has two values, each a JSON object in itself. So this is an array of objects. Notice that in addition to the comma separating the ip and netmask objects inside each object, there is also a comma after the closing curly brace around the middle of the example. This comma separates the two values of the array.
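Parsing this last example with Python's json module shows how the array of objects becomes a list of dictionaries; extracting both IP addresses is then a one-liner:

```python
import json

# Load the two-address example above; the JSON array becomes a
# Python list of dictionaries.
doc = '''
{
  "address": [
    {"ip": "100.100.100.100", "netmask": "255.255.255.255"},
    {"ip": "200.200.200.200", "netmask": "255.255.255.255"}
  ]
}
'''

data = json.loads(doc)
ips = [entry["ip"] for entry in data["address"]]
print(ips)  # ['100.100.100.100', '200.200.200.200']
```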

And that’s about all you need to know about JSON!

Standards-based vs. Native RESTful APIs: RESTCONF & NX-API REST


As you have seen in the previous section, any RESTful protocol/API employing HTTP at the Transport Layer (of the programmability stack – NOT the OSI 7-layer model) will need to define three things:

1. What encoding(s) does it support (XML, JSON, YAML, others)?

2. How is a URI constructed to target a specific resource? A URI is a hierarchical way of addressing resources, and in its absolute form, a URI will uniquely identify a specific resource. Each protocol defines a different URI hierarchy to achieve that.

3. Which data models are supported? This, combined with point number 1 above, decides what the message body will look like.

RESTCONF is a standards-based RESTful API defined in RFC 8040. RESTCONF is compatible with NETCONF and is sometimes referred to as the RESTful version of NETCONF. This means that they can both coexist on the same platform without conflict. Although RESTCONF supports a single “conceptual” datastore, there are a set of rules that govern the interaction of RESTCONF with NETCONF with respect to datastores and configuration. While NETCONF supports XML only, RESTCONF supports both XML and JSON. RESTCONF supports the same YANG data models supported by NETCONF. Therefore, a message body in RESTCONF will be model-based just as you have seen with NETCONF, with a few caveats. However, RESTCONF only implements a subset of the functions of NETCONF.

The architectural components of RESTCONF can be summarized by the 4-layer model in Figure 3. The 4 layers are Transport, Messages, Operations and Content. Just like NETCONF.


Figure 3 – The RESTCONF architectural 4-Layer model

Now to the RESTful part of RESTCONF. RESTCONF supports all the HTTP methods discussed so far. The key to understanding RESTCONF, then, is to understand how to construct a URI to target a resource. While it is out of the scope of this (very) brief introductory post to get into the fine details of the protocol, it is important to get at least a glimpse of RESTCONF URI construction, as it is the single most important factor differentiating the protocol, right after its compatibility with NETCONF. The resource hierarchy for RESTCONF is illustrated in Figure 4.


Figure 4 – Resource hierarchy in RESTCONF

The branch of this hierarchy that relates to configuration management and datastores is
API -> Datastore -> Data. A URI in RESTCONF has the general format of

https://device_address:port/API_Resource/Datastore_Resource/Resource-Path

Without getting into too much detail, the Cisco implementation of RESTCONF uses the string restconf as the value of the API Resource and the string data as the value of the Datastore Resource. So on the DevNet IOS-XE Sandbox, for example, all RESTCONF URIs will start with https://sandbox-iosxe-latest-1.cisco.com:443/restconf/data/. In the next section you will see how to configure a loopback interface using a RESTCONF URI and a YANG data model.
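Putting the pieces together, a RESTCONF URI can be assembled from the general format above. This is only a sketch: the helper function and the Loopback100 resource path are illustrative, while the list-key syntax interface=Loopback100 follows RFC 8040:

```python
# Sketch of RESTCONF URI construction following the
# API -> Datastore -> Data hierarchy. The helper function is
# hypothetical; the list-key syntax "interface=Loopback100"
# follows RFC 8040.
def restconf_uri(host: str, resource_path: str, port: int = 443) -> str:
    # Cisco's implementation uses "restconf" as the API resource
    # and "data" as the datastore resource.
    return f"https://{host}:{port}/restconf/data/{resource_path}"

uri = restconf_uri("sandbox-iosxe-latest-1.cisco.com",
                   "ietf-interfaces:interfaces/interface=Loopback100")
print(uri)
```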

Now on the other side of the spectrum are native RESTful APIs. Native RESTful APIs are vendor-specific and are usually platform-specific as well. One example of a RESTful API that is widely used by the programmability community is NX-API REST, which is exposed by programmable Nexus switches. NX-API REST is a RESTful API that uses HTTP request and response messages composed of methods, URIs, data models and status codes, like all other RESTful APIs. However, this API uses a Cisco-specific data model called the Management Information Tree (MIT). The MIT is composed of Managed Objects (MO). Each MO represents a feature or element on the switch that can be uniquely targeted by a URI.

When the switch receives an HTTP request to an NX-API REST URI, an internal Data Management Engine (DME) running on the switch validates the URI, substitutes missing values with default values where applicable, and, if the client is authorized to perform the method stated in the client request, updates the MIT accordingly.

Similar to RESTCONF, NX-API REST supports payload bodies in both XML and JSON.

RESTful APIs and Python


The requests package has been developed to abstract the implementation of an HTTP client using Python. The Python Software Foundation recommends using the requests package whenever a “higher-level” HTTP client-interface is needed (https://docs.python.org/3/library/urllib.request.html).

The requests package is not part of the standard Python library, therefore it has to be manually installed using pip. Example 1 shows the installation of requests using pip3.7.

Example 1 Installing the requests package using pip

[kabuelenain@server1 ~]$ sudo pip3.7 install requests

Collecting requests

  Using cached https://files.pythonhosted.org/packages/51/bd/23c926cd341ea6b7dd0b2a00aba99ae0f828be89d72b2190f27c11d4b7fb/requests-2.22.0-py2.py3-none-any.whl

Requirement already satisfied: idna<2.9,>=2.5 in /usr/local/lib/python3.7/site-packages (from requests) (2.7)

Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.7/site-packages (from requests) (2018.10.15)

Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/local/lib/python3.7/site-packages (from requests) (3.0.4)

Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.7/site-packages (from requests) (1.24.1)

Installing collected packages: requests

Successfully installed requests-2.22.0

[kabuelenain@server1 ~]$

After the requests package is installed, you are ready to import it into your code. Using the requests package is primarily based on creating a response object, and then extracting the required information from that response object.

The simplified syntax for creating a response object is:

Response_Object = requests.method(uri, headers=headers, data=message_body)

To create an object for a GET request, in place of requests.method, use requests.get. For a POST request, use requests.post, and so forth.

Replace the uri parameter in the syntax with the target URI. The headers parameter will hold the headers and the data parameter will hold the request message body. The uri argument should be a string, the headers parameter should be a dictionary, and the data parameter may be provided as a dictionary, string, or list. The parameter data=payload may be replaced by json=payload, in which case the payload will be encoded into JSON automatically.

Some of the information that you can extract from the Response Object is:

◉ Response_Object.content: The response message body (data) from the server as a byte object (not decoded).
◉ Response_Object.text: The decoded response message body from the server. The encoding is chosen automatically based on an “educated guess”.
◉ Response_Object.encoding: The encoding used to convert Response_Object.content to Response_Object.text. You can manually set this to a specific encoding of your choice.
◉ Response_Object.json(): The response message body (data) from the server decoded as JSON, if the response body is valid JSON (otherwise an error is raised).
◉ Response_Object.url: The full (absolute) target uri used in the request.
◉ Response_Object.status_code: The response status code.
◉ Response_Object.request.headers: The request headers.
◉ Response_Object.headers: The response headers.

In Example 2, a POST request is sent to the DevNet IOS-XE Sandbox to configure interface Loopback123. Looking at the URI used, you can guess that the Python script is consuming the RESTCONF API exposed by the router. Also, from the URI as well as the message body, it is evident that the YANG model used in this example is ietf-interfaces.yang (available at https://github.com/YangModels/yang/tree/master/vendor/cisco/xe/1731).

Example 2 POST request using the requests package to configure interface Loopback123

#!/usr/bin/env python3

import requests

url = 'https://sandbox-iosxe-latest-1.cisco.com:443/restconf/data/ietf-interfaces:interfaces/'

headers = {'Content-Type': 'application/yang-data+json',
    'Authorization': 'Basic ZGV2ZWxvcGVyOkMxc2NvMTIzNDU='}
payload = '''
{
  "interface": {
    "name": "Loopback123",
    "description": "Creating a Loopback interface using Python",
    "type": "iana-if-type:softwareLoopback",
    "enabled": true,
    "ietf-ip:ipv4": {
      "address": {
        "ip": "10.0.0.123",
        "netmask": "255.255.255.255"
      }
    }
  }
}
'''

Response_Object = requests.post(url,headers=headers,data=payload,verify=False)

print('The server response (data) as a byte object: ','\n\n',Response_Object.content,'\n')

print('The decoded server response (data) from the server: ','\n\n',Response_Object.text,'\n')

print('The encoding used to convert Response_Object.content to Response_Object.text: ','\n\n', Response_Object.encoding,'\n')

print('The full (absolute) URI used in the request: ','\n\n',Response_Object.url,'\n')

print('The response status code: ','\n\n',Response_Object.status_code,'\n')

print('The request headers: ','\n\n',Response_Object.request.headers,'\n')

print('The response headers :','\n\n',Response_Object.headers,'\n')

Example 3 shows the result from running the previous script.

Example 3 Running the script and creating interface Loopback123

[kabuelenain@server1 Python-Scripts]$ ./int-loopback-create.py

/usr/lib/python3.6/site-packages/urllib3/connectionpool.py:847: InsecureRequestWarning: Unverified HTTPS request is being made. Adding certificate verification is strongly advised. See: https://urllib3.readthedocs.io/en/latest/advanced-usage.html#ssl-warnings

  InsecureRequestWarning)

The server response (data) as a byte object: 

 b''

The decoded server response (data) from the server: 

The encoding used to convert Response_Object.content to Response_Object.text: 

 ISO-8859-1

The full (absolute) URI used in the request: 

 https://sandbox-iosxe-latest-1.cisco.com:443//restconf/data/ietf-interfaces:interfaces/

The response status code: 

 201

The request headers: 

 {'User-Agent': 'python-requests/2.20.0', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Type': 'application/yang-data+json', 'Authorization': 'Basic ZGV2ZWxvcGVyOkMxc2NvMTIzNDU=', 'Content-Length': '364'}

The response headers :

 {'Server': 'nginx/1.13.12', 'Date': 'Fri, 13 Nov 2020 11:00:28 GMT', 'Content-Type': 'text/html', 'Content-Length': '0', 'Connection': 'keep-alive', 'Location': 'https://sandbox-iosxe-latest-1.cisco.com/restconf/data/ietf-interfaces:interfaces/interface=Loopback123', 'Last-Modified': 'Fri, 13 Nov 2020 11:00:14 GMT', 'Cache-Control': 'private, no-cache, must-revalidate, proxy-revalidate', 'Etag': '"1605-265214-914179"', 'Pragma': 'no-cache'}

[kabuelenain@server1 Python-Scripts]$

As you can see, the status code returned in the server response message is 201 (Created), which means that the Loopback interface was successfully created. You may have noticed that the message body (actual data in the message) is empty, since there is nothing to return back to the client. However, the Location header in the response headers returns a new URI that points to the newly created resource.

Wednesday 18 November 2020

Envisioning the ideal enterprise collaboration experience


As remote work becomes the new normal for many enterprises, collaboration is being tested like never before. And few feel the pressure more than enterprise IT leaders.

In addition to delivering performance and reliability, they need to envision how their organizations can use technology to achieve results remotely. They’re searching for an ideal vision of a modern meeting and collaboration experience.

What does that look like? A hybrid cloud collaboration platform can provide an attractive answer. It’s a highly scalable, easy-to-access solution that offers substantial benefits.

To help formulate your strategy, here are areas where hybrid cloud collaboration can help you capture more value and maximize results.

Collaboration modernization doesn’t require ripping or replacing

Most enterprise organizations have sizable investments in collaboration technology or collaboration-adjacent solutions: endpoints, applications, PBX systems, and more.

As you seek to optimize your collaboration experience, remember that reshaping and reinvigorating your approach doesn’t require you to do away with these existing investments.

The ideal collaboration experience is seamless

Enterprise organizations employ an average of seven different collaboration platforms, such as calendar, IM, email, meetings, file sharing, customer contact center, and so on.

To streamline the user experience, you need to centralize and consolidate tools. It should be a unified experience, not a disparate one. Consistency is key. As you simplify your strategy, you’ll get stronger diagnostics and analytics, and reduce IT costs, too.

Remove friction from your user’s workflow

How many clicks does it take your users to get from, say, an Excel spreadsheet to video-chatting with their project partner? It’s an important number. There’s no such thing as too few clicks.

So how do you reduce the number of clicks? Integration. Embedding collaboration in everyday work applications like Microsoft Office 365, Epic, or ServiceNow allows users to reach out to colleagues without shifting focus or losing their flow.

Elevate collaboration through cloud and AI/ML

Connecting collaboration to the cloud allows you to accomplish amazing things via artificial intelligence and machine learning.

With cloud capabilities, your platform can automatically create meeting rooms when new tickets enter a tool such as ServiceNow, provide AI-enabled occupant counting for video rooms, allow for hands-free collaboration and meeting room scheduling, automatically output action items from a meeting, or seamlessly pull up additional context as topics are discussed.

The possibilities are virtually endless.

Ensure crystal-clear audio and video

The ideal modern meeting experience has reliably great call quality. Delivering consistently great performance starts on internal networks. Intelligent networking and policy-based automation help ensure better results.

You should be mindful of your collaboration vendor’s infrastructure. Traffic flow between nodes within the provider cloud environment is also critical.

Start building your hybrid cloud collaboration experience

So how do you make your vision for ideal enterprise collaboration via hybrid cloud a reality?


Cisco Performance IT is a methodology that helps enterprise organizations conceive a centralized approach to collaboration—and builds a roadmap to take them there. It delivers increases in efficiency that cost-justify the investment in collaboration. These efficiencies can even enable the investment to pay for itself.

Performance IT can help you find ways to leverage your existing investments and maximize their value as your collaboration evolves.

Tuesday 17 November 2020

Cisco and DHL Partner to Develop a Blockchain Solution


Cisco’s services supply chain provides advanced hardware replacement to customers in over 120 countries and is a key service delivery component in support of Cisco’s service business.

Although Cisco owns the inventory and processes for this massive supply chain, Cisco does not own or operate a single warehouse or truck. The entire supply chain is managed by a worldwide network of third-party logistics providers, freight forwarders, and customs brokers who work in partnership with Cisco to deliver world-class support to Cisco Services’ end customers.

The service delivery and margins for this supply chain match up well against the industry. However, the Cisco team, working together with its supply chain partners, is always looking to improve the end-customer experience while also reducing costs. One key focus area is improved systems integration with supply chain partners, to improve the quality and timeliness of service shipment data for customer service events.

Cisco and its partner DHL worked closely together to develop a blockchain-based gateway for integrating the two companies’ systems for service event dispatch and track-and-trace. The architecture this team developed includes different blockchain platforms (Hyperledger Fabric and DHL BLESS) running on different cloud service providers, as well as a Splunk instance with the Splunk App for Hyperledger Fabric for visibility and monitoring. The solution was hosted jointly on AWS and Azure.

Screenshot of blockchain gateway and its scalability to multiple supply chain partners

The combined Cisco and DHL team proved out the ability to leverage blockchain for a large systems-integration solution that can scale to multiple platforms and dozens of partners, all hosted on a shared blockchain for real connectivity.

This has the following benefits:

◉ Improved time to onboard partners compared to the current B2B solution.

◉ Ability to make a single evolution of the consortium and replicate it across all partners in the supply chain at once (change once vs. change many).

◉ Potential to reduce the current run rate of B2B data errors and exceptions.

◉ Evolution of the current architecture toward future standards.

◉ Enhanced reporting capabilities.

◉ Cost reduction compared to the current B2B infrastructure and support.

◉ Expanded data to improve the customer service experience.

◉ A future platform on which to build value-added services.

Although there is more work to do for the supply chain industry as a whole to set standards and develop blockchain-based solutions, this partnership between Cisco and DHL demonstrated both the viability of the technology and its potential to further improve supply chain efficiency while also improving the customer experience.

This partnership has been an excellent example of how strategic partners can drive innovation together.

Saturday 14 November 2020

Under Analytics


Back when network management was booming in the early ’90s, the whole idea seemed straightforward. System administrators would speak of endpoints on the network as being “under management” or, conversely, “unmanaged.” There seemed to be a place for everything, and looking back now at those times, enterprises seemed so simple compared to today. Maybe simple is not the right term; maybe they just seemed more orderly compared to the modern network landscape.

At some point, hackers showed up and names like “under management” or “unmanaged network elements” made little difference to them. I remember security folks in the early days joking that SNMP (Simple Network Management Protocol) stood for “Security Not My Problem.” An insecure network meant that you had an insecure business! The experienced security architect knows that whether the system is under management, under someone else’s management, or completely unmanaged, if that system is part of the business, it is still their job to secure it. To put it another way, while management of systems can span certain, more specific information systems, security must always be as wide as the business.

I would like to suggest a new term and concept for our vocabulary, and that is “under analytics.” I like to think of this as a conceptual means to discuss whether areas of your digital business have enough visibility for continuous monitoring of their integrity. Why not just call it “under management”? Well, because more and more these days, you are NOT the one managing that area of the network. It might be the cloud service provider managing it, but it is still your problem if something gets hacked. You could even then speak of observable domains as having certain requirements that satisfy the type of analytics you would like to perform.

There are many types of observable domains to consider, so let’s talk about some here. Back in the day, there was just your enterprise network. Then, when folks connected to the internet, the concepts of internal, external, and even DMZ networks were referenced as observable network domains. These days, you have to deal with public cloud workloads, Kubernetes clusters, mobile devices, etc. Let’s just say that you can speak of having any number of observable domains for which you require telemetry that will get you the visibility required to detect the most advanced threat actors in those domains.

For each of these observable domains, there will need to be telemetry. Telemetry is the data that represents changes in that domain and feeds your behavioral analytics outcomes. You could make a list of the competency questions you would want to answer from these analytical outcomes:

◉ Are there any behaviors that suggest my systems have been compromised?

◉ Are there any behaviors that suggest some credential has been compromised?

◉ Are there any behaviors to suggest there is a threat actor performing reconnaissance?

My suggestion is that you begin with these questions and then hold security analytics to them to see if they are competent to answer them daily, weekly, monthly, etc.

From there, you can go one step further and start to consider and look into scenarios like the following:

◉ We have a new partner network, is it “under analytics?”

◉ We have a new SaaS service, is it “under analytics?”

◉ This company has a new cloud deployment, do we know if it is “under analytics?”

◉ What part of our digital business is not “under analytics?”
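The scenarios above boil down to a simple inventory check: for every observable domain in the business, is at least one telemetry source feeding your analytics? A purely illustrative sketch (all domain names and telemetry sources here are made-up examples, not a real product or API):

```python
# Illustrative sketch: track which observable domains are "under analytics."
# Every domain name and telemetry source below is a hypothetical example.
domains = {
    "enterprise-network":  {"telemetry": ["netflow", "dns-logs"]},
    "public-cloud":        {"telemetry": ["cloud-audit-logs"]},
    "new-partner-network": {"telemetry": []},   # no telemetry yet
    "saas-service":        {"telemetry": []},
}

def under_analytics(domain):
    """A domain is 'under analytics' only if at least one telemetry
    source in that domain feeds your behavioral analytics."""
    return len(domain["telemetry"]) > 0

for name, domain in domains.items():
    status = "under analytics" if under_analytics(domain) else "NOT under analytics"
    print(f"{name}: {status}")
```

However you implement it, the point is that the inventory of observable domains, not just the inventory of managed devices, is what you hold your analytics accountable to.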

How well do you know your digital business behavior when it is 100% without compromise? How would you even go about answering this? The truth is, you really do need to get to this level, because if you don’t, threat actors will. Even if parts of the business use SaaS products, while parts of the network use Infrastructure as a Service (IaaS), you can still set the requirement that there be a sufficient amount of telemetry and analytics to help you answer the questions above. Your business must always remain “under analytics,” and only then will you be one step ahead of your attackers.

Wednesday 11 November 2020

Tetration Updates – New capabilities for microsegmentation and workload security


Cisco Tetration release 3.4 expands support for micro-segmentation, workload and container security

Cisco Tetration, a leader in micro-segmentation and workload security, announces significant new enhancements, available now, that help security architects achieve the protection required for today’s heterogeneous multicloud environments.

One of the key challenges businesses face is how to provide a secure infrastructure for applications without compromising business agility. With the rise of cloud usage, containers, and microservices architectures, you need a solution that brings security closer to your applications, using a new firewall type of enforcement that surrounds each workload. Many companies, like Per Mar Security Services, choose Tetration to be the foundation of their zero-trust and broader cybersecurity plan, protecting their critical applications from compromise.

This latest Tetration release includes features that support new microsegmentation capabilities, workload protection, sensor support for new operating system versions, platform features required for enterprise customers and much more.

Enhancements include:

Microsegmentation:

Enhanced usability and management of microsegmentation. Granular control to specify which workloads should receive which policy elements, making policy definition, generation, and enforcement much more flexible and customizable to your environment.

Latest versions and enhancements across Kubernetes and OpenShift orchestration platforms, and support for microsegmentation policy enforcement on ingress controllers such as HAProxy or Nginx.

Application dependency mapping (ADM) updates to speed policy generation. ADM offers a forensic understanding of applications/workloads and their complex interdependencies.

Compromised-state awareness: alerting and policy changes after a workload or endpoint is detected as compromised, with flows to a known threat.

Workload Protection:

Enhanced vulnerability detection that leverages, in addition to the NIST CVE (Common Vulnerabilities and Exposures) database, the latest threat intelligence from operating system vendors to ensure accuracy and the most up-to-date risk profile for applications in your environment.

New MITRE-based attack detection techniques and tactics, plus several new alerts for anomalous Windows processes.

Usability and operational improvements

New and improved user interface to better visualize and manage application scopes, the workloads that are part of those applications, and their associated hierarchies.

Improved visualization of policy version differences to easily understand which rules were added or removed, and to filter for specific rules based on a number of parameters.

Resiliency features, including a new mode of continuous data backup, new backup and restore workflows, and federation of multiple Tetration clusters for a high degree of scalability and availability.

Software sensors:

OS updates: Support for the latest versions of the key operating systems our customers care about (RHEL, CentOS, Oracle Linux, and Ubuntu), plus added support for IBM AIX for legacy applications in key verticals like healthcare and financials.

Easily transition from deep visibility to policy enforcement to speed the time to microsegmentation.

Enhanced monitoring and management features for better sensor visibility and usability in key areas like monitoring, installation, and upgrade status.

3rd Party Ecosystem Partners

ServiceNow CMDB integration for ingesting CI (Configuration Item) attributes to provide more context to help define inventory filters, tag workloads, define policies, and visualize flow traffic.

Native support for Workload AD (Windows Domain Controller) for rich user and workload context, to enhance policy definition and inventory filters and to visualize flow traffic.

Tuesday 10 November 2020

Experience the Future with Cisco and the Internet of Things


It’s the year 1950, and I ask you to imagine what technology will look like in 70 years; what would you say? My guess is you’d proceed to list some science-fiction-like answers, such as the existence of space exploration programs, maybe artificially intelligent robots, or perhaps the invention of some all-knowing neural network that enlightens humankind through accessible information. While such ideas may have been on the cusp of science fiction at the time, it’s incredible to realize that we are in the generation where many of these innovations not only exist but are customer-ready today!

Oh, and by the way, remember that “all-knowing neural network” you mentioned? This is what we presently refer to as the internet and, of course, is what you are using to access this blog at this very moment. Despite how much of a technological breakthrough the internet was at its invention in 1983, it has become such an everyday tool that it just doesn’t spark the same excitement it once did.

Let me be that unwarranted catalyst and re-ignite that internet excitement by introducing a new generation of internet-powered technology: a generation of technology that can harness the limitless knowledge of the internet and ingrain it into inanimate objects, connecting us in a way never thought possible. I am referring to the Internet of Things (IoT), a technological innovation spearheaded by Cisco and its state-of-the-art Application Hosting on the Catalyst Access Points (AP) platform.

What is the Internet of Things?

The Internet of Things is a concept in which a wireless network is leveraged for communication with smart devices to accomplish tasks in a more simplified, efficient, and often automated manner. In fact, many IoT products have probably already found their way into your home. These products come in all shapes and sizes, but some examples are a voice-activated speaker such as an Amazon Alexa, a mobile-application-controlled thermostat such as a Nest Thermostat, a motion-activated doorbell camera such as the August Doorbell Cam, or, more excitingly, a voice-triggered, music-playing salt dispenser such as the SMALT!

Other than the salt-dispenser (which actually exists), these are all products that, due to their simplicity and usefulness, have become seamlessly integrated into many of our lives.

Figure 1: Modern Internet-of-Things products leveraging a wireless network.

So, if IoT already exists, what is Cisco’s role in this field?

Think about how IoT products work, and you’ll realize it requires a robust wireless network to connect the IoT endpoints to the information it needs to operate. While a single wireless router can easily accomplish this for a typical household size deployment, the challenge is how we can execute this at an enterprise level, where hundreds to thousands of IoT devices must work together to form a single solution. Without a proper management infrastructure to provide visibility, serviceability, and security, IoT at scale can be a complete nightmare to deploy and manage.

Cisco’s Internet of Things Solution


Application Hosting on the Catalyst Access Points, together with Cisco’s intent-based networking platform, Cisco DNA Center, is the solution to this problem. This integration allows users to leverage Cisco DNA Center to deploy custom IoT applications directly onto Docker containers within Cisco’s Catalyst Wi-Fi 6 access points. The integration solves the problem of visibility and serviceability at scale by taking on the role of life-cycle manager for the applications and allowing users to take advantage of their existing Cisco wireless infrastructure for IoT communication.

On Day 0, a user simply uploads the IoT application to Cisco DNA Center and, from there, can choose the locations where the application should be deployed. From Day 1 on, applications throughout an entire network can be easily monitored and maintained through a GUI, and even upgraded by simply uploading and then deploying a newer version of the IoT application. With this Cisco DNA Center integration, IoT application management has never been easier!

Figure 2: Cisco DNA Center’s simplistic IoT application deployment workflow.
 
After the IoT application is deployed onto the access points, it begins communicating with its application server, leveraging each access point as an IoT gateway to communicate with surrounding IoT devices. This communication happens through an IoT USB connector inserted into the Cisco Catalyst access point, which can broadcast anything from Zigbee to BLE to vendor-specific proprietary RF protocols, providing true versatility to the IoT solutions possible.

Figure 3: Application Hosting on the Catalyst Access Points IoT Topology.

What about the IoT Application itself?


This is where things get exciting! Cisco is now open for partnerships with third-party IoT development companies, providing them with the opportunity to integrate their IoT solutions with Catalyst access points. While the development of IoT applications may not be a simple feat, Cisco has streamlined the process by creating an entire website, DevNet, with the sole purpose of supporting third-party application development. With DevNet, you now have an intuitive step-by-step guide that will teach you how to go from writing a basic “Hello World” application to creating an innovative end-to-end IoT solution capable of solving real-world problems!

The marketplace of IoT Technology


Once the application has been developed, as a partner, you can then join the Solution Partner Program, which allows you to post your IoT solution directly onto DevNet. Essentially, Cisco aims to create a whole marketplace of ready-for-deployment IoT solutions, providing customers with a one-stop-shop to browse, discover, then deploy IoT solutions that best fit their niche business needs.

Figure 4: Cisco Solution Partner Program.

Together, Application Hosting, Cisco DNA Center, and DevNet form a truly seamless IoT experience that allows partners to materialize, and customers to deploy, any envisioned IoT solution through Cisco’s powerful yet simplistic wireless infrastructure. And that is something that anyone could have predicted!

Saturday 7 November 2020

Invest In Your Most Critical Assets: People


If you asked our customers and partners what their most important asset is these days, you’d get a variety of answers: everything from infrastructure to real estate to mission-critical applications. To Bell Canada, one asset you can’t overlook is your people. Their philosophy is that investing in people will always pay positive dividends. While investing in people may seem like common sense, Bell has taken this to the next level and has streamlined and optimized the development of its Sellers and Solutions Architects. How have they done this, you ask? One way is by leveraging Cisco’s premier architecture enablement platform, the Cisco Black Belt Academy.

Bell and Cisco. A Long-Standing Partnership.

Bell Canada, a leading Canadian provider of telecommunication services, has been a Gold Partner with Cisco Canada for decades and has always been among the top tier of our roughly 1,800 partners across the country. Year after year, Bell and Cisco have had significant joint success, recognized for consecutive years during Cisco Partner Summit. In 2018, Bell was recognized by Cisco as the #1 overall partner in the Americas, and it has been Canada’s Partner of the Year for two years running.

“Over the course of many years, Bell has been well aligned to Cisco because of our tremendous synergy as a value-added reseller – a relationship that covers many different domains of the business. Our mutual expertise includes Network, Security, Cloud, Voice/Unified Communications and the Internet of Things,” commented Errol Fernandes, who leads Bell’s Enterprise Architecture teams as he addresses the partnership.

Ever evolving technology and staying current

As the Greek philosopher Heraclitus said, “The only constant is change.” Little did Heraclitus know that this would be the theme of our decade. The rate at which our customers’ needs continue to evolve is unprecedented. And as we have all witnessed, technology vendors and providers need to adapt quickly to continuously deliver the same best-in-class experience that customers have come to expect.

Fernandes reminds us: “Our technical team prioritizes staying current on the latest technology, and that includes the most recent Cisco software solutions. The Bell team has always been extremely diligent at getting the standardized certifications that Cisco offers (CCIE, CCNA, etc.), and with Cisco’s continual acquisition approach – to expand and integrate the latest technologies to solidify each portfolio – our technical sales team of almost 300 resources always needs to upskill.” This is where the Cisco Black Belt Academy aims to help in keeping partners current.


Cisco Black Belt and Developing Expertise


The Cisco Black Belt program is an enablement framework consisting of carefully curated training content that Cisco employs to ensure its sales and technical teams are well versed in the latest technologies and solutions. This framework has allowed Cisco’s channel partners, like Bell, to integrate it directly into their existing training programs. Bell, in many ways, has led this charge and has rolled out Black Belt to train its roughly 300 technical and sales team members.

By leveraging the Cisco Black Belt program, Bell has been able to carefully create custom development plans that align to specific roles within the Technical Sales team within Bell Business Markets. These development plans were curated early in 2020 and went through multiple planning revisions before a successful implementation of a pilot program.

In this pilot, a group of 16 Solution Architects from various practices completed role-specific training content. At the end of the pilot, the solution architects gave a 4-star-plus rating for overall user experience and content relevancy, which is outstanding for this type of pilot. Rami Al Saber, one of the pilot participants, says, “I believe it is a great tool for various sales teams and technical sales teams to try, as it has great learning tools.”

Because of the dynamic nature of the framework and the practical way individuals are certified – typically through proofs of concept, a solution sale, or a customer design – Bell is confident in the quality of the training and enablement. By partnering with Cisco and investing in its people with the Black Belt Academy, Bell is very well positioned to navigate through these uncertain times and accelerate its business.