Monday, 30 July 2018

Top 7 Multicloud Initiatives – Delivering on the Multicloud Promise

According to IDC, by 2021, enterprises' spending on cloud services and cloud-enabling hardware, software, and services will more than double to over $530 billion, leveraging a diversifying cloud environment that is 20% at the edge, more than 15% specialized (non-x86) compute, and more than 90% multicloud. A large majority of enterprise IT organizations want to adopt multicloud now. It is thrilling to see enterprises around the world undertaking the key initiatives necessary to transform and stay ahead of the game in a multicloud era. Let's examine what it takes to deliver on the promise behind these initiatives.

Multicloud Journey – Before the “How”, Consider the “Why” and “What”


Embracing a multicloud world is important to increasing the pace of innovation for every company today. But embarking on the multicloud journey can be daunting even for large IT organizations, given the seemingly never-ending complexity, the fragmented solutions implemented over time, and the lack of consistency and data control. IDC found that about 89% of enterprises today do not have an actionable, optimized plan for cloud. Faced with CIO directives and urgent timelines, IT teams are under extreme pressure to claim progress in adopting multicloud. Before getting into the "how" or even a POC, enterprise IT organizations should consider the "why" and the "what" for their selected initiatives.

The "why" is about aligning the selected initiatives to the desired business outcomes. For example, is IT management looking for cost reduction, agility, much-needed application technology enhancements, on-demand and efficient scaling of IT, a competitive differentiator, or expansion of the business?

The “what” is about capturing your multicloud requirements, prioritizing them, identifying dependencies, and scoping what you want to accomplish and by when. This involves coming up with distinct but connected initiatives that can be phased or accomplished in a complementary fashion.

Cisco's multicloud approach and Multicloud Portfolio help enterprise IT teams embarking on a multicloud journey determine the "why" and "what" and produce definitive multicloud requirements and an actionable plan.


Here are key multicloud initiatives that enterprises around the world are considering. Not all of these initiatives apply to every enterprise, and, in general, an enterprise may require additional initiatives based on its cloud adoption maturity and specific application needs.

1. Connect DC/Campus, CoLo, and Cloud


When enterprise IT organizations decide to add a public cloud to their IT technology mix, this initiative becomes important. It involves connecting data centers or campuses to the cloud, either directly or via a colocation (CoLo) option. Secure connectivity to a public cloud can include placing a CoLo between the DCs and the cloud for reasons such as backup, data sovereignty, and a high-speed backend connection to the cloud infrastructure.

2. Connect Branches Direct-to-Cloud


When enterprise IT organizations with branches decide to migrate significant numbers of applications to a public cloud or subscribe to SaaS offers, connecting each branch directly to the cloud becomes important to delivering the best application user experience. Also, with the increasing needs of edge computing and local analytics, direct, high-quality connectivity from the edge to cloud applications is paramount. This initiative essentially involves connecting branches directly to public cloud application environments (including SaaS applications) using SD-WAN solutions over the high-speed Internet service providers locally available to each branch, along with DNS security.

3. Build and Manage a Hybrid Cloud


Modernization has been a key driver for IT organizations, with various projects supporting it, including adopting a public cloud in the IT mix and transforming on-premises IT consumption into a cloud-like experience. According to a recent IDC survey, 87% of enterprises that are using the cloud are taking steps towards creating and managing a hybrid cloud. A hybrid cloud is essentially an application infrastructure that spans both an on-premises private cloud and a public cloud, with applications deployed across the two. An example is a data tier running on-premises with the web and app tiers running in the public cloud.

4. Migrate and Manage Applications to Public Cloud


This is the most common initiative that enterprise IT management is asking IT engineering and LOB engineering teams to tackle, and it has the potential to be the riskiest journey IT can take. Many enterprises have migrated applications to a public cloud only to bring them back on-premises for reasons they did not anticipate. This initiative requires careful selection with dependency analysis, meticulous planning, and end-to-end management of the applications to be migrated to a public cloud.

5. Manage Cloud-Native Applications in Public Clouds


With the maturing of container technology and the proliferation of cloud services, enterprise IT organizations and LOBs are driving the creation of new cloud-native applications and the re-platforming of existing ones. The goals are not only to run applications better, with scaling at every microservice, but also to leverage cloud-native services, such as auto-scaling of the underlying Kubernetes environment on-prem as well as in the public cloud, serverless and cloud-agnostic application environments, and much more. Traditionally, such capabilities required multiple management tools; now they are built in, making this an attractive way to run applications. This initiative requires bringing together networking, security, analytics, and management so that cloud-native apps can span both on-premises and public cloud.

6. Burst Applications Into Public Cloud


Enterprise IT organizations are familiar with this strategy of extending on-premises capacity (aka bursting) for certain applications from the data center or private cloud to a public cloud infrastructure on demand. A good example can be found with retailers that run e-commerce processing applications on-premises but find, due to seasonal demand, that the on-premises application footprint alone cannot handle the e-commerce load. This initiative is also becoming common among emerging artificial intelligence (AI) applications that burst into designated public cloud compute infrastructure, especially for analyzing "hot" data at the edge alongside expanding compute-crunching needs.

7. Optimize SaaS Application Connectivity and Security


Along with public cloud infrastructure adoption, software-as-a-service (SaaS) offers the enterprise a tremendous opportunity to realize an application's business benefits by "renting" the application's use rather than stretching to handle the responsibilities of running it, updating it, and so on themselves.

Over the last 5+ years, SaaS delivery models have demonstrated their ease of use and breadth, delivering packaged applications in a fraction of the time and for a fraction of the cost of traditional models. But SaaS also changes the role of IT. Enterprise IT organizations can no longer guarantee that SaaS providers will meet IT's compliance standards, nor do they have much leverage to negotiate better terms and conditions. Also, IT can do little to ensure the performance of SaaS offers from the various enterprise offices, which rely on local Internet service providers; these unpredictable internet access networks can impact the consistency of SaaS delivery, even in regions with highly developed Internet infrastructure. This initiative uses SD-WAN capability to optimize the network path and keep it optimized.

Below is an example of how Cisco’s multicloud approach addresses the “how” in order to deliver on the promise of multicloud when it comes to an organization’s initiative to migrate applications to a public cloud.


By offering a simplified approach to answering the "how," our multicloud approach helps organizations understand the capabilities their chosen initiatives require, so they can easily and confidently select the foundational products for the design and implementation stages of those initiatives.

Sunday, 29 July 2018

Render your first network configuration template using Python and Jinja2

We all know how painful it is to enter the same text into the CLI, to program the same network VLANs, over, and over, and over and over, and over…. We also know that a better way exists with network programmability, but it could be a few years before your company adopts the newest network programmability standards. What are you to do?


Using Python and Jinja2 to automate network configuration templates is a really useful way to simplify the repetitive network tasks that we, as engineers, often face on a daily basis. By using this method to automate our tasks, we can remove the common errors experienced when copying/pasting commands into the CLI (command line interface). If you are new to network automation, this is a fantastic way to get started with network programmability.

Firstly, let’s cover the basic concepts we will run over here.

◈ What are CLI Templates? CLI templates are a set of re-usable device configuration commands with the ability to parameterize select elements of the configuration as well as add control logic statements. This template is used to generate a device deployable configuration by replacing the parameterized elements (variables) with actual values and evaluating the control logic statements.
◈ What is Jinja2? Jinja2 is one of the most used template engines for Python. It is inspired by Django’s templating system but extends it with an expressive language that gives template authors a more powerful set of tools.

Prerequisites: 


Jinja2 works with Python 2.6.x, 2.7.x, and >= 3.3. If you are using Python 3.2, you can use an older release of Jinja2 (2.6), as support for Python 3.2 was dropped in Jinja2 version 2.7. To install Jinja2, use pip.

pip install jinja2

Now that we have Jinja2 installed, let us take a quick look at it with a simple "Hello World" example in Python. To start, create a Jinja2 file containing "Hello World" (I am saving this into the same directory where I am going to write my Python code). A quick way to create this file is with echo.

echo "Hello World" > ~/automation_fun/hello_world.j2

Now let us create our Python code. We import Environment and FileSystemLoader, which allow us to use external files with the template. Feel free to create your Python code in the way that works best for you. You can use the Python interpreter or an IDE such as PyCharm.

from jinja2 import Environment, FileSystemLoader

#This line uses the current directory
file_loader = FileSystemLoader('.')

env = Environment(loader=file_loader)
template = env.get_template('hello_world.j2')
output = template.render()
#Print the output
print(output)

Use the following command to run your Python program.

STUACLAR-M-R6EU:automation_fun stuaclar$ python hello_template.py
Hello World

Congratulations, your first template was a success!

Next, we will look at variables with Jinja2.

Variables With Jinja2


Template variables are defined by the context dictionary passed to the template. You can change and update the variables in templates provided they are passed in by the application. What attributes a variable has depends heavily on the application providing that variable. If a variable or attribute does not exist, you will get back an undefined value.
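
If silently rendering a missing variable as an empty string is a concern (and for network configurations it often is), Jinja2 also provides a StrictUndefined type that raises an error instead; a minimal sketch:

from jinja2 import Template, StrictUndefined
from jinja2.exceptions import UndefinedError

# Default behaviour: the missing variable renders as an empty string.
print(Template("hostname {{hostname}}").render())   # prints "hostname "

# StrictUndefined raises instead, catching typos before they reach a device.
strict = Template("hostname {{hostname}}", undefined=StrictUndefined)
try:
    strict.render()
except UndefinedError as err:
    print(err)   # prints: 'hostname' is undefined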


In this example, we will build a new BGP neighbor with a new peer. Let's start by creating another Jinja2 file, this time using variables. The outer double curly braces are not part of the variable; what is inside them is what will be printed out.

router bgp {{local_asn}}
 neighbor {{bgp_neighbor}} remote-as {{remote_asn}}
!
 address-family ipv4
  neighbor {{bgp_neighbor}} activate
exit-address-family

This Python code will look similar to what we used before; however, this time we are passing three variables.

from jinja2 import Environment, FileSystemLoader
#This line uses the current directory
file_loader = FileSystemLoader('.')
# Load the environment
env = Environment(loader=file_loader)
template = env.get_template('bgp_template.j2')
# Add the variables
output = template.render(local_asn='1111', bgp_neighbor='192.168.1.1', remote_asn='2222')
#Print the output
print(output)

This will then print the following output. Notice that because we have repetitive syntax (the neighbor IP address), the variable is used again.

STUACLAR-M-R6EU:automation_fun stuaclar$ python bgp_builder.py
router bgp 1111
 neighbor 192.168.1.1 remote-as 2222
!
 address-family ipv4
  neighbor 192.168.1.1 activate
exit-address-family

If we have some syntax that will appear multiple times throughout our configuration, we can use for loops to remove redundant syntax.

For Loops with Jinja2


The for loop allows us to iterate over a sequence, in this case 'vlans'. Here we use one curly brace and a percent symbol. Also, we are using some whitespace control with the minus sign on the first and last line. By adding a minus sign to the start or end of a block, the whitespace before or after that block is removed. (You can try this and see the output difference once the Python code has been run; a comparison is sketched after the output below.) The last line tells Jinja2 that the template loop is finished, and to move on with the template.

Create another Jinja2 file with the following.

{% for vlan in vlans -%} 
    {{vlan}}
{% endfor -%}

In the Python code, we add a list of vlans.

from jinja2 import Environment, FileSystemLoader

#This line uses the current directory
file_loader = FileSystemLoader('.')
# Load the environment
env = Environment(loader=file_loader)
template = env.get_template('vlan.j2')
vlans = ['vlan10', 'vlan20', 'vlan30']
output = template.render(vlans=vlans)
#Print the output
print(output)

Now we can run the Python code and see our result.

STUACLAR-M-R6EU:automation_fun stuaclar$ python vlan_builder.py
vlan10
vlan20
vlan30
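
To see exactly what the whitespace control is doing, here is a quick comparison that renders the same loop with and without the minus signs (using jinja2.Template inline rather than a file):

from jinja2 import Template

vlans = ['vlan10', 'vlan20', 'vlan30']

# Without the minus signs, each block tag leaves its newline in the output.
plain = Template("{% for vlan in vlans %}\n{{vlan}}\n{% endfor %}")
print(repr(plain.render(vlans=vlans)))    # '\nvlan10\n\nvlan20\n\nvlan30\n'

# With the minus signs, the whitespace after each block is stripped.
trimmed = Template("{% for vlan in vlans -%}\n{{vlan}}\n{% endfor -%}")
print(repr(trimmed.render(vlans=vlans)))  # 'vlan10\nvlan20\nvlan30\n'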

All of the code for these examples can be found on my GitHub https://github.com/bigevilbeard/jinja2-template

Friday, 27 July 2018

Python Scripting APIs in Cisco DNA Center Let You Improve Effectiveness

Just before I left for Cisco Live US, I was given the chance to work with the APIs on Cisco DNA Center. Having never used Cisco DNA Center, this was a quick learning curve, but once I started I could see some great possibilities and some more coding fun to be had! Once back from Cisco Live US, where I learned even more about DNA Center (an awesome experience), I was excited to expand my knowledge and leverage some fun Python code using the APIs that DNA Center has to offer.


Dude, where is my DNA Center sandbox?


All the Python code you are about to see and learn about can be used on the DNA Center Always-On Sandbox. I used this sandbox to create, test, and document the code for this blog post. This DevNet Sandbox lets you:

◈ Access at any time without making a reservation or using VPN connection
◈ Learn the DNA Center GUI or experiment with the REST API
◈ Access a pre-configured network topology running on genuine Cisco hardware

Because this sandbox is always available to all users, any other user may potentially overwrite your work at any time. The other caveats to an always-on DNA Center sandbox are that you cannot configure the network or devices, and you cannot activate and enforce policy on network devices. I should also note there are other DNA Center sandboxes that are reservable, which provide your own private lab environment for the duration of the reservation.

Network devices script, simple and easy to create


By the end of this blog post, you will learn:

◈ How to use the DNA Center APIs
◈ How to use the DNA Center APIs in a Python script

I have started with a simple Python script. The script uses the DNA Center APIs to get device information. The APIs provide a list of all of the network devices the DNA Center controller knows about and all of their attributes: for example, hostname, serial number, platform type, software version, uptime, etc. You can either get all of the devices or a subset. The script prints to the console using PrettyTable, a simple Python library designed to make it quick and easy to represent tabular data in visually appealing ASCII tables.


DNA Center Sandbox Network Topology, access anytime

Using DNA Center APIs


Looking at the API Catalog within DNA Center, you can see documentation about each API call, including the request method and URL, query parameters, request header parameters, responses, and schema, along with ways to preview or test the request.

Pre-Requisites


In order to run the code featured here, you must have Python 3.6 installed. We will use the Python packages listed below.

◈ requests

We will use this to make HTTP requests

◈ prettytable

This will be used to generate ASCII tables in Python

We must also import HTTPBasicAuth from requests.auth. These packages can all be installed using the requirements.txt file in the GitHub repo (link below). Use pip to install the required libraries as shown below:

pip install -r requirements.txt

Authentication


The DNA Center APIs use token-based authentication, which works by ensuring that each request is accompanied by a signed token that is verified for authenticity before the request is served. This POST function logs in, retrieves a token, and returns it from the response body.

import requests
from requests.auth import HTTPBasicAuth

# Base headers for every request; the auth token is added after login.
headers = {"Content-Type": "application/json"}

def dnac_login(host, username, password):
    url = "https://{}/api/system/v1/auth/token".format(host)
    response = requests.request("POST", url, auth=HTTPBasicAuth(username, password),
                                headers=headers, verify=False)
    return response.json()["Token"]

The next Python function uses the network-device API. As mentioned above, this API provides a list of all of the network devices the DNA Center controller knows about and all of their attributes. Here we are using the GET method, which requests data from a specified resource and is one of the most common HTTP methods.

def network_device_list(dnac, token):
    url = "https://{}/api/v1/network-device".format(dnac['host'])
    headers["x-auth-token"] = token
    response = requests.get(url, headers=headers, verify=False)
    # Return the parsed JSON so the caller can walk the device list.
    return response.json()

Printing with prettytable module


Now that we have all our information, we can make it presentable and readable using the Python module prettytable to create one table with headers. This holds 'Hostname', 'Platform Id', 'Software Type', 'Software Version', and 'Up Time' (you can also add serial number, MAC address, management IP address, etc.).

dnac_devices = PrettyTable(['Hostname','Platform Id','Software Type','Software Version','Up Time' ])
dnac_devices.padding_width = 1

Using the network-device API response, the Python script connects to DNA Center, loops over the returned devices, selects the required data from each item, and populates the table.

for item in data['response']:
        dnac_devices.add_row([item["hostname"],item["platformId"],item["softwareType"],item["softwareVersion"],item["upTime"]])
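
For completeness, the pieces above might be wired together as follows (the host and credentials are placeholders, not the actual sandbox values):

import urllib3

# The sandbox uses a self-signed certificate, so silence the verify=False warnings.
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

# Placeholder connection details -- substitute your own DNA Center instance.
dnac = {"host": "dnac.example.com", "username": "devnetuser", "password": "secret"}

token = dnac_login(dnac["host"], dnac["username"], dnac["password"])
data = network_device_list(dnac, token)
# ...populate dnac_devices with the for loop above, then:
print(dnac_devices)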

Running the code


Testing this out against the DNA Center in the DevNet Sandbox, we can see the output we requested printed in a clear table format.

python get_dnac_devices.py
+-------------------+----------------+---------------+------------------+-----------------------+
|      Hostname     |  Platform Id   | Software Type | Software Version |        Up Time        |
+-------------------+----------------+---------------+------------------+-----------------------+
| asr1001-x.abc.inc |   ASR1001-X    |     IOS-XE    |      16.6.1      | 180 days, 19:21:43.97 |
|  cat_9k_1.abc.inc |   C9300-24UX   |     IOS-XE    |      16.6.1      | 180 days, 20:20:17.26 |
|  cat_9k_2.abc.inc |   C9300-24UX   |     IOS-XE    |      16.6.1      | 180 days, 20:14:43.95 |
|   cs3850.abc.inc  | WS-C3850-48U-E |     IOS-XE    |     16.6.2s      |  177 days, 7:33:46.98 |
+-------------------+----------------+---------------+------------------+-----------------------+

Additional attributes required?


No problem! Let's now say, for example, that we wanted to expand on this and add some additional attributes to our information (or replace/remove some). This is really simple to do. In our code we only need to update two places: the header and the additional attributes we want to parse from the DNA Center response. Let's add the device's serial number and management IP address.

dnac_devices = PrettyTable(['Hostname','Platform Id','Software Type','Software Version','Up Time', 'Serial Nu', 'MGMT IP' ])
dnac_devices.padding_width = 1

for item in data['response']:
        dnac_devices.add_row([item["hostname"],item["platformId"],item["softwareType"],item["softwareVersion"],item["upTime"], item["serialNumber"], item["managementIpAddress"]])

Now, let’s run the Python script once more and see the additional attributes we added.
python get_dnac_devices.py

+-------------------+----------------+---------------+------------------+-----------------------+-------------+-------------+
|      Hostname     |  Platform Id   | Software Type | Software Version |        Up Time        |  Serial Nu  |   MGMT IP   |
+-------------------+----------------+---------------+------------------+-----------------------+-------------+-------------+
| asr1001-x.abc.inc |   ASR1001-X    |     IOS-XE    |      16.6.1      | 180 days, 19:21:43.97 | FXS1932Q1SE | 10.10.22.74 |
|  cat_9k_1.abc.inc |   C9300-24UX   |     IOS-XE    |      16.6.1      | 180 days, 20:45:36.37 | FCW2136L0AK | 10.10.22.66 |
|  cat_9k_2.abc.inc |   C9300-24UX   |     IOS-XE    |      16.6.1      | 180 days, 20:40:03.91 | FCW2140L039 | 10.10.22.70 |
|   cs3850.abc.inc  | WS-C3850-48U-E |     IOS-XE    |     16.6.2s      |  177 days, 7:33:46.98 | FOC1833X0AR | 10.10.22.69 |
+-------------------+----------------+---------------+------------------+-----------------------+-------------+-------------+

Conclusion, automate all the things


We can all acknowledge that manual processes are the adversary of quick value delivery, high productivity, and security. Automation isn't only about making tasks quicker, though. It also allows the creation of repeatable environments and processes, as we have seen above with this Python script. Anyone on your team could run this script: no more logging support tickets for information about network devices and their current images or device types, and no more logging into every device to run the same CLI commands; just run the automated script.

Wednesday, 25 July 2018

How to Use the Plug and Play API in DNA Center – Part 2

Continuing the PnP story…


The first blog in this series gave an overview of network Plug and Play (PnP) and how it has evolved in DNA Center. It showed a very simple workflow to provision a device with a configuration template containing a variable called "hostname".

One of my customers had a requirement to automate the PnP process for deploying 1600 new switches. Assigning a workflow and template variables to each device through the user interface would be a very time-consuming and (potentially) error-prone process.

This blog has an example script to automate this, then breaks down each of the API calls in case you would like to customize further.

The following API steps will be shown in detail:

1. Automated discovery of configuration template and variables from an onboarding workflow.
2. Automated addition of a new device to PnP.
3. Automated claiming of a device, assignment of the workflow and population of template variables.

The example scripts used here are all available from this Github repository.

Bulk Device Provisioning – Template Example


In order to bulk-configure devices, I need a configuration file (csv in this case). This file contains:

◈ name of the PnP rule
◈ serial number of the device to be onboarded
◈ product ID of the device
◈ workflow name – which contains a configuration template
◈ hostname – a variable used in the configuration template defined in the workflow.

name,serial,pid,workflow,hostname
auto_python0,12345678910,WS-C3850,simpleTemplate,adam0
auto_python1,12345678911,WS-C3850,simpleTemplate,adam1
auto_python2,12345678912,WS-C3850,simpleTemplate,adam2
auto_python3,12345678913,WS-C3850,simpleTemplate,adam3
auto_python4,12345678914,WS-C3850,simpleTemplate,adam4
auto_python5,12345678915,WS-C3850,simpleTemplate,adam5
auto_python6,12345678916,WS-C3850,simpleTemplate,adam6
auto_python7,12345678917,WS-C3850,simpleTemplate,adam7
auto_python8,12345678918,WS-C3850,simpleTemplate,adam8
auto_python9,12345678919,WS-C3850,simpleTemplate,adam9

The csv file is used as input to the script. In this case it is only 10 devices, but the process would be the same for 1600.

python ./10_add_and_claim.py work_files/bigtest.csv
Using device file: work_files/bigtest.csv
##########################
Device:12345678910 name:auto_python0 workflow:simpleTemplate Status:PLANNED
Device:12345678911 name:auto_python1 workflow:simpleTemplate Status:PLANNED
Device:12345678912 name:auto_python2 workflow:simpleTemplate Status:PLANNED
Device:12345678913 name:auto_python3 workflow:simpleTemplate Status:PLANNED
Device:12345678914 name:auto_python4 workflow:simpleTemplate Status:PLANNED
Device:12345678915 name:auto_python5 workflow:simpleTemplate Status:PLANNED
Device:12345678916 name:auto_python6 workflow:simpleTemplate Status:PLANNED
Device:12345678917 name:auto_python7 workflow:simpleTemplate Status:PLANNED
Device:12345678918 name:auto_python8 workflow:simpleTemplate Status:PLANNED
Device:12345678919 name:auto_python9 workflow:simpleTemplate Status:PLANNED

The DNA Center GUI shows the 10 new devices have been added to the PnP service, waiting to be onboarded. Once the devices connect to the network, they will contact DNA Center and be onboarded with the intended configuration.


The other Python scripts in the repository will show the complete configuration template for the device (before it is provisioned) and do a bulk delete of the devices.
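
To give a feel for what a script like 10_add_and_claim.py does, here is a rough sketch of the core loop (add_device and claim_device are hypothetical helpers wrapping the API calls broken down below, not the script's actual function names):

import csv
import sys

def main(device_file):
    print("Using device file: {}".format(device_file))
    with open(device_file) as f:
        for row in csv.DictReader(f):
            # Add the device to the PnP database, then claim it against the
            # named workflow, supplying its template variable(s).
            device_id = add_device(row["name"], row["serial"], row["pid"])
            claim_device(device_id, row["workflow"], {"hostname": row["hostname"]})
            print("Device:{} name:{} workflow:{} Status:PLANNED".format(
                row["serial"], row["name"], row["workflow"]))

if __name__ == "__main__":
    main(sys.argv[1])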

Looking at the API details – Workflow


I am going to keep the same steps as the first blog. The first step is to examine the workflow to find the template and extract the variables from it.

A GET request looks up the workflow "simpleTemplate" by name. The workflow has a single step, called "Config Download", and the template has a UUID of "d0259219-3433-4a52-a933-7096ac0854c3", which is required for the next step. The workflow id is 5b16465b7a5c2900077b664e; this is required to claim the device.

GET https://dnac/api/v1/onboarding/pnp-workflow?name=simpleTemplate  
[
    {
        "version": 1,
        "name": "simpleTemplate",
        "description": "",
        "useState": "Available",
        "type": "Standard",
        "addedOn": 1528186459377,
        "lastupdateOn": 1528186459377,
        "startTime": 0,
        "endTime": 0,
        "execTime": 0,
        "currTaskIdx": 0,
        "tasks": [
            {
                "taskSeqNo": 0, 
                "name": "Config Download",
                "type": "Config", 
                "startTime": 0,
                "endTime": 0,
                "timeTaken": 0,
                "currWorkItemIdx": 0,
                "configInfo": {
                    "configId": "d0259219-3433-4a52-a933-7096ac0854c3" ,
                    "configFileUrl": null,
                    "fileServiceId": null,
                    "saveToStartUp": true,
                    "connLossRollBack": true,
                    "configParameters": null
                }
            }
  ],
        "addToInventory": true,
        "tenantId": "5afe871e2e1c86008e4692c5",
        "id": "5b16465b7a5c2900077b664e"
    }
]
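
Scripted, this lookup and extraction might look like the following sketch (get_workflow is my own helper name; headers is assumed to already carry a valid x-auth-token):

import requests

def get_workflow(dnac_host, headers, name):
    # Look up the onboarding workflow by name, then pull out the two ids
    # needed later: the workflow id and the config (template) id.
    url = "https://{}/api/v1/onboarding/pnp-workflow?name={}".format(dnac_host, name)
    workflow = requests.get(url, headers=headers, verify=False).json()[0]
    config_id = workflow["tasks"][0]["configInfo"]["configId"]
    return workflow["id"], config_id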

A GET API call using the template id d0259219-3433-4a52-a933-7096ac0854c3 retrieves the template.  Notice there is a single variable with the name “hostname”, which will be used in the claim process.

Some of the response has been removed for brevity.  Templates will be covered  in more detail in future blogs.

GET https://dnac/api/v1/template-programmer/template/d0259219-3433-4a52-a933-7096ac0854c3

{
    "name": "base config",
    "description": "",
    "tags": [],
    "deviceTypes": [
        {
            "productFamily": "Switches and Hubs"
        }
    ],
    "softwareType": "IOS-XE",
    "softwareVariant": "XE",
    "templateParams": [
        {
            "parameterName": "hostname",
            "dataType": null,
            "defaultValue": null,
            "description": null,
            "required": true,
            "notParam": false,
            "displayName": null,
            "instructionText": null,
            "group": null,
            "order": 1,
            "selection": null,
            "range": [],
            "key": null,
            "provider": null,
            "binding": "",
            "id": "4481c1a4-fcb1-4ee8-ba2f-f24f2d39035b"
        }
    ],
<snip>

Adding a PnP device


The next step is to add the device to the PnP database. The three mandatory attributes are "name", "serialNumber", and "pid". The "pid" (product id) is used to determine if the device is capable of stacking, as some workflows have specific steps for stacks. A POST request sends the attributes in a JSON payload.

The payload takes a list of “deviceInfo”, so multiple devices could be added in a single API call.

POST https://dnac/api/v1/onboarding/pnp-device/import  
[{
    "deviceInfo": {
        "name": "pnp-test",
        "serialNumber": "FDO1732Q00B",
        "pid": "ws-c3650",
        "sudiRequired": false,
        "userSudiSerialNos": [],
        "stack": false,
        "aaaCredentials": {
            "username": "",
            "password": ""
        }
    }
}]

The result below has been shortened for brevity. It was a synchronous API call (no task was returned, in contrast to APIC-EM).

The "id" "5b463bde2cc0f40007b126ee" will be required for the claim process; it uniquely identifies this PnP device rule.

{
    "successList": [
        {
            "version": 1,
            "deviceInfo": {
                "serialNumber": "FDO1732Q00B",
                "name": "pnp-test",
                "pid": "ws-c3650",
                "lastSyncTime": 0,
                "addedOn": 1531329502838,
                "lastUpdateOn": 1531329502838,
                "firstContact": 0,
                "lastContact": 0,
                "state": "Unclaimed",
                "onbState": "Not Contacted",
                "cmState": "Not Contacted",
                "userSudiSerialNos": [],
                "source": "User",
"id": "5b463bde2cc0f40007b126ee"

Claiming PnP device


The final step is to "claim" the device. This step associates the workflow (including the configuration template and variables) with the device. The values of "workflowId", "configId" (template), and "deviceId" were discovered in the earlier steps.

A value for the hostname variable is required; in this case "pnp-test1" is provided.

Again, a list is provided for a given workflow, so multiple devices could be claimed by the same workflow.  Each device can have a unique set of config parameters.

POST https://dnac/api/v1/onboarding/pnp-device/claim
{
  "workflowId": "5b16465b7a5c2900077b664e",
  "deviceClaimList": [
    {
      "configList": [
        {
          "configId": "d0259219-3433-4a52-a933-7096ac0854c3",
          "configParameters": [
            {
              "value": "pnp-test1",
              "key": "hostname"
            }
          ]
        }
      ],
      "deviceId": "5b463bde2cc0f40007b126ee"
    }
  ],
  "populateInventory": true,
  "imageId": null,
  "projectId": null,
  "configId": null
}

The response is synchronous and gives a simple indication of success.

{
    "message": "Device(s) Claimed",
    "statusCode": 200
}
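
For completeness, a sketch of the same claim call in Python (claim_device is again a hypothetical helper following the conventions above):

import requests

def claim_device(dnac_host, headers, device_id, workflow_id, config_id, hostname):
    # Claim one device against a workflow, supplying the template variable.
    url = "https://{}/api/v1/onboarding/pnp-device/claim".format(dnac_host)
    payload = {
        "workflowId": workflow_id,
        "deviceClaimList": [{
            "configList": [{
                "configId": config_id,
                "configParameters": [{"key": "hostname", "value": hostname}],
            }],
            "deviceId": device_id,
        }],
        "populateInventory": True,
    }
    return requests.post(url, json=payload, headers=headers, verify=False).json()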

Summary


This blog covered the green API calls below. Although the "unclaimed" workflow was not officially covered, its claim API is the same as for pre-provisioned devices. The source of the network device differs, as the device contacts DNA Center before a rule is in place. This means the /api/v1/onboarding/pnp-device/import API call is not required.


Using these APIs, you can automate the onboarding of tens, hundreds, or even thousands of devices.

Sunday, 22 July 2018

Cisco DNA Center Plug and Play (PnP) – Part 1

Background


I have written a number of blogs on Network Plug and Play (PnP) on APIC-EM and wanted to provide an update on the new, improved PnP in Cisco DNA Center.

This new series covers the changes and enhancements made to PnP in Cisco DNA Center 1.2. The PnP application was not officially exposed in Cisco DNA Center 1.1.x. The main changes in 1.2 include:

◈ Flexible workflow to onboard devices (vs rigid two step process in the past).
◈ Support for stacking and stack renumbering as part of a workflow.
◈ Reuse of the Cisco DNA Center image repository (part of software image management, SWIM) vs the standalone APIC-EM image repository.
◈ Reuse of the Cisco DNA Center template engine vs standalone APIC-EM template library.
◈ New API – /api/v1/onboarding.

This initial blog post will cover the UI and workflow changes, and the next blog post will cover the API changes.


DevNet Zone at Cisco Live US

Key Components


A PnP solution has three main components (and one optional one):

◈ An agent, residing in the IOS software, that looks for a "Controller" when the device first boots up.
◈ A PnP server, which is a service running on Cisco DNA Center.
◈ The PnP protocol, which allows the agent and the Controller to communicate.
◈ (optional) A cloud redirect server, for devices that cannot use DHCP or DNS to discover Cisco DNA Center.



Discovering the Controller


The first thing that needs to happen is for the device to get in contact with the controller. There are four mechanisms you can use to make this work:

◈ DHCP server, using option 43, which is set to the IP address of the controller.
◈ DHCP server, using a DNS domain name. The device will do a DNS lookup of pnphelper.<your domain>
◈ Cloud redirection, which is currently in controlled availability.
◈ USB key. This can be used for routers and remote devices, where some initial configuration of the WAN connection is required (e.g. MPLS configuration).

Getting Started – PnP App


At present, PnP is not integrated into the provisioning workflow; this will be done in the future. There is a standalone PnP app in the tools section.


Getting Started – Creating a Workflow


Open the app, and the first big change is the definition of a workflow. In this example, we define a simple workflow that uses a configuration template to provision a new switch. There is also a default workflow. Select Workflows and "Add workflow", which shows a default workflow that can be edited. Delete the image task (which would upgrade the IOS on the device) and then select a template for the configuration file, as shown in the subsequent step.


For simplicity, we assume the template has already been created.   There will be another blog series on templates.

NOTE: It is still possible to upload a discrete configuration file per device (rather than a template). Templates are organized into projects, so a template needs to be created first. The simple workflow leaves a single step, which will deploy the template called "base config".


Adding a Device


Unlike APIC-EM, there is no concept of a project exposed.

There is still an unclaimed or pre-provisioned PnP flow.  The difference is that everything is now “claimed”.  To pre-provision a device,  add it to PnP, then “Add + claim” it.


When claiming the device, the values for the template variables need to be defined.   In this case the “base config” template requires a single variable called “hostname”.   This variable is set to “pnp-test1”.


This results in a PnP device rule being created on DNA Center. The rule was created by the user, the state is planned (which means the device has not initiated communication yet), and there has been no contact. It also specifies the workflow for onboarding, "simpleTemplate".


Once these steps are completed, the device is powered on.  It contacts DNA Center and the onboarding process begins.


Once the process has completed, the device is moved to provisioned and added to the inventory.


Although the device is added to the inventory, on the device provisioning page it appears as "Not Provisioned". This is in reference to the Day-N provisioning, which includes the site settings, templates, and policy provisioning. This workflow will be further integrated in the future.


What Next?


There was still a bit of human activity in provisioning this device. I needed to create the initial template file, add the device, claim the device, and provide values for the template variables. Oh, and I needed to plug the device in and power it on. All except the last step I could automate. Imagine you had 1600 switches you wanted to pre-provision with a template! The next blog post will show how the REST API can automate this process.

Friday, 20 July 2018

DDoS Mitigation for Modern Peering

DDoS understandably makes for a lot of headlines and is the source of genuine concern on the modern Internet. For example, the recent memcached attacks were remarkable for the volume of traffic generated through amplification (1.7 Tbps) and are the latest reminder that a well-designed network edge needs a solid DDoS mitigation solution in place.

Beyond the headlines, the data tells us that while DDoS is an increasing problem, the growth trend is in more complex attacks that target applications, not infrastructure.

DDoS Attacks Today


Before getting into the remediation, let’s first look at some basic questions.

1. How much is DDoS growing?
2. What is the nature of the attacks?

DDoS Traffic

A Cisco partner, Arbor Networks, publishes an annual report with a lot of great data on security threats. Arbor ATLAS, which monitors over one third of all Internet traffic, shows that the total volume of DDoS attacks has been relatively flat since May 2016, hovering around 0.5 Pbps (petabits per second) per month.


The next two charts show that the number of attacks has been (slightly) on the decline in the last year, but the average attack size has increased.


Let’s put these impressive numbers into context.

According to Cisco VNI, internet traffic is currently growing at up to 30% CAGR (with many carriers planning for higher growth internally), driven predominantly by video, with up to 80% of the growth coming from content delivery networks and content providers. The VNI report also states that, by 2021, global IP traffic will reach 3.3 ZB per year, or 278 EB per month.

So, while individual DDoS attacks may be growing in size, the Arbor ATLAS and Cisco VNI data tell us that DDoS is growing more slowly than overall internet traffic. Additionally, the biggest source of traffic growth is video from content networks, which are not (usually) origins of DDoS attacks.

This has important implications when it comes to designing a modern peering architecture because it clearly shows peering hardware should optimize for bandwidth growth, with fast reaction capabilities to throttle attacks. For the foreseeable future, peering routers should be as scalable and cost-effective as possible to satisfy these priorities.

Types of DDoS Attacks


Clearly, attackers are using more advanced DDoS tools and launching harder hitting attacks than before. At a high level, there are two types of attacks that our customers encounter in the wild:

1. Volumetric attacks: these attacks are typically very high bandwidth and are designed to disrupt a specific target, be it an application, organization, or an individual, by flooding it with traffic. Volumetric attacks are usually immediately obvious to both the target and upstream connectivity providers.  These attacks typically target infrastructure and seek to overwhelm network capacity.

2. Application layer and state exhaustion attacks: these attacks usually do not consume the raw bandwidth of a volumetric attack, as they have to conform to the protocol the application itself is using, often involving protocol handshakes and protocol/application compliance. This implies that these types of attacks will primarily be launched using intelligent clients. These attacks typically target applications and are designed to overwhelm stateful devices like load-balancers, or the application endpoints themselves.

The DDoS attacks of today are very different from the attacks seen a few years ago. As DDoS defenses have become more effective, attackers have turned to using more advanced attacks, often focusing on application vulnerabilities instead of flooding attacks.  Volumetric attacks are sometimes used to distract security response teams while more sophisticated attacks, such as application and infrastructure penetrations, are attempted.

Detecting and Mitigating Attacks in the Network


For carriers, DDoS solutions have traditionally focused on NetFlow exports collected at peering routers with intelligent analytics systems aggregating and analyzing these flow records (along with other data such as BGP NLRI).  As large volumetric attacks occur without warning, it’s important to quickly identify attack sources and traffic types in order to understand the nature of the attack and plan a response.

In almost all cases, it is possible to mitigate volumetric attacks at the edges of the network using the five-tuple parameters and packet length of the attack, thereby dropping the offending traffic at the peering router.  This avoids having to transport the attack traffic to dedicated cleaning centers. In these scenarios, flow analysis tools can rapidly program the network edge to drop flows using tools such as BGP FlowSpec.

With the increasing use of FlowSpec, these NetFlow analytics systems can now move past detection of attack patterns and directly signal dynamic ACLs to drop traffic at line rate in ingress routers. The main advantages are speed and automation: carriers can quickly determine the nature of an attack and trigger mitigation in the infrastructure from a single interface.
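
As a toy illustration of the detection side of that closed loop, flagging volumetric five-tuples reduces to aggregating flow records and comparing totals against a threshold (the record format and threshold below are invented for the example; production collectors work at far larger scale):

from collections import defaultdict

def find_volumetric_offenders(flow_records, threshold_bytes):
    # flow_records: iterable of ((src, dst, proto, sport, dport), byte_count)
    # pairs, as might be aggregated from NetFlow/IPFIX exports at the edge.
    totals = defaultdict(int)
    for five_tuple, byte_count in flow_records:
        totals[five_tuple] += byte_count
    # Each five-tuple over the threshold becomes a candidate FlowSpec drop rule.
    return [ft for ft, total in totals.items() if total > threshold_bytes]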

The recent memcached attack is a case in point. It was, in fact, a basic volumetric attack and fairly straightforward to detect with NetFlow and block with traditional five-tuple packet filters deployed using BGP FlowSpec.

In fact, according to our data, NetFlow-based flow analysis has been successful in rapidly detecting all known volumetric-type DDoS attacks. Moreover, according to analysis done by Arbor and Cisco of the approximately 713 TB of DDoS attack traffic analyzed in June 2018, all the volumetric attacks could have been handled with a closed loop of NetFlow analytics and BGP FlowSpec configuration, using automated and, exceptionally, expert-based analysis.

Today’s Attacks are More Complex and Dynamic


But attackers are motivated and well aware of the techniques used to mitigate their attacks. Because carrier-based DDoS controls have traditionally been manually driven, attackers know that simply changing the pattern of their attack faster than operators can detect it and install new filters is the simplest way of avoiding traditional defenses. In some cases, this is as simple as rotating port numbers or other simple patterns in their attacks.

But attackers are also becoming more sophisticated and are now actively fixing their attack tools to avoid easy-to-spot patterns in the packet headers. Many of the signatures used to detect attacks in the past relied on these errors in manually-crafted packets.

Today, DDoS mitigation requires a multi-layered approach: traffic awareness through flow analytics for controlling volumetric attacks that threaten infrastructures, and more sophisticated application layer protections (using protocol “scrubbers”) to address the more complex “low and slow” attacks that now target application endpoints.

Looking for a Magic Bullet


Customers sometimes ask us if it's possible to detect and respond to application-level attacks using the multi-terabit-class ASICs in our Service Provider router portfolio.

Volumetric attacks are usually easy to detect with sampled or hardware-native flow identification: the really large volumes can’t “hide in the noise” the way that the much lower volume application layer attacks can.

Unfortunately, with application level attacks, it’s not so easy. Even if there is a recognizable “signature” to look for, detection would mean ASIC-level flow matching of arbitrary patterns found deep in packets. But the nature of very fast silicon means that these on-chip flow tables are limited in size to thousands or tens of thousands of flows. At peering points, the number of flows to look at is many orders of magnitude higher, rendering this sort of on-chip tracking impractical.

Equally important, high speed packet hardware is optimized to work on network packets that have well-defined and relatively compact headers. Matching longer, more complex, or variable length fields (which are extremely common in filters that look for URLs or DNS record strings) requires a completely different set of silicon/NPU choices that mean that the total bandwidth capability of the device is reduced significantly. Not a good trade-off to make given the huge growth in traffic volumes I discussed earlier.

The pragmatic solution to mitigate complex DDoS attacks without sacrificing the bandwidth necessary to keep up with future traffic growth, is to do packet sampling and push the analysis and collection to external systems that offer a breadth of analytics and scale.  Then, focus first on eliminating the single slowest part of remediation: manual configuration.  Automating mitigations via FlowSpec, driven by intelligent analytics, is today’s best practice for coping with large-scale DDoS attacks.