Wednesday, 25 July 2018

How to Use the Plug and Play API in DNA Center – Part 2

Continuing the PnP story…


The first blog in this series gave an overview of network Plug and Play (PnP) and how it has evolved in DNA Center. It showed a very simple workflow to provision a device with a configuration template containing a variable called "hostname".

One of my customers had a requirement to automate the PnP process for deploying 1600 new switches. Assigning a workflow and template variables to each device through the user interface would be a very time-consuming and potentially error-prone process.

This blog provides an example script to automate this, then breaks down each of the API calls in case you would like to customize them further.

The following API steps will be shown in detail:

1. Automated discovery of configuration template and variables from an onboarding workflow.
2. Automated addition of a new device to PnP.
3. Automated claiming of a device, assignment of the workflow and population of template variables.

The example scripts used here are all available from this GitHub repository.

Bulk Device Provisioning – Template Example


In order to bulk-configure devices, I need a configuration file (CSV in this case). This file contains:

◈ name of the PnP rule
◈ serial number of the device to be onboarded
◈ product ID of the device
◈ workflow name – which contains a configuration template
◈ hostname – a variable used in the configuration template defined in the workflow.

name,serial,pid,workflow,hostname
auto_python0,12345678910,WS-C3850,simpleTemplate,adam0
auto_python1,12345678911,WS-C3850,simpleTemplate,adam1
auto_python2,12345678912,WS-C3850,simpleTemplate,adam2
auto_python3,12345678913,WS-C3850,simpleTemplate,adam3
auto_python4,12345678914,WS-C3850,simpleTemplate,adam4
auto_python5,12345678915,WS-C3850,simpleTemplate,adam5
auto_python6,12345678916,WS-C3850,simpleTemplate,adam6
auto_python7,12345678917,WS-C3850,simpleTemplate,adam7
auto_python8,12345678918,WS-C3850,simpleTemplate,adam8
auto_python9,12345678919,WS-C3850,simpleTemplate,adam9

The CSV file is used as input to the script. In this case it is only 10 devices, but the process would be the same for 1600.
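
Conceptually, the script is just a loop over the CSV rows. Here is a minimal sketch of that loop, assuming helper functions like the ones sketched in the API sections later in this post (get_workflow, add_device, and claim_device are illustrative names, not necessarily those used in the repository script):

import csv
import sys

# read each row of the CSV and run the add + claim steps for it;
# the helper functions are sketched in the API sections below
with open(sys.argv[1]) as device_file:
    for row in csv.DictReader(device_file):
        workflow_id, config_id = get_workflow(row["workflow"])
        device_id = add_device(row["name"], row["serial"], row["pid"])
        claim_device(device_id, workflow_id, config_id, row["hostname"])
        print("Device:{} name:{} workflow:{} claimed".format(
            row["serial"], row["name"], row["workflow"]))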

python ./10_add_and_claim.py work_files/bigtest.csv
Using device file: work_files/bigtest.csv
##########################
Device:12345678910 name:auto_python0 workflow:simpleTemplate Status:PLANNED
Device:12345678911 name:auto_python1 workflow:simpleTemplate Status:PLANNED
Device:12345678912 name:auto_python2 workflow:simpleTemplate Status:PLANNED
Device:12345678913 name:auto_python3 workflow:simpleTemplate Status:PLANNED
Device:12345678914 name:auto_python4 workflow:simpleTemplate Status:PLANNED
Device:12345678915 name:auto_python5 workflow:simpleTemplate Status:PLANNED
Device:12345678916 name:auto_python6 workflow:simpleTemplate Status:PLANNED
Device:12345678917 name:auto_python7 workflow:simpleTemplate Status:PLANNED
Device:12345678918 name:auto_python8 workflow:simpleTemplate Status:PLANNED
Device:12345678919 name:auto_python9 workflow:simpleTemplate Status:PLANNED

The DNA Center GUI shows 10 new devices have been added to the PnP service, waiting to be onboarded. Once the devices connect to the network, they will contact DNA Center and be onboarded with the intended configuration.

The other python scripts in the repository will show the complete configuration template for the device (before it is provisioned) and do a bulk delete of the devices.

Looking at the API details – Workflow


I am going to keep the same steps as the first blog. The first step is to examine the workflow to find the template and extract the variables from it.

A GET request looks up the workflow "simpleTemplate" by name. The workflow has a single step, called "Config Download", and the template has a UUID of "d0259219-3433-4a52-a933-7096ac0854c3", which is required for the next step. The workflow id is "5b16465b7a5c2900077b664e", which is required to claim the device.

GET https://dnac/api/v1/onboarding/pnp-workflow?name=simpleTemplate  
[
    {
        "version": 1,
        "name": "simpleTemplate",
        "description": "",
        "useState": "Available",
        "type": "Standard",
        "addedOn": 1528186459377,
        "lastupdateOn": 1528186459377,
        "startTime": 0,
        "endTime": 0,
        "execTime": 0,
        "currTaskIdx": 0,
        "tasks": [
            {
                "taskSeqNo": 0, 
                "name": "Config Download",
                "type": "Config", 
                "startTime": 0,
                "endTime": 0,
                "timeTaken": 0,
                "currWorkItemIdx": 0,
                "configInfo": {
                    "configId": "d0259219-3433-4a52-a933-7096ac0854c3" ,
                    "configFileUrl": null,
                    "fileServiceId": null,
                    "saveToStartUp": true,
                    "connLossRollBack": true,
                    "configParameters": null
                }
            }
  ],
        "addToInventory": true,
        "tenantId": "5afe871e2e1c86008e4692c5",
        "id": "5b16465b7a5c2900077b664e"
    }
]
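
In Python, that lookup is a single GET with the requests library. Here is a minimal sketch; the DNAC address and HEADERS token are placeholders (DNA Center expects a token, obtained from a prior authentication call, in the X-Auth-Token header):

import requests

DNAC = "https://dnac"                   # placeholder controller address
HEADERS = {"X-Auth-Token": "<token>"}   # token from a prior auth call

def get_workflow(name):
    # look up the onboarding workflow by name and pull out the two IDs
    # needed later: the workflow id and the config (template) id
    url = DNAC + "/api/v1/onboarding/pnp-workflow"
    response = requests.get(url, params={"name": name},
                            headers=HEADERS, verify=False)
    workflow = response.json()[0]
    config_id = workflow["tasks"][0]["configInfo"]["configId"]
    return workflow["id"], config_id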

A GET API call using the template id d0259219-3433-4a52-a933-7096ac0854c3 retrieves the template.  Notice there is a single variable with the name “hostname”, which will be used in the claim process.

Some of the response has been removed for brevity. Templates will be covered in more detail in future blogs.

GET https://dnac/api/v1/template-programmer/template/d0259219-3433-4a52-a933-7096ac0854c3

{
    "name": "base config",
    "description": "",
    "tags": [],
    "deviceTypes": [
        {
            "productFamily": "Switches and Hubs"
        }
    ],
    "softwareType": "IOS-XE",
    "softwareVariant": "XE",
    "templateParams": [
        {
            "parameterName": "hostname",
            "dataType": null,
            "defaultValue": null,
            "description": null,
            "required": true,
            "notParam": false,
            "displayName": null,
            "instructionText": null,
            "group": null,
            "order": 1,
            "selection": null,
            "range": [],
            "key": null,
            "provider": null,
            "binding": "",
            "id": "4481c1a4-fcb1-4ee8-ba2f-f24f2d39035b"
        }
    ],
<snip>
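
A corresponding sketch in Python (reusing the DNAC and HEADERS placeholders from above) returns the variable names the claim step will need to supply:

def get_template_params(template_id):
    # fetch the template and return the names of its real variables,
    # skipping entries flagged notParam
    url = DNAC + "/api/v1/template-programmer/template/" + template_id
    response = requests.get(url, headers=HEADERS, verify=False)
    return [param["parameterName"]
            for param in response.json()["templateParams"]
            if not param["notParam"]]

For the template above, this would return a single entry, ["hostname"].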

Adding a PnP device


The next step is to add the device to the PnP database. The three mandatory attributes are "name", "serialNumber", and "pid". The "pid" (product ID) is used to determine whether the device is capable of stacking, as some workflows have specific steps for stacks. A POST request sends the attributes in a JSON payload.

The payload takes a list of “deviceInfo”, so multiple devices could be added in a single API call.

POST https://dnac/api/v1/onboarding/pnp-device/import
[{
    "deviceInfo": {
        "name": "pnp-test",
        "serialNumber": "FDO1732Q00B",
        "pid": "ws-c3650",
        "sudiRequired": false,
        "userSudiSerialNos": [],
        "stack": false,
        "aaaCredentials": {
            "username": "",
            "password": ""
        }
    }
}]

The result has been shortened for brevity. The API call is synchronous (no task was returned, in contrast to APIC-EM).

The "id" "5b463bde2cc0f40007b126ee" will be required for the claim process; it uniquely identifies this PnP device rule.

{
    "successList": [
        {
            "version": 1,
            "deviceInfo": {
                "serialNumber": "FDO1732Q00B",
                "name": "pnp-test",
                "pid": "ws-c3650",
                "lastSyncTime": 0,
                "addedOn": 1531329502838,
                "lastUpdateOn": 1531329502838,
                "firstContact": 0,
                "lastContact": 0,
                "state": "Unclaimed",
                "onbState": "Not Contacted",
                "cmState": "Not Contacted",
                "userSudiSerialNos": [],
                "source": "User",
"id": "5b463bde2cc0f40007b126ee"

Claiming PnP device


The final step is to "claim" the device. This step associates the workflow (including the configuration template and variables) with the device. The values of "workflowId", "configId" (the template), and "deviceId" were discovered in the earlier steps.

A value for the "hostname" variable is also required; in this case "pnp-test1" is provided.

Again, a list is provided for a given workflow, so multiple devices could be claimed with the same workflow. Each device can have its own set of config parameters.

POST https://dnac/api/v1/onboarding/pnp-device/claim
{
  "workflowId": "5b16465b7a5c2900077b664e",
  "deviceClaimList": [
    {
      "configList": [
        {
          "configId": "d0259219-3433-4a52-a933-7096ac0854c3",
          "configParameters": [
            {
              "value": "pnp-test1",
              "key": "hostname"
            }
          ]
        }
      ],
      "deviceId": "5b463bde2cc0f40007b126ee"
    }
  ],
  "populateInventory": true,
  "imageId": null,
  "projectId": null,
  "configId": null
}

The response is synchronous and gives a simple indication of success.

{
    "message": "Device(s) Claimed",
    "statusCode": 200
}
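
Putting the claim step into Python, a sketch might look like this (placeholders as before; the hostname argument populates the template variable discovered earlier):

def claim_device(device_id, workflow_id, config_id, hostname):
    # associate the workflow and its template variables with the device
    url = DNAC + "/api/v1/onboarding/pnp-device/claim"
    payload = {
        "workflowId": workflow_id,
        "deviceClaimList": [{
            "configList": [{
                "configId": config_id,
                "configParameters": [
                    {"key": "hostname", "value": hostname}
                ]
            }],
            "deviceId": device_id
        }],
        "populateInventory": True,
        "imageId": None,
        "projectId": None,
        "configId": None
    }
    response = requests.post(url, json=payload,
                             headers=HEADERS, verify=False)
    return response.json()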

Summary


This blog covered the green API calls below. Although the "unclaimed" workflow was not explicitly covered, the claim API is the same as for pre-provisioned devices. The difference is the source of the network device, as the device contacts DNA Center before a rule is in place. This means the /api/v1/onboarding/pnp-device/import API call is not required.

Using these APIs you can automate the onboarding of tens, hundreds, or even thousands of devices.

Sunday, 22 July 2018

Cisco DNA Center Plug and Play (PnP) – Part 1

Background


I have written a number of blogs on Network Plug and Play (PnP) on APIC-EM and wanted to provide an update on the new, improved PnP in Cisco DNA Center.

This new series covers the changes and enhancements made to PnP in Cisco DNA Center 1.2. The PnP application was not officially exposed in Cisco DNA Center 1.1.x. The main changes in 1.2 include:

◈ Flexible workflow to onboard devices (vs the rigid two-step process in the past).
◈ Support for stacking and stack renumbering as part of a workflow.
◈ Reuse of the Cisco DNA Center image repository (part of Software Image Management, SWIM) vs the standalone APIC-EM image repository.
◈ Reuse of the Cisco DNA Center template engine vs the standalone APIC-EM template library.
◈ New API – /api/v1/onboarding.

This initial blog post will cover the UI and workflow changes, and the next blog post will cover the API changes.

DevNet Zone at Cisco Live US

Key Components


A PnP solution has three main components (and one optional one):

◈ An agent, which resides in the IOS software and looks for a "Controller" when the device is first booted.
◈ A PnP server, which is a service running on Cisco DNA Center.
◈ The PnP protocol, which allows the agent and the controller to communicate.
◈ (Optional) A cloud redirect server, for devices that cannot use DHCP or DNS to discover Cisco DNA Center.

A PnP solution has three main components (and one optional one)

Discovering the Controller


The first thing that needs to happen is for the device to get in contact with the controller. There are four mechanisms you can use to make this work:

◈ DHCP server, using option 43, which is set to the IP address of the controller (see the example after this list).
◈ DHCP server, using a DNS domain name. The device will do a DNS lookup of pnpserver.<your domain>
◈ Cloud redirection, which is currently in controlled availability.
◈ USB key. This can be used for routers and remote devices, where some initial configuration of the WAN connection is required (e.g. MPLS configuration).
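
As an example of the first option, a minimal Cisco IOS DHCP pool using option 43 might look like the sketch below. The "5A1N;B2;K4;I<ip>;J<port>" string is the documented PnP discovery format, but treat the addresses here as placeholders for your own environment:

ip dhcp pool pnp_pool
 network 192.168.10.0 255.255.255.0
 default-router 192.168.10.1
 ! 5A1N = PnP, B2 = IPv4, K4 = HTTP transport, I = controller IP, J = port
 option 43 ascii "5A1N;B2;K4;I192.168.10.10;J80"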

Getting Started – PnP App


At present PnP is not integrated into the provisioning workflow; this will be done in the future. There is a standalone PnP app in the Tools section.

Getting Started – Creating a Workflow


Open the app, and the first big change is the definition of a workflow. In this example, we define a simple workflow that uses a configuration template to provision a new switch. There is also a default workflow: select Workflows and "Add Workflow", which shows a default workflow that can be edited. Delete the image task (which would upgrade the IOS on the device) and then select a template for the configuration file, as shown in the subsequent step.

For simplicity, we assume the template has already been created. There will be another blog series on templates.

NOTE: It is still possible to upload a discrete configuration file per device (rather than a template). Templates are organized into projects, so a template needs to be created first. The simple workflow leaves a single step, which will deploy the template called "base config".

Adding a Device


Unlike APIC-EM, there is no concept of a project exposed.

There is still an unclaimed or pre-provisioned PnP flow. The difference is that everything is now "claimed". To pre-provision a device, add it to PnP, then "Add + Claim" it.

When claiming the device, the values for the template variables need to be defined. In this case the "base config" template requires a single variable called "hostname". This variable is set to "pnp-test1".

This results in a PnP device rule being created on DNA Center. The rule was created by the user, the state is "Planned" (which means the device has not initiated communication yet), and there has been no contact. It also specifies "simpleTemplate" as the workflow for onboarding.

Once these steps are completed, the device is powered on.  It contacts DNA Center and the onboarding process begins.

Once the process has completed, the device is moved to the provisioned state and added to the inventory.

Although the device is added to the inventory, on the device provisioning page it appears as "Not Provisioned". This is in reference to the Day-N provisioning, which includes site settings, templates, and policy provisioning. This workflow will be further integrated in the future.

What Next?


There was still a bit of human activity in provisioning this device. I needed to create the initial template file, add the device, claim the device, and provide values for the template variables. Oh, and I needed to plug the device in and power it on. All except the last step I could automate. Imagine you had 1600 switches you wanted to pre-provision with a template! The next blog post will show how the REST API can automate this process.

Friday, 20 July 2018

DDoS Mitigation for Modern Peering

DDoS understandably makes for a lot of headlines and is the source of genuine concern on the modern Internet. For example, the recent memcached attacks were remarkable for the volume of traffic generated through amplification (1.7Tbps) and are the latest reminder that a well-designed network edge needs a solid DDoS mitigation solution in place.

Beyond the headlines, the data tells us that while DDoS is an increasing problem, the growth trend is in more complex attacks that target applications not infrastructure.

DDoS Attacks Today


Before getting into the remediation, let’s first look at some basic questions.

1. How much is DDoS growing?
2. What is the nature of the attacks?

DDoS Traffic

A Cisco partner, Arbor Networks, publishes an annual report with a lot of great data on security threats. Arbor ATLAS, which monitors over one third of all Internet traffic, shows that the total volume of DDoS attack traffic has been relatively flat since May 2016, hovering around 0.5 petabytes per month.

The next two charts show that the number of attacks has been (slightly) on the decline in the last year, but the average attack size has increased.


Let’s put these impressive numbers into context.

According to Cisco VNI, internet traffic is currently growing at up to 30% CAGR (with many carriers planning for higher growth internally), driven predominantly by video, with up to 80% of the growth coming from content delivery networks and content providers. The VNI report also states that, by 2021, global IP traffic will reach 3.3 ZB per year, or 278 EB per month.

So, while individual DDoS attacks may be growing in size, the Arbor ATLAS and Cisco VNI data tell us that DDoS is growing more slowly than overall internet traffic. Additionally, the biggest source of traffic growth is video from content networks, which are not (usually) origins of DDoS attacks.

This has important implications when it comes to designing a modern peering architecture because it clearly shows peering hardware should optimize for bandwidth growth, with fast reaction capabilities to throttle attacks. For the foreseeable future, peering routers should be as scalable and cost-effective as possible to satisfy these priorities.

Types of DDoS Attacks


Clearly, attackers are using more advanced DDoS tools and launching harder hitting attacks than before. At a high level, there are two types of attacks that our customers encounter in the wild:

1. Volumetric attacks: these attacks are typically very high bandwidth and are designed to disrupt a specific target, be it an application, organization, or an individual, by flooding it with traffic. Volumetric attacks are usually immediately obvious to both the target and upstream connectivity providers.  These attacks typically target infrastructure and seek to overwhelm network capacity.

2. Application layer and state exhaustion attacks: these attacks usually do not consume the raw bandwidth of a volumetric attack, as they have to conform to the protocol the application itself is using, often involving protocol handshakes and protocol/application compliance. This implies that these types of attacks will primarily be launched using intelligent clients. These attacks typically target applications and are designed to overwhelm stateful devices like load-balancers, or the application endpoints themselves.

The DDoS attacks of today are very different from the attacks seen a few years ago. As DDoS defenses have become more effective, attackers have turned to using more advanced attacks, often focusing on application vulnerabilities instead of flooding attacks.  Volumetric attacks are sometimes used to distract security response teams while more sophisticated attacks, such as application and infrastructure penetrations, are attempted.

Detecting and Mitigating Attacks in the Network


For carriers, DDoS solutions have traditionally focused on NetFlow exports collected at peering routers with intelligent analytics systems aggregating and analyzing these flow records (along with other data such as BGP NLRI).  As large volumetric attacks occur without warning, it’s important to quickly identify attack sources and traffic types in order to understand the nature of the attack and plan a response.

In almost all cases, it is possible to mitigate volumetric attacks at the edges of the network using the five-tuple parameters and packet length of the attack, thereby dropping the offending traffic at the peering router.  This avoids having to transport the attack traffic to dedicated cleaning centers. In these scenarios, flow analysis tools can rapidly program the network edge to drop flows using tools such as BGP FlowSpec.

With increasing use of FlowSpec, these NetFlow analytics systems can now move past detection of attack patterns and directly signal dynamic ACLs to drop traffic at line-rate in ingress routers. The main advantage is speed and automation. This allows carriers to quickly determine the nature of an attack and trigger mitigation in the infrastructure from a single interface.

The recent memcached attack is a case in point. It was, in fact, a basic volumetric attack and fairly straightforward to detect with NetFlow and block with traditional five-tuple packet filters deployed using BGP FlowSpec.
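
As an illustration, a FlowSpec rule to discard memcached reflection traffic (UDP source port 11211), announced from an ExaBGP-style controller, might look like the sketch below. The neighbor addresses and AS numbers are placeholders, and the exact stanza syntax depends on your ExaBGP version:

neighbor 192.0.2.1 {                  # peering router (placeholder)
    router-id 192.0.2.2;
    local-address 192.0.2.2;
    local-as 64500;
    peer-as 64500;

    flow {
        route block-memcached {
            match {
                protocol udp;
                source-port =11211;   # memcached reflection traffic
            }
            then {
                discard;              # drop at the network edge
            }
        }
    }
}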

In fact, according to our data, NetFlow-based flow analysis has been successful in rapidly detecting all known volumetric type DDoS attacks. Moreover, according to analysis done by Arbor and Cisco, of the approximately 713TB of analyzed DDoS attacks in June 2018, all the volumetric attacks could have been handled with a closed loop of NetFlow analytics and BGP FlowSpec configuration, using automated and, exceptionally, expert-based analysis.

Today’s Attacks are More Complex and Dynamic


But, attackers are motivated and are well aware of the techniques used to mitigate their attacks.  Because carrier-based DDoS controls have traditionally been manually-driven, attackers know that simply changing the pattern of their attack faster than operators can detect and install new filters is the simplest way of avoiding traditional defenses. In some cases, this is as simple as rotating port numbers or other simple patterns in their attacks.

But attackers are also becoming more sophisticated and are now actively fixing their attack tools to avoid easy-to-spot patterns in the packet headers. Many of the signatures used to detect attacks in the past relied on these errors in manually-crafted packets.

Today, DDoS mitigation requires a multi-layered approach: traffic awareness through flow analytics for controlling volumetric attacks that threaten infrastructures, and more sophisticated application layer protections (using protocol “scrubbers”) to address the more complex “low and slow” attacks that now target application endpoints.

Looking for a Magic Bullet


Customers sometimes ask us if it’s possible to detect and respond to application-level attacks using the multi-terabit class ASIC in our Service Provider router portfolio.

Volumetric attacks are usually easy to detect with sampled or hardware-native flow identification: the really large volumes can’t “hide in the noise” the way that the much lower volume application layer attacks can.

Unfortunately, with application level attacks, it’s not so easy. Even if there is a recognizable “signature” to look for, detection would mean ASIC-level flow matching of arbitrary patterns found deep in packets. But the nature of very fast silicon means that these on-chip flow tables are limited in size to thousands or tens of thousands of flows. At peering points, the number of flows to look at is many orders of magnitude higher, rendering this sort of on-chip tracking impractical.

Equally important, high speed packet hardware is optimized to work on network packets that have well-defined and relatively compact headers. Matching longer, more complex, or variable length fields (which are extremely common in filters that look for URLs or DNS record strings) requires a completely different set of silicon/NPU choices that mean that the total bandwidth capability of the device is reduced significantly. Not a good trade-off to make given the huge growth in traffic volumes I discussed earlier.

The pragmatic solution to mitigating complex DDoS attacks without sacrificing the bandwidth necessary to keep up with future traffic growth is to do packet sampling and push the analysis and collection to external systems that offer a breadth of analytics and scale. Then, focus first on eliminating the single slowest part of remediation: manual configuration. Automating mitigations via FlowSpec, driven by intelligent analytics, is today's best practice for coping with large-scale DDoS attacks.

Wednesday, 18 July 2018

Why Cisco SD-Branch is better than a ‘white box’

A typical branch office IT installation consists of multiple point products, each having a specific function, engineered into a rigid topology. Changing something in that chain, be it adding a new function or connection, increasing bandwidth, or introducing encryption, affects multiple separate products. This introduces risk, increases time to test, and increases roll-out time. If any piece of equipment requires a physical change, the time and personnel costs multiply. And that additional roll-out time may delay your business objectives and cause productivity and innovation to suffer.

To help solve these challenges, enterprises and service providers are redesigning the branch WAN network to consolidate network services from several dedicated hardware appliance types into virtualized, on-demand applications running in software at the branch office, with centralized orchestration and management – the Software-Defined Branch (SD-Branch).

Choosing the right hardware platform for these applications to run on is important when deploying an SD-Branch. Customers have several choices, ranging from commercial off-the-shelf PCs or larger servers – aka 'white boxes' – to purpose-built SD-Branch platforms, or even a blade/module inserted into an existing router to add SD-Branch services, all of which can run x86-based applications. These white boxes may not be the best choice for running or managing network services: they are mostly a collection of disparate applications loaded onto a device that may not have been built for a branch office environment, may lack sufficient resources for running network services for the branch, and are difficult to integrate and manage as a whole – hardware platform, network services, and applications together.

The Cisco SD-Branch solution takes multiple functions that previously existed as discrete hardware appliances and deploys them instead as virtual network functions (VNFs) hosted on an x86-based compute platform. The Cisco SD-Branch delivers physical consolidation – saving space and power, with fewer points of failure – and substantially improves IT agility with on-demand services and centralized orchestration and management. Changes can be made quickly, automated, and delivered without truck rolls, in minutes, for what used to take weeks or months.

Hardware Hosting Platform – Pros and Cons


So how does one assemble a functioning and manageable deployment using white box hardware and various software frameworks without ending up with a wobbly stack of uncertainty? First, the hardware matters. An SD-Branch hardware platform can be any x86-based server, a server blade that runs inside your existing routing platform, or a purpose-built platform that provides options for specialized WAN interfaces (T1, xDSL, serial, etc.) and 4G/LTE access. It should be built for the enterprise office environment: form factor, acoustics, multi-core capable, WAN/LAN ports with the option to support PoE, etc. Additionally, data encryption has become a mandatory requirement for providing data privacy and security.

When selecting a platform for the SD-Branch, it is also important to ensure that performance will scale for the required VNFs and services and that the platform is built with enterprise-class components. Second, having an Operating System (OS) or hypervisor that can meet the needs for security, manageability, and orchestration is imperative. For a 'white box' solution this can be difficult: it can only be achieved through close collaboration between the OS vendor, the hardware vendor, the CPU manufacturer, and the application vendors, and it can be problematic since none of these has likely been purpose-built or tested for your networking applications.

In terms of physical interfaces, white boxes typically do not offer features such as Power over Ethernet (PoE). This is highly attractive because many IoT sensors rely on PoE. In addition, branches often require WAN interfaces such as 4G LTE, essential not only for backup or load sharing, but also as a transport option for SD-WAN architectures. Some locations may require legacy TDM links too, so it is important to deploy platforms with the flexibility to support more than simple Ethernet.

Figure 1 – Table of Pros and Cons

Deploy Cisco SD-Branch Platforms with Confidence


Cisco has developed purpose-built hardware platforms for the SD-Branch running an OS and hypervisor (NFVIS) that is custom-built for network services and avoids the pitfalls of a generic x86-based server or "white box" solution. The NFVIS implementation is designed for high levels of uptime by adopting a hardened Linux kernel and embedding drivers and low-level accelerations that take advantage of modern CPU features such as Single-Root Input/Output Virtualization (SR-IOV) for plumbing high-speed interfaces directly into virtual network functions. Security is also burned in, simplifying day-zero installations with plug-and-play, and ensuring that only trusted applications and services will boot up and run inside your network.

Figure 2 – Cisco UCS E module for ISR 4000 Series and ENCS 5000 Series platforms

Features and advantages of the ENCS 5000 Series, ISR 4000 Series with UCS E module, and NFVIS include:

◈ Designed for Enterprise deployments and targeted for simplification for networking teams
◈ Optimized for the deployment and monitoring of Virtual Network Functions
◈ On-demand services with plug-and-play and zero-touch deployment
◈ Secure and trusted infrastructure software
◈ Security tested and certified

Cisco SD-Branch enables agile, on-demand services and centralized orchestration for integrating new services with existing ones. Enterprises and service providers gain the ability to choose "best of breed" VNFs to implement a particular service. By using SD-Branch, you can spawn virtual devices to scale to new feature requirements. For example, deploy the Cisco ENCS 5000 Series as a single platform and virtualize all of your SD-Branch services; or, with your existing ISR branch router, insert a server blade and spin up an SD-Branch element that provides additional security functionality or runs multiple VNFs service-chained together for routing, security, WAN optimization, unified communications, etc. Similarly, SD-WAN can be deployed as an integral part of the routing VNF with a centrally automated and orchestrated management system.

Cisco's Digital Network Architecture (DNA) provides the proven and trusted SD-Branch hardware, software, and management building blocks to achieve the simplicity and flexibility required by CIOs and IT managers in today's digital business landscape – here is a whitepaper that dives deeper into this design guidance.

Trusted Cisco Network Services


The Cisco SD-Branch solution offers an open environment for the virtualization of both network functions and applications in the enterprise branch. Both Cisco and third-party VNFs can be on-boarded onto the solution.  Applications running in a Linux or Windows environment can also be instantiated on top of NFVIS and can be supported by DNA Center and the DNA Controller.

Some network functions that Cisco offers in a virtual form factor include:

◈ Cisco Integrated Services Virtual Router (ISRv) for virtual routing
◈ Cisco vEdge Router (vEdge) for virtual SD-WAN routing
◈ Cisco Adaptive Security Virtual Appliance (ASAv) for a virtual firewall
◈ Cisco Firepower™ Next-Generation Firewall Virtual (NGFWv) for integrated firewall and intrusion detection and prevention (IPS and IDS)
◈ Cisco Virtual Wide Area Application Services (vWAAS) for virtualized WAN optimization
◈ Cisco Virtual Wireless Controller (vWLC) for a virtualized wireless LAN controller

Third Party Open Ecosystem


Cisco’s open ecosystem approach for the SD-Branch allows other vendors to submit their VNFs for certification to help ensure compatibility and interoperability with the Cisco SD-Branch infrastructure. As a customer deploying Cisco’s SD-Branch solution with certified VNFs, you can be confident that the solution will successfully deploy, run, and interoperate with Cisco’s own suite of VNFs.

Some currently certified vendors and VNFs include:

◈ ThousandEyes – network intelligence platform
◈ Fortinet – FortiGate next generation firewall
◈ Palo Alto Networks – Next generation firewall
◈ Citrix Netscaler VPX – Application delivery controller (ADC)
◈ InfoVista Ipanema – SD-WAN
◈ Ctera – Enterprise NAS/file services

Many more third party VNFs are now under test for certification.

Sunday, 15 July 2018

The Future of Utilities: A Partner Opportunity

Energy, and specifically electrically powered devices and appliances, is part of everyday life for all of us. However, our dependency on personal computing devices like phones, tablets, and headsets has made us realize how important energy savings and battery efficiency are! Who hasn't experienced anxiety or even panic as your mobile phone is running out of battery and you forgot your power adapter and/or there is no charging point in sight? What would you be willing to give up at that very moment just for some power! Making sure that the electrical supply is constant and reliable is the main objective of every utility company in the world. Helping these companies achieve these goals is a bigger opportunity for Cisco and our partners than ever before.

The utilities market, just like every other industry, is going through major changes driven by new technology paradigms such as the Internet of Things, as well as the convergence of Operational Technologies (OT), which enable efficient management of the power grid, and Information Technologies (IT), which are becoming a critical part of the main line-of-business objectives. The digital transformation of utility companies is the direct result of many mega-trends; here are some of the most important ones:

Grid Resilience and Reliability


◈ Electricity supply is a matter of national security for every country, but maintaining its reliability is a challenge given the number of transmission lines deployed globally.

◈ "In the U.S. (alone) … 200,000 miles of high-voltage transmission lines and 5 million miles of local distribution lines… (connect) thousands of generating plants to factories, homes and businesses"

New Technology Disruptions:


◈ The relationship between electricity consumers and suppliers has changed with the connection of home appliances to the internet, allowing the end user to manage their power consumption in real-time.

◈ "Sales of connected appliances will surge … from 30 million units sold in 2016 to 178 million units in 2020"

Aging Workforce:


◈ The US Department of Energy estimates that 25% of employees in electric and gas utilities will be retiring within the next 4 years.

◈ In the United Kingdom 52% of the power sector workforce is over 45 years old.

For many years, most utilities have been used to treating their clients as simple ratepayers; however, the balance in this relationship is now changing thanks to technology, distributed energy resources, and social media. There are many positive business outcomes when utilities take a proactive approach by engaging their customers on their own terms.

Specifically, in the case of Distributed Energy Resources (renewable energy generation such as wind, solar, or wave power, which is now becoming more accessible to residential and business consumers), the paradigm is shifting as the price of battery power storage comes down, leading to what is known as the Internet of Energy.

All these changes are creating a perfect storm of challenges, prompting utilities to become ready for cybersecurity and physical security standards, to compete for vision, mind share, and leadership, and to address the digital disruption that all these events are creating. Nevertheless, is this really a disruption, or an opportunity?

Cisco leads utility digitization with the GridBlocks™ architecture, which provides a holistic view of the communications requirements; however, this is not merely about devices and pure connectivity. It's about providing a reference architecture for our partners that enables them to assess the overall digitization requirements of their utility clients and apply IP protocols in new and innovative ways to achieve performance levels that were impossible in the past.

The utility industry knows that providing reliable and secure connectivity across the millions of intelligent energy devices on the grid and on premises is needed to manage today's electric system and market operations into the future.

Friday, 13 July 2018

Turbocharge Your Next Webex Teams Proof of Concept and Demo Development

Use this Script to Turbocharge Webex Teams Bot Development


When I first started developing Webex Teams bots, I was immediately drawn to ngrok for its simplicity and power. It allowed me to rapidly start prototyping my bot and to do quick demos. It was simple to set up and has a great client API that allowed me to dig into the details when I needed to troubleshoot.

Because of the ephemeral nature of ngrok tunnels, though, it is somewhat of a nuisance to develop your bots, because every time you tear down an ngrok tunnel and bring it back up later, you end up with a different URL for the webhook. If you've prototyped or demoed a Webex Teams bot before, then you know you have to update the webhook with the new URL. This means going to the developer site and modifying the webhook by hand. The process goes something like this:

1. Bring up an ngrok tunnel
2. Go to the Webex Teams website
3. Update your webhook to the new URL that ngrok just gave you.
4. Run the demo
5. Shut down your demo
6. Rinse, lather and repeat every time you need to bring up the tunnel!

The same basic process applies when beginning to prototype your bot: bring up a tunnel, update the webhooks, develop/test, tear down the tunnel, rinse and repeat. Don't forget that you need to use your bot's token rather than your developer token for step 3 above. Plus you need to make sure that you don't make any copy/paste mistakes, etc. Yucky, manual work!

Fortunately we can mashup the ngrok Client and Webex Teams API’s to do this in a more elegant and automated fashion.

Solution


The process for automating this is relatively simple, so let's dive in. Typically it goes like this:

First, bring up the tunnel using ngrok:

./ngrok http 80

You will end up with something along these lines as output in the terminal.

ngrok startup output showing status and your new web hook URL.

We then run ngrok_startup.py with two arguments: the first is the port you want the tunnel to listen on, and the second is the name of the new tunnel.

python3 ngrok_startup.py 443 "super_awesome_demo_bot"

which will result in a series of status messages describing what the script is doing:

Expected status messages from ngrok_startup.py.

And we are done. The script used the ngrok client API to tear down the existing tunnels, create new ones, and then update your bot's webhook. Now you're able to iterate on your bot PoC and then demo it live without going through all those manual steps over and over.

The Code


I know a wall of text isn’t very interesting, but you may not be familiar with these specific APIs. So I’m going to walk through the five core functions.

So let’s take a looksie at some interesting API nuggets. Since ngrok automatically creates a set of tunnels at startup, in order to start with a clean slate, we will tear those down and create a new set.

The ngrok client API couldn't be easier to use. We start by getting a list of the open tunnels.

def get_tunnels_list(ngrok_base_url):
    print("get_tunnels_list start")
    error = ""
    active_tunnels = list()
    print(" Getting the list of tunnels...")
    tunnel_list_url = ngrok_base_url + tunnels_api_uri
    r = requests.get(tunnel_list_url, verify=False)
    print(" ...Received list of tunnels...")

    if r.status_code == 200:
        # only parse the body once we know the request succeeded
        json_object = json.loads(r.text)
        for potential_tunnel in json_object['tunnels']:
            active_tunnels.append(potential_tunnel)
    else:
        error = " Unable to get list of tunnels"

    print("get_tunnels_list end")

    return active_tunnels, error

As you can see above, we send an HTTP GET request to the local ngrok client. If successful, we get a list of the currently open tunnels.

Next we delete all the tunnels on the list. There should only be two, but we iterate through the entire list anyway.

def delete_active_tunnels(tunnel_list, ngrok_base_url):
    print("delete_active_tunnels start")
    errors = list()
    tunnel_delete_base_url = ngrok_base_url + tunnel_delete_uri

    print(" beginning delete of tunnels...")
    # iterate over the tunnels passed in, deleting each one by name
    for tunnel_to_delete in tunnel_list:
        tunnel_name = tunnel_to_delete['name']
        tunnel_delete_complete_url = tunnel_delete_base_url + tunnel_name

        delete_request = requests.delete(tunnel_delete_complete_url, verify=False)
        if delete_request.status_code != 204:
            errors.append("Error deleting tunnel: {}".format(tunnel_name))

    print(" ...ending delete of tunnels...")
    print("delete_active_tunnels end\n")

    return errors

Again, pretty self-explanatory: we take the list of tunnels we received from the previous code snippet and delete each tunnel with an HTTP DELETE request.

Next we create a new tunnel, using the tunnel name provided in the second argument of the ngrok_startup.py command.

def public_tunnel_for_name(tunnel_name, tunnel_port, ngrok_base_url):
    print("public_tunnel_for_name start")
    errors = list()
    public_tunnel = ()
    create_tunnel_url = ngrok_base_url + tunnels_api_uri

    # ask the ngrok client for a new http tunnel on the requested port
    print(" creating new tunnel...")
    tunnel_json = {'addr': tunnel_port, 'proto': 'http', 'name': tunnel_name}
    create_tunnel_response = requests.post(create_tunnel_url, json=tunnel_json, verify=False)
    if create_tunnel_response.status_code != 201:
        errors.append("Error creating tunnel: {}".format(create_tunnel_response.status_code))
    else:
        json_object = json.loads(create_tunnel_response.text)
        public_tunnel = (json_object['public_url'], json_object['uri'])

    print(" ...done creating new tunnel")
    print("public_tunnel_for_name end\n")

    return public_tunnel, errors

To create the tunnel, we just send an HTTP POST request to the ngrok client with a JSON snippet containing the port, the protocol, and a name for the tunnel. If all goes well, the ngrok client sends back a JSON payload with a new URL that your bot can use as its new webhook.

With the new tunnel URL in hand, we can start working with the Webex Teams webhook APIs. It's important to note that you need your bot's authorization token in the headers of all your Webex Teams API requests. In the script, this and other variables are set via environment variables and stored in a Python dictionary as follows:

dev_token = os.environ.get('SPARK_DEV_TOKEN')
webhook_request_headers = {
    "Accept" : "application/json",
    "Content-Type":"application/json",
    "Authorization": "Bearer {}".format(dev_token)
}
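
For reference, the environment variable used above just needs to be exported in your shell before running the script, for example:

export SPARK_DEV_TOKEN="<your bot's access token>"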

The first thing we do is delete the existing webhook.

def delete_prexisting_webhooks():
    print("delete_prexisting_webhooks start")
    errors=list()

    print(" deleting existing webhook...")
    webhooks_list_response = requests.get(webhook_base_url, headers=webhook_request_headers, verify=False)

    if webhooks_list_response.status_code != 200:
        errors.append("Error getting list of webhooks:  {}".format(webhooks_list_response.status_code))

    else:
        webhooks = json.loads(webhooks_list_response.text)['items']

        if len(webhooks) > 0:

            for webhook in webhooks:
                delete_webhook_url = webhook_base_url + '/' + webhook['id']
                delete_webhook_response = requests.delete(delete_webhook_url,headers=webhook_request_headers)
                if delete_webhook_response.status_code != 204:
                    errors.append("Delete Webhook Error code:  {}".format(delete_webhook_response.status_code))
    print(" ...Deleted existing webhooks")
    print("delete_prexisting_webhooks end\n")
    return errors

As you can see from the code block, first we need to get a list of webhooks and then iterate through the list, sending HTTP DELETE requests as we go. This could be somewhat problematic if you have multiple webhooks for the same bot, but we are only using this script to help automate a basic PoC/demo bot, where we would probably only have a single webhook firing.

Finally, we create the new webhook. Using the super handy Webex Teams APIs, we can easily create a new webhook.

def update_webhook(webhook_request_json):
    print("update_webhook start")

    webhook_creation_response = requests.post(webhook_base_url, json=webhook_request_json,
                                              headers=webhook_request_headers)
    if webhook_creation_response.status_code == 200:
        print(' Webhook creation for new tunnel successful!')
    else:
        print(' Webhook creation for new tunnel was not successful.  Status Code: {}'.format(
            webhook_creation_response.status_code))

    print("update_webhook end\n")