Thursday, 10 October 2019

Build Indoor Location Services into Your Applications

Show the location of people, assets, and products in 3-D


Indoor Location Services is a term we have been hearing a lot for the past couple of years. Within this space, Cisco has both indoor location and proximity products, which we will touch upon later in the blog. Glance is an indoor wireless location service application based on Cisco Meraki and Cisco DNA (Digital Network Architecture) Spaces for Wireless IP and Bluetooth Low Energy (BLE) devices. There is a “wow-factor” associated with this application, as it can show and render the location of people and assets in 3-D.


Glance lets you see locations of people and assets in 3-D.

Open-source Glance opens up innovation potential


This application has been installed at multiple Cisco innovation centers around the world, as well as at Cisco Live events. But there is vast potential for indoor location services-enabled applications in a wide variety of wireless-covered areas, such as retail, manufacturing, healthcare, entertainment, and public services.

That is why we have open-sourced Glance. Now developers across industries can develop indoor location services applications based on Meraki and Cisco DNA Spaces for IP/BLE wireless devices to serve more end-users in different scenarios and make Glance more powerful.

How to engage and contribute to open-source Glance


You are welcome to download and freely use the code of the Glance project under the Apache 2.0 license, and easily set up your own indoor location services with the latest updates. Glance includes basic administrative functions and docker-compose deployment scripts. We also welcome your contributions to the Glance project so that it can better serve people under different circumstances.
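
Getting a test instance running is straightforward. Below is a minimal deployment sketch, assuming Docker and docker-compose are installed; the repository URL is a placeholder, so substitute the actual location of the open-sourced Glance project.

# Clone the Glance repository (URL is a placeholder; use the real project location)
git clone https://github.com/<org>/glance.git
cd glance

# Review the environment settings (e.g., Meraki or DNA Spaces API credentials),
# then bring up the stack with the bundled deployment scripts
docker-compose up -d

# Confirm the services are running
docker-compose ps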

Let’s look at some of the features that Glance has to offer


Glance supports interactive, 3-D, multi-floor maps with real-time indoor navigation, people/things tracking, and facility finding (such as restrooms, service counters, and elevators). The maps also support visualized illustrations of objects such as furniture and signs to emulate real surroundings. People-tracking with Glance is a customer-friendly support service that enables customers to find, tag, and show the location of hundreds of people among thousands.

Glance provides real-time, 3-D heat-map capabilities, which facilitates analysis of people flow, as well as administrative functions for service setup.

Glance Software Stack


Glance Service structure

Deployment of Glance

Cisco has two products in the indoor location and proximity space


Indoor Location and Proximity with Cisco Meraki:

Meraki provides Real-Time Location Services (RTLS), which enables tracking of live client device locations within a network. Cisco Meraki APs can track the location of client devices independently, using the signal strength of each client device. This helps to locate client devices that are either stationary or moving inside the intended area. Meraki APs also have a BLE radio that can scan for BLE clients in close proximity.
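
As a rough illustration of the kind of data available, here is a hedged sketch using the Meraki Dashboard API to list BLE clients seen by the APs in a network. The API key and network ID are placeholders, and the exact endpoint may vary by API version.

# List BLE clients observed by Meraki APs in a network
# ($MERAKI_API_KEY and <networkId> are placeholders you must supply)
curl -s -H "X-Cisco-Meraki-API-Key: $MERAKI_API_KEY" \
  "https://api.meraki.com/api/v1/networks/<networkId>/bluetoothClients" | jq .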

Cisco DNA Spaces:

Cisco DNA Spaces (previously Cisco Connected Mobile Experiences, or CMX) provides wireless customers with rich location-based services, including:

◈ location analytics
◈ business insights
◈ customer engagement toolkits
◈ asset management
◈ BLE management
◈ location data APIs

Cisco DevNet has development resources on how to code with Cisco CMX solutions, which are now part of Cisco DNA Spaces.
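
For example, the classic CMX location REST API can be queried with a simple curl call. The sketch below uses the DevNet CMX sandbox host and credentials as documented in the DevNet learning labs at the time of writing; substitute your own CMX or DNA Spaces instance as appropriate.

# Retrieve the first tracked client from the DevNet CMX sandbox
# (host and learning/learning credentials per the DevNet labs; adjust for your own instance)
curl -s -u learning:learning \
  "https://cmxlocationsandbox.cisco.com/api/location/v2/clients" | jq '.[0]'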


Glance has advantages over other positioning apps

◈ Glance offers an easy way to map physical device IDs to a person/asset. If wireless network access requires ID-authentication, the administrator can batch-import user/asset profiles (such as Excel sheets) including end-user display names and Wi-Fi authentication IDs. The moment the end user’s personal device gets a network connection, the Glance back-end will automatically log him/her in. If wireless network access does not require ID-authentication, customers can check in and check out of the system themselves, so long as the administrator has batch-imported user/asset profiles including the names. The end user then uses his/her personal device to scan a QR code, access a specified check-in/check-out URL, and pick his/her name to complete the check-in process, or press the “Check out” button to check out. If the end user’s personal device doesn’t have a browser, Glance can still map the physical ID of the device to a person/asset.

◈ Meraki and DNA Spaces use different data models and coordinates. However, Glance has an indoor location service adapter to convert them into one common data model, and map the locations of people/assets to the customized multi-floor map. Therefore, Glance works on top of both Meraki and DNA Spaces.

◈ By converting the physical device IDs in Meraki and DNA Spaces into visual elements rich in properties/tags (such as people, assets, signs, facilities) that are easier to categorize and search, Glance’s customized 3-D, multi-floor map emulates the real surroundings and offers a more user-friendly interface.

◈ Third-party services can easily integrate Glance into their location services because Glance provides specified APIs in which physical device IDs are replaced with visualized elements (a hypothetical call is sketched below). Glance has also integrated third-party services such as Webex Teams messages and SMS.
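
To give a feel for such an integration, here is a purely hypothetical call; the host, endpoint path, parameter, and response format are illustrative only, so consult the Glance project documentation for the actual API.

# Hypothetical example: ask Glance where a named person currently is
# (host, path, and parameter are illustrative, not the actual Glance API)
curl -s "https://glance.example.com/api/v1/locations?name=Alice" | jq .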

Glance’s customized 3-D, multi-floor map emulates real surroundings and offers a more user-friendly interface.

Tuesday, 8 October 2019

Using CESA to Solve Endpoint Blindness for a World Class InfoSec Team


Cisco has an amazing set of products, like AMP for Endpoints and Cisco Umbrella, protecting devices from advanced malware threats. Other user and endpoint scenarios, however, remained unsolved until we introduced the recently announced Cisco Endpoint Security Analytics (CESA) solution. CESA provides an unprecedented level of endpoint and user networking visibility, built on Cisco AnyConnect Network Visibility Module (NVM) endpoint telemetry and Splunk Enterprise. Underlying the NVM technology is a protocol called nvzFlow (en-vizzy-flow) that I have blogged about in the past.

Why Did We Build CESA?


The CESA solution was originally developed by the Office of the Security CTO and then integrated into Cisco AnyConnect and Splunk products to solve a set of issues for Cisco InfoSec. Cisco InfoSec realized that getting all the endpoint visibility needed to perform incident response was a challenge. There were also endpoint security blind spots as more Cisco employees were working off premises and connecting to both enterprise and cloud resources. They needed a way to collect and store a year of data for analysis of incidents, while also getting information in real time to see what is happening in the network.

The Office of the Security CTO looks at current and future customer problems that are not being solved by existing technology and then comes up with ideas on how to solve them. My fellow co-inventors, Andrew Zawadowskiy and Donovan O’Hara from the CTO Advanced Development team, built the initial proof of concept and then worked on the final product release with the AnyConnect development team.

As we thought about ways to solve the problems Cisco InfoSec was facing, we wanted to do it in a way that built on standards technology, so that not only could Cisco Stealthwatch and Cisco Tetration support it, but it would also provide an ecosystem for key partners to participate in. This is why we chose to build on IPFIX. It is the perfect protocol on which to build the enhanced context found in nvzFlow. What do we mean by “enhanced context”?

The 5 key endpoint visibility categories conveyed by the protocol or “Enhanced Context” are:

◈ User
◈ Device
◈ Application
◈ Location
◈ Destination

At the end of the blog is a helpful table showing details of the enhanced context that is provided.

Working with Great Partners like Splunk and Samsung


A key component of CESA is Splunk Enterprise, which performs the analytics and alerting on the NVM telemetry, turning it into actionable events. The new CESA Built on Splunk product, available exclusively from Cisco, provides a Splunk package customized and priced specifically for analyzing NVM telemetry. Cisco InfoSec has been using the CESA solution for over two years now.

Splunk Enterprise is a fantastic tool. It was really easy for us to take the Cisco AnyConnect NVM data and not only import it into Splunk, but also quickly create a high-value set of dashboards and reports from the data. There are two components in the Splunk store that make up the solution: the Cisco AnyConnect Network Visibility Module (NVM) App for Splunk and the Cisco NVM Technology Add-on for Splunk. Because NVM produces so much high-value data, Splunk created a special per-endpoint license, available exclusively from Cisco, that makes budgeting predictable and saves you money.

Below is an example of the dozens of reports available in the AnyConnect NVM Splunk dashboard. As you can see, the solution provides visibility into which applications are connecting to which domains and how much data is being transmitted/received.


From there, you can drill down on a specific application and obtain finer-grained details, including the SHA256 hash of the process, the names of domains and IP addresses it connected to, what account it is running under, etc. Just click on the specific element and it will take you to an investigation page for that observable.


You can easily integrate your favorite investigation tools right into the Splunk Enterprise dashboards. For example, you can pivot from a DNS domain name observable into Cisco Umbrella, Talos Intelligence, or Cisco Threat Response with just a couple of lines of HTML. This allows you to obtain a threat disposition on the domain.


Similarly, you can take the SHA256 hash observable and pivot right into AMP for Endpoints, ThreatGrid or Cisco Threat Response. This will allow you to obtain a threat disposition on the binary.


We’ve provided those integrations for you in the default dashboards. You can easily add more just by editing them to include your favorite tools. Let us know if there is anything else that would be useful in the default screens.

Samsung has been another excellent partner from the start. We have worked closely with them on their Knox program for a number of years, with AnyConnect integrations and neat features like per-app VPN. When we explained what we wanted to do with Cisco AnyConnect NVM, they were excited to help and developed the Network Platform Analytics (NPA) framework to make it possible. It is the only framework available on mobile platforms to support Cisco AnyConnect NVM. The best part is that you can enable and provision this capability using your favorite Enterprise Mobility Management (EMM) solution – no special device mode needed! Keep an eye out for a forthcoming quick-start guide on this technology. NVM is also available on Windows, macOS, and Linux platforms.

Saturday, 5 October 2019

Configuration Compliance in DCNM 11

We discussed Using DCNM 11 for Easy Provisioning of Networks and VRFs. Today, we are continuing the discussion by featuring how DCNM enforces compliance with the configurations defined by a user.

Validation of configuration forms an integral part of any network controller. Configurations need to be pushed down from the controller to the respective switches as intended by the user. More importantly, configurations need to stay in sync and in compliance with the expressed intent at all times. Any deviation from the intended configuration has to be recognized, reported, and remediated – this approach is often described as “closed loop.” In the DCNM LAN Fabric install mode, Configuration Compliance is supported for VXLAN EVPN networks (within Easy Fabrics) as well as traditionally built networks within an External Fabric.

Configuration Compliance is embedded and integrated within the DCNM Fabric Builder for all configuration, including underlay, overlay, interfaces, and every other configuration driven through DCNM policies.

The user typically builds intent for the fabric by customizing the various fabric settings as well as a combination of best-practice and custom templates. Once the intent is saved and pushed out, DCNM periodically monitors what is running in the switches and tracks whether any out-of-band change was made to any function of the switch using the CLI or another method. If changes are made that differ from the applied intent, DCNM will mark the switches as Out-of-Sync, indicating a violation in compliance. This warns the user that the running configuration of the respective switch does not match the intent defined in DCNM. The Out-of-Sync state is indicated by a color code in the topology view, and the switch is tagged as Out-of-Sync in the tabular view which lists all the switches in a fabric.

Configuration Compliance status with color codes

While the general concept of Configuration Compliance provides a simple colored representation of the state across the nodes, DCNM also generates a side-by-side diff view of the running configuration and expected configuration for each switch.

This diff in configuration is intended to give the user a full picture of why a particular switch was marked out of compliance, aka OUT-OF-SYNC. In addition, the Configuration Compliance function provides a set of pending configurations that, once pushed to the switch using DCNM, will bring the switch back to compliance, aka IN-SYNC. The set of pending configurations is intelligently derived using a model-based approach that is agnostic to commands configured using the CLI.

Side-by-side diff generated on Out-of-SYNC

While Configuration Compliance runs periodically, DCNM also provides an on-demand option to “Re-sync” the entire fabric or individual switches to immediately trigger a compliance check.

View the demo below to see a walk-through of performing configuration compliance in DCNM 11.

Friday, 4 October 2019

Using the Cisco DNA Center SDK CLI Tool

API flexibility, with Linux shell CLI convenience


Why CLI, I hear you ask? CLI tools are really flexible and powerful. The whole concept of the Linux shell is based around a powerful set of tools that can be linked together to perform a task. Rather than needing to build a new tool from scratch (writing code), I can solve my problem by linking smaller existing tools together.


The DNA Center CLI tools provide all the flexibility of the API with the convenience of  Linux shell tools.

Installing


Both the Cisco DNA Center SDK and CLI are available via PyPI, so all that is required is a “pip install”.

I would recommend using a virtual environment. This is optional, but it means you do not require root access and it helps keep different versions of Python libraries separate. Once created, the environment needs to be activated using the “source” command.

If you logout and back in, activation needs to be repeated.

python3 -m venv env3
source env3/bin/activate

To install, you just need to install the CLI, as dnacentersdk is a dependency.

pip install dnacentercli

You are now able to use the CLI tool.

Getting Started


If you just run the CLI tool without any arguments, you will get a help message. (I have truncated it for brevity.)

$ dnacentercli
Usage: dnacentercli [OPTIONS] COMMAND [ARGS]...

  DNA Center API wrapper.

  DNACenterAPI wraps all of the individual DNA Center APIs and represents
  them in a simple hierarchical structure.

Options:
  --help  Show this message and exit.
<SNIP>

You will need to provide credentials and potentially the URL for your DNA Center. In this example I am going to use environment variables to simplify the username and password. I am also going to use the default DNAC, sandboxdnac2.cisco.com. You can change this to use your own DNA Center.

export DNA_CENTER_USERNAME='devnetuser'
export DNA_CENTER_PASSWORD='Cisco123!'
export DNA_CENTER_BASE_URL='https://sandboxdnac2.cisco.com'

# optional. Needs to be False if using a self-signed certificate
export DNA_CENTER_VERIFY="True"

# optional.  This is the default
export DNA_CENTER_VERSION="1.3.0"

To get a count of the number of network devices, I could use the following:

$ dnacentercli devices get-device-count
{"response": 14,"version": "1.0"}

The structure of the CLI follows the DNA Center SDK. To find out the valid suboptions for devices, simply use --help or run without a suboption. (I have truncated for brevity.)

$ dnacentercli devices
Usage: dnacentercli  devices [OPTIONS] COMMAND [ARGS]...

  DNA Center Devices API (version: 1.3.0).

  Wraps the DNA Center Devices API and exposes the API as native Python
  commands.

Options:
  --help  Show this message and exit.

Commands:
  add-device                      Adds the device with given...
  delete-device-by-id             Deletes the network device for...
  export-device-list              Exports the selected network...
  get-all-interfaces              Returns all available...
  get-device-by-id                Returns the network device...
  get-device-by-serial-number     Returns the network device with...
  get-device-config-by-id         Returns the device config by...
  get-device-config-count         Returns the count of device...
  get-device-config-for-all-devices
                                  Returns the config for all...
 <SNIP>

You can also format the JSON output with the -pp option. It requires an argument to indicate the level of indentation. For example (truncated for brevity):

$ dnacentercli devices get-device-list -pp 2
{
  "response": [
    {
      "apManagerInterfaceIp": "",
      "associatedWlcIp": "",
      "bootDateTime": "2019-01-19 02:33:05",
      "collectionInterval": "Global Default",

 <SNIP>

Advanced Use Cases

One common requirement is to get a CSV file of the devices in the inventory. The JSON output can be processed with the jq command. jq is a really useful utility that can parse JSON fields, for example, to produce a CSV file of the DNAC inventory with specific fields. In the example below, jq is looking at each of the list entries in the response and extracting the hostname, managementIpAddress, softwareVersion, and id fields. It then formats them as CSV. The “-r” option for jq provides raw output.

$ dnacentercli v1-3-0 devices get-device-list | jq -r '.response[] | [.hostname, .managementIpAddress, .softwareVersion, .id] |@csv'
"3504_WLC","10.10.20.51","8.5.140.0","50c96308-84b5-43dc-ad68-cda146d80290"
"leaf1.labb.local","10.10.20.81","16.6.4a","6a49c827-9b28-490b-8df0-8b6c3b582d8a"
"leaf2.labb.local","10.10.20.82","16.6.4a","d101ef07-b508-4cc9-bfe3-2acf7e8a1015"
"spine1.abc.in","10.10.20.80","16.3.5b","b558bdcc-6835-4420-bfe8-26efa3fcf0b9"
"T1-1","10.10.20.241","8.6.101.0","8cd186fc-f86e-4123-86ed-fe2b2a41e3fc"
"T1-10","10.10.20.250","8.6.101.0","a2168b2d-adef-4589-b3b5-2add5f37daeb"
"T1-2","10.10.20.242","8.6.101.0","0367820f-3aa4-434a-902f-9bd39a8bcd21"
"T1-3","10.10.20.243","8.6.101.0","8336ae01-e1a8-47ea-b0bf-68c83618de9e"
"T1-4","10.10.20.244","8.6.101.0","b65cea84-b0c2-4c44-a2e8-1668460bd876"
"T1-5","10.10.20.245","8.6.101.0","0aafed14-666b-4f9d-a172-6f169798631a"
"T1-6","10.10.20.246","8.6.101.0","e641ce97-dbba-4024-b64c-2f88620bcc23"
"T1-7","10.10.20.247","8.6.101.0","3aaffd4f-0638-4a54-b242-1533e87de9a7"
"T1-8","10.10.20.248","8.6.101.0","a4e0a3ab-de5f-4ee2-822d-a5437b3eaf49"
"T1-9","10.10.20.249","8.6.101.0","10cdbf6d-3672-4b4d-ae75-5b661fa0a5bc"

One big advantage of CLI tools is linking them together with standard Linux tools such as xargs, which provides a way to run a command based on a set of arguments. For example, if I wanted to get a dump of all of the configuration files for devices, I would need to call the “get-device-config-by-id” API for each device, giving it an argument of a device UUID (as seen above in the id field). To start with, I am just going to get the device IDs.

I am choosing switches, as the Access Points do not have a configuration (the configuration for an AP is stored on the Wireless LAN Controller).

$ dnacentercli devices get-device-list --family "Switches and Hubs"| jq -r '.response[] | .id | @text '
6a49c827-9b28-490b-8df0-8b6c3b582d8a
d101ef07-b508-4cc9-bfe3-2acf7e8a1015
b558bdcc-6835-4420-bfe8-26efa3fcf0b9

Next I use the xargs command to run a request for the configuration with each of the ids as an argument. The CLI command is dnacentercli devices get-device-config-by-id --network_device_id.

The xargs command creates an instance of get-device-config-by-id for each id. The output is quite long, so I have truncated it.

$ dnacentercli devices get-device-list --family "Switches and Hubs"| jq -r '.response[] | .id | @text ' | xargs -n1 dnacentercli devices get-device-config-by-id --network_device_id
{"response": "\nBuilding configuration...\n\nCurrent configuration : 18816 bytes\n!\n! Last configuration change at 02:02:05 UTC Sat Aug 31 2019 by admin\n!\nversion 16.6\nno service pad\nservice

Cisco DNA, Cisco Learning, Cisco Tutorial and Materials, Cisco Guides, Cisco Study Materials, Cisco Online Exam

This is an example of deleting a device with the IP address 10.10.15.200. Of course, I could turn this into a script or a shell macro and provide the IP address as an argument. I can either run a separate command to get the status, or I could link it yet again. I am going to use a separate command in this case.

$ dnacentercli devices get-network-device-by-ip --ip_address 10.10.15.200 | jq -r '.response.id | @text ' | xargs dnacentercli devices delete-device-by-id --id
{"response": {"taskId": "cc8b8c29-98cd-4cd4-8c6c-eee96af31057","url": "/api/v1/task/cc8b8c29-98cd-4cd4-8c6c-eee96af31057"},"version": "1.0"}

$ dnacentercli task get-task-by-id --task_id cc8b8c29-98cd-4cd4-8c6c-eee96af31057 -pp 2
{
  "response": {
    "endTime": 1570147128225,
    "id": "cc8b8c29-98cd-4cd4-8c6c-eee96af31057",
    "instanceTenantId": "5d817bf369136f00c74cb23b",
    "isError": false,
    "lastUpdate": 1570147117840,
    "progress": "Network device deleted successfully",
    "rootId": "cc8b8c29-98cd-4cd4-8c6c-eee96af31057",
    "serviceType": "Inventory service",
    "startTime": 1570147117759,
    "version": 1570147117840
  },
  "version": "1.0"
}
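
As mentioned, the delete-by-IP pipeline is easy to wrap in a small shell function so the IP address becomes an argument. Here is a minimal sketch using only the commands already shown:

# Delete a device from the inventory by IP address
dnac_delete_by_ip () {
  dnacentercli devices get-network-device-by-ip --ip_address "$1" \
    | jq -r '.response.id | @text ' \
    | xargs dnacentercli devices delete-device-by-id --id
}

# Usage
dnac_delete_by_ip 10.10.15.200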

The final example is using the CLI to create a site. --type and --site are required arguments. As this is a POST, we can force the call to run synchronously by passing the __runsync header. You can see the site was successfully created.

$ dnacentercli sites create-site --type "area" --site '{ "area" : { "name":"Adam","parentName":"Global"}}' --headers '{"__runsync" : true }'
{"result": {"result": {"endTime": 1570132102038,"progress": "Site Creation completed successfully","startTime": 1570132101935}},"siteId": "2ea3f4c2-04e2-4d01-8c12-4459c1e7a2c1","status": "True"}

Thursday, 3 October 2019

Tune in: “Demystifying Cisco Orchestration for Infrastructure as Code”


Automating the software development life-cycle


DevOps teams are becoming more agile, reducing costs, and delivering a superb customer experience by automating the software development life-cycle. Cisco Orchestration solutions extend the benefits of automation to the entire stack: each layer of the underlying infrastructure is delivered as Infrastructure as Code (IaC). Orchestrators reduce the complexity of programmability, operational state, and visibility. In this session, we decode the differences between domain-specific workflow automation and cross-domain orchestration. Achieving the goal of ‘Automate everything’ requires the right tool for the right use case. With use cases in mind, we will cover several Cisco orchestration solutions in their respective domains and their cross-domain capabilities. A brief demo will showcase open-source and Cisco orchestration tools working together hand-in-hand.

Level Set



What is Infrastructure as Code (IaC)?


IaC means writing imperative or declarative code to automate programmable infrastructure deployments and manage configurations. Imperative code describes how to do something, step by step, as opposed to declarative code, which describes what to do by abstracting the configuration and state. DevOps best practices such as source control, verification, and visibility are the building blocks to support infrastructure types (compute, network, storage, etc.) as code.
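
To make the distinction concrete, here is a loose sketch; the device name, CLI commands, and intent file format are all illustrative rather than tied to any particular product or tool.

# Imperative: spell out *how*, step by step (device and commands are illustrative)
ssh admin@switch1 "configure terminal ; vlan 100 ; name web ; end"

# Declarative: state *what* the end state should be and let a tool converge to it.
# An illustrative intent file (the format depends on the tool that consumes it):
#
#   vlans:
#     - id: 100
#       name: web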


Why do we need IaC?


With the advent of Continuous Integration and Continuous Delivery (CI/CD), we are able to build pipelines to automate the entire software development life-cycle (SDLC). Continuous integration (CI) is a set of tools to develop applications. Continuous delivery (CD) is the process of delivering updated software releases to infrastructure environments such as test, stage, and production. Using IaC for these platforms and environments is paramount to enabling software agility and rapid time to value. One could say IaC is the easy button for building infrastructure to deliver software or other IT services.

Orchestration

What is orchestration? Wikipedia defines orchestration as the automated arrangement, coordination, and management that defines policies and service levels through automated workflows, provisioning, and change management. In this same vein, a coffee grinder is automation, whereas a brewing machine is orchestration.

Why do we need orchestration in addition to scripting? Production-grade IaC at scale requires orchestration rather than scripting to deliver advanced features such as intent, policy, governance, and Service Level Agreements. By building IaC to include configuration management, CICD, and other advanced orchestration features, benefits similar to those of application development are now possible in large-scale technology domains (multi-cloud, containers, campus, WAN, Data Center).


With network automation in mind, we see many pitfalls with multi-threaded tasks running exclusively from scripts: Step (1) gather facts, Step (2) set conditions, Step (3) loop through items in Jinja2 templates, parse with TextFSM, and save the data to YAML files, and Step (4) push changes to devices and validate.

This level of scripted multi-threaded workflow is difficult to manage at scale. The main concerns are slow changes, configuration drift, lack of operational state, out-of-band config overwrites, and disruptive rollbacks. In spite of the current gaps, the Ansible engine is one of my favorite tools for pushing network configurations. One could argue that Tower provides a workflow for the playbooks to manage the order of these tasks, but no configuration state is possible. In order to remediate some of these gaps, the Ansible engine is adding a new ‘facts’ resource module in the upcoming 2.9 release.

Caution


Sometimes we automate ourselves into a corner with too many scripts. What happens when major platform changes are made to the scripting tools? We’ve experienced this before with Python 2.x to 3.x and the impact on many of our product SDK libraries. We are seeing it again with the major change coming to the Ansible engine in 2.9 to introduce the ‘facts’ resource module. This change requires users to rewrite their playbooks from scratch to use these features. As a caution, consider limiting scripting to single-threaded (CRUD) actions while shifting the complexities of operational state and rollback to the domain-specific orchestration engine.

Domain Orchestration

What is domain orchestration? A domain orchestration engine focuses on delivering automation targeted to a single technology domain. For instance, Cisco Network Services Orchestrator (NSO) is focused on model-driven “network” automation with NETCONF and YANG. NSO converts CLI to YANG with network element drivers (NEDs) to support a multitude of use cases, ranging from standalone network devices to network services and multiple controller domains (Meraki, Viptela, and ACI).

In a nutshell, NSO can deploy greenfield devices or sync-from brownfield devices to build a transaction-based configuration database state. Tools like the Ansible engine have modules to integrate with NSO’s northbound JSON API and harness these differentiated capabilities for operations. In the following example, we are using Ansible playbooks with the Ansible NSO/JSON module to make CRUD changes to NSO’s configuration database as a means to configure and operate tenants running on N9K EVPN/VXLAN Data Center network fabrics, rather than using CLI on standalone NX-OS. The Ansible playbooks are then version-controlled as YAML files in a git repository.


Top-level Orchestration

What is top-level orchestration? A top-level orchestration engine is used to stitch together collaboration, notifications, governance, and source control for other lower-level scripting tools and device APIs. Top-level orchestration supports use cases ranging from CICD pipelines for application development to automated infrastructure build and testing. Cisco Action Orchestrator (AO) is a powerful top-level orchestrator that enables automated workflows across technology domains and ITSM (e.g., ServiceNow). Integrations with ITSM are key for customers who need low- or no-code catalogs and templates to simplify the delivery of IT services.

Internally, Cisco relies on ITSM and AO to automate the rapid delivery of Cisco Live and DevNet Sandboxes during our customer events. In the example below, open-source tools such as GitLab work hand-in-hand with AO to create a workflow pipeline that automates the build and test of a tenant configuration across EVPN/VXLAN fabric and SD-WAN network domains.


Confusion


Are you confused by CICD pipelines and their relationship with IaC? My ‘ah-hah’ moment was the realization that many of these DevOps methodologies are not mutually exclusive but highly complementary between AppDev and IaC. As operators, we can support the automated development life-cycle with CICD pipelines. The same knowledge and tools are adaptable to automated infrastructure. Operations can adopt the tools (open source or vendor) that make sense for updating, configuring, and managing IaC in many domains.

If you look at AppDev, the CICD pipeline for software development must CODE, BUILD, TEST, and DEPLOY the software to an environment that includes infrastructure (compute, network, and storage). Do we build these infrastructure environments ahead of time manually, or automated on demand?

If the developer is not willing to wait patiently for several weeks for the infrastructure environment to test their CODE, then fully automated IaC is the only answer! A second CICD pipeline managing the configuration, versioning, and alignment of the software build to the environment (test, stage, prod) version allows us to move much more quickly and rebuild the environment later if needed.

AppDev CI/CD pipeline to IaC CI/CD pipeline

In the following example, we are using GitLab to manage an application development CICD pipeline. Upon completion, the AppDev pipeline triggers Action Orchestrator to build a second pipeline with workflows that automate the test environment and ultimately test the application stack. The idea is to test the software release in a test environment prior to pushing the same software into production. Action Orchestrator (AO) has many adapters that make it very easy to use IaC to build and test infrastructure technology domains.


Router/switch software upgrades are another use case for a network-specific CICD pipeline. With CICD, we can automate the upgrade to specific IOS software versions on devices in a version-controlled and tested environment prior to production.


Controllers

Are controllers and orchestration one and the same? NOPE… Controllers are a single API touch point and management system for software-defined systems (SD-Access, SD-WAN, and SD-Networks), managing the configuration state of the underlay, the overlay, and the underlying protocols. Controllers are similar to orchestration in providing access to configuration snapshots and rollbacks, but they are unable to compose top-level workflows with other tools. In most cases, controllers are bound to their single technology domain (campus, data center, WAN, or cloud). Oftentimes, IaC is configured adequately with only scripting and source control in a single controller domain. Suffice it to say, when expanding from a single domain to cross-domain controllers (i.e., SD-WAN and SDN), the cross-domain integration introduces a catalyst for orchestration.

The Automation Challenge



There is a broad set of technology domains, each with many use cases for IaC. In order to succeed with IaC, we first need to address our automation challenges. From there we can target each specific use case mapped to the appropriate technology domain.

Challenges:

Too many touchpoints: Need to consolidate and coordinate tasks using common automation tools.

Complexity: Need to abstract automation as much as possible to make resources consumable for the end users.

Operational Instrumentation: Need to automate and operationalize the tools into workflows that include visual dashboards, role-based access control, and other security services.

Verification: Need to make changes and check changes. With automation, we can move really fast and break things. Hence, we need the proverbial look over our shoulder versus traditional stare-and-compare configuration checks. Ideally, verification should start in a test environment and continue through production.


Community and Collaboration: Need to share finished code and avoid reinventing the wheel with every workflow.

The key takeaway for automated solutions is to strive for a sharing culture, agility, simplicity, intent, security, and lower costs.

Technology Domains and Use Cases

The following table depicts the taxonomy of several Cisco orchestration options. As depicted below, Action Orchestrator is positioned as the glue that binds the multiple technology domains together into a unified workflow.


What’s Next?


Multi Domain Policy

As our customers continue to strive for end-to-end automation, their orchestration workflows now span multiple technology domains. As these workflows evolve, we need to consolidate and coordinate tasks using a common automation platform.

A major step in the “automate everywhere” strategy is to consolidate automation on a Multi-Domain Policy (MDP) platform. Conceptually, this upcoming platform is targeted to unify the existing orchestration engines across domains with a consistent UI, catalog, unified operations, common segmentation, and consistent on-boarding, delivered on-prem or in the cloud.


AI Ops

Logs, telemetry, and health monitoring are currently used to build reactive dashboards for visibility. With the advent of AI Ops, the trend is toward predictive and self-healing operations. AI Ops platforms utilize big data, modern machine learning, and other advanced analytics technologies. This new technology, directly and indirectly, enhances IT operations functions with proactive, personal, and dynamic insight. Cisco Intersight is a SaaS addition to the portfolio of domain orchestration engines, making actionable intelligence from AI Ops available in the HyperFlex and server domains. AI Ops capabilities are road-mapped into many other orchestration engines as well.


Wednesday, 2 October 2019

Service Providers: The Quest to Attain NFV Enlightenment

The promise of Network Function Virtualisation (NFV) was to lower Total Cost of Ownership (TCO) for the network and to improve service agility (time-to-market). However, like all new technologies, the hype of expectations has subsided over time, and disillusionment has set in for network Service Providers after false starts on NFV investments.

NFV hype cycle


Fortunately, with experience comes ‘enlightenment’. Given that NFV has been deployed in live networks for a variety of functions and at various scales for years now, SPs are able to ascertain the best-practice approach and create true business value paired with ‘enhanced productivity’.

Looking forward, NFV platforms will be essential, as they serve as the foundation for upcoming architectural shifts to 5G core, edge computing (MEC), and cloud-native functions.

Best Practice for NFV Platforms


There are various approaches to building an NFV environment, from a single-vendor vertical stack to a do-it-yourself (DIY) approach that assembles a platform from various vendor and open-source software components.

NFV platform stack approaches


Over time, Service Providers have discovered that the vertical approach may not be as capital-efficient, due to its fixed configuration leaving stranded capacity assets. On the other hand, the DIY approach promises horizontal scaling, but the complexities of integration and operations create prohibitive additional costs, a struggle only the largest service providers are equipped to navigate.

Considering both options, the most efficient approach appears to be a combination of the two: packaging the platform in modules for life-cycle management, while remaining open to supporting various virtual network functions (VNFs) for horizontal scaling. This creates a platform that is open and modular, with the right balance of multi-function scaling, carrier-grade operational packaging, and a single point of ownership.

Requirements of an Open Modular NFV Platform


An NFV platform must create business value: it must be optimized to lower network TCO and increase service agility.

An open modular NFV platform achieves this with:

◈ A scalable architecture from large core to small edge locations, with common orchestration
◈ Network DC SDN that supports bare-metal, VM, and container functions
◈ End-to-end instrumentation for carrier-grade operations
◈ Modular life-cycle management for upgrades
◈ The ability to support a wide range of multi-tenanted VNFs, with an ecosystem of pre-validated VNFs and a replicable process for new VNF on-boarding

Benefits of an Open Modular NFV Platform


Based on keen observations and conversations with customers, the benefits of this approach have been proven. Personally, I have witnessed service agility (time-to-market) improvements of more than 10x, with new services pushed to launch in a matter of days instead of the several months the process used to require.

Replacing a separate vertical NFV stack deployment with a single open modular platform has shown TCO improvement of more than 30%, attributed to operations, power and rack space reductions.

These results are encouraging, with the tangible business benefits indicating that we have attained enlightenment on how NFV platforms should be built for the Service Providers of the future.

Tuesday, 1 October 2019

Threats in encrypted traffic


There was a time when the web was open. Quite literally—communications taking place on the early web were not masked in any significant fashion. This meant that it was fairly trivial for a bad actor to intercept and read the data being transmitted between networked devices.

This was especially troublesome when it came to sensitive data, such as password authentication or credit card transactions. To address the risks of transmitting such data over the web, traffic encryption was invented, ushering in an era of protected communication.

Today more than half of all websites use HTTPS. In fact, according to data obtained from Cisco Cognitive Intelligence, the cloud-based machine learning engine behind Stealthwatch—Cisco’s network traffic analysis solution—82 percent of HTTP/HTTPS traffic is now encrypted.

The adoption of encrypted traffic has been a boon for security and privacy. By leveraging it, users can trust that sensitive transactions and communications are more secure. The downside to this increase in encrypted traffic is that it’s harder to separate the good from the bad. As adoption of encrypted traffic has grown, masking what’s being sent back and forth, it’s become easier for bad actors to hide their malicious activity in such traffic.

A brief history of encrypted traffic


The concerns around security and privacy in web traffic originally led Netscape to introduce the Secure Sockets Layer (SSL) protocol in 1995. After a few releases, the Internet Engineering Task Force (IETF) took over the protocol, releasing future updates under the name “Transport Layer Security” (TLS). While the term SSL is still often used informally to refer to both, the SSL protocol has been deprecated and replaced by TLS.

The TLS protocol works directly with existing protocols and encrypts the traffic. This is where protocols like HTTPS come from: the hypertext transfer protocol (HTTP) is transmitted over SSL/TLS. While HTTPS is by far the most common protocol secured by TLS, other popular protocols, such as FTPS and SMTPS, can take advantage of it. TLS itself runs on top of TCP, and its datagram variant, DTLS, provides similar protection for UDP traffic.
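
You can see what TLS does and does not expose by inspecting a handshake yourself, for example with the standard openssl client:

# Inspect the TLS handshake to a site: the protocol version, cipher suite, and
# certificate chain are visible on the wire, but the HTTP payload is not
openssl s_client -connect www.example.com:443 -servername www.example.com < /dev/null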

Threat actors follow suit


Attackers go to great pains to get their threats onto systems and networks. The last thing they want after successfully penetrating an organization is to have their traffic picked up by network-monitoring tools. Many threats are now encrypting their traffic to prevent this from happening.

Where standard network monitoring tools might have been able to quickly identify and block unencrypted malicious traffic in the past, TLS provides a mask for the communications threats use to operate. In fact, according to data taken from Cognitive Intelligence, 63 percent of all threat incidents discovered by Stealthwatch were found in encrypted traffic.

In terms of malicious functionality, there are a number of ways that threats use encryption. From command-and-control (C2) communications, to backdoors, to exfiltrating data, attackers consistently use encryption to hide their malicious traffic.

Botnets

By definition, a botnet is a group of Internet-connected, compromised systems. Generally, the systems in a botnet are connected in a client-server or a peer-to-peer configuration. Either way, the malicious actors usually leverage a C2 system to facilitate the passing of instructions to the compromised systems.

Common botnets such as Sality, Necurs, and Gamarue/Andromeda have all leveraged encryption in their C2 communications to remain hidden. The malicious activities carried out by botnets include downloading additional malicious payloads, spreading to other systems, performing distributed denial-of-service (DDoS) attacks, and sending spam, among other things.

Botnets mask C2 traffic with encryption.

RATs

The core purpose of a remote access trojan (RAT) is to allow an attacker to monitor and control a system remotely. Once a RAT manages to implant itself into a system, it needs to phone home for further instructions. RATs therefore require regular or semi-regular connections to the internet, and they often use a C2 infrastructure to perform their malicious activities.

RATs often attempt to take administrative control of a computer and/or steal information from it, ranging from passwords, to screenshots, to browser histories. The stolen data is then sent back to the attacker.

Most of today’s RATs use encryption in order to mask what is being sent back and forth. Some examples include Orcus RAT, RevengeRat, and some variants of Gh0st RAT.

RATs use encryption when controlling a computer.

Cryptomining

Cryptocurrency miners establish a TCP connection between the computer they’re running on and a server. Over this connection, the computer regularly receives work from the server, processes it, and sends it back. Maintaining these connections is critical for cryptomining; without them, the computer would not be able to verify its work.

Given the length of these connections, their importance, and the chance that they can be identified, malicious cryptomining operations often ensure these connections are encrypted.

It’s worth noting that encryption here can apply to any type of cryptomining, both deliberate and malicious in nature. As we covered in our previous Threat of the Month entry on malicious cryptomining, the real difference between these two types of mining is consent.

Miners transfer work back and forth to a server.

Banking trojans

In order for a banking trojan to operate, it has to monitor web traffic on a compromised computer. To do that, some banking trojans siphon web traffic through a malicious proxy or exfiltrate data to a C2 server.

To keep this traffic from being discovered, some banking trojans have taken to encrypting it. For instance, the banking trojan IcedID uses SSL/TLS to send stolen data. Another banking trojan, called Vawtrak, masks its POST data traffic using a special encoding scheme that makes it harder to decode and identify.

Banking trojans encrypt the data they’re exfiltrating.

Ransomware

The best-known use of encryption in ransomware is obviously when it takes personal files hostage by encrypting them. However, ransomware threats often use encryption in their network communication as well. In particular, some ransomware families encrypt the distribution of decryption keys.

How to spot malicious encrypted traffic


One way to catch malicious encrypted traffic is through a technique called traffic fingerprinting. To leverage this technique, monitor the encrypted packets traveling across your network and look for patterns that match known malicious activity. For instance, the connection to a well-known C2 server can have a distinct pattern, or fingerprint. The same applies to cryptomining traffic or well-known banking trojans.

However, this doesn’t catch all malicious encrypted traffic, since bad actors can simply insert random or dummy packets into their traffic to mask the expected fingerprint. To identify malicious traffic in these cases, other detection techniques are required, such as machine learning algorithms that can recognize more complicated malicious connections. Threats may still manage to evade some machine learning detection methods, so implementing a layered approach covering a wide variety of techniques is recommended.
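
As a rough sketch of the kind of metadata fingerprinting can key on, the TLS ClientHello attributes in a packet capture can be extracted with tshark; the field names below follow recent Wireshark releases and may need adjusting for your version.

# Pull TLS ClientHello attributes (SNI, offered cipher suites) from a capture
# for fingerprinting; field names may vary across Wireshark versions
tshark -r capture.pcap -Y "tls.handshake.type == 1" \
  -T fields -e ip.src -e ip.dst \
  -e tls.handshake.extensions_server_name \
  -e tls.handshake.ciphersuite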

In addition, consider the following:

◈ Stealthwatch includes Encrypted Traffic Analytics. This technology collects network traffic and uses machine learning and behavioral modeling to detect a wide range of malicious encrypted traffic, without any decryption.

◈ The DNS protection technologies included in Cisco Umbrella can prevent connections to malicious domains, stopping threats before they’re even able to establish an encrypted connection.

◈ An effective endpoint protection solution, such as AMP for Endpoints, can also go a long way towards stopping a threat before it starts.