Monday, 30 September 2019

16 Cable Industry Terms You Need to Know to Get Ready for Cable-Tec 2019

SCTE’s Cable-Tec Expo 2019 is just around the corner (September 30th-October 3rd in New Orleans). Plan on a busy week given the wide range of technology on display in the exhibit hall, the 115 papers being presented in the Fall Technical Forum, numerous panel discussions on the Innovation Stage, keynote presentations during the opening General Session and annual awards luncheon, and so much more. If you’re a newcomer to the industry (or new to Cable-Tec Expo), you may find some of the jargon at the conference a bit overwhelming.


I’ve defined 16 terms that you need to know before you go to Cable-Tec 2019:

1. Multiple System Operator (MSO) – A corporate entity such as Charter, Comcast, Cox, and others that owns and/or operates more than one cable system. “MSO” is not intended to be a generic abbreviation for all cable companies, even though the abbreviation is commonly misused that way. A local cable system is not an MSO, either – although it might be owned by one – it’s just a cable system. An important point: All MSOs are cable operators, but not all cable operators are MSOs.

2. Hybrid Fiber/Coax (HFC) – A cable network architecture developed in the 1980s that uses a combination of optical fiber and coaxial cable to transport signals to/from subscribers. Prior to what we now call HFC, the cable industry used all-coaxial cable “tree-and-branch” architectures.

3. Wireless – Any service that uses radio waves to transmit/receive video, voice, and/or data in the over-the-air spectrum. Examples of wireless telecommunications technology include cellular (mobile) telephones, two-way radio, and Wi-Fi. Over-the-air broadcast TV and AM & FM radio are forms of wireless communications, too.

4. Wi-Fi 6 – The next generation of Wi-Fi technology, based upon the Institute of Electrical and Electronics Engineers (IEEE) 802.11ax standard. Wi-Fi 6 is the sixth generation of Wi-Fi (hence the “6”), and is said to support maximum theoretical data speeds upwards of 10 Gbps.

5. Data-Over-Cable Service Interface Specifications (DOCSIS®) – A family of CableLabs specifications for standardized cable modem-based high-speed data service over cable networks. DOCSIS is intended to ensure interoperability among various manufacturers’ cable modems and related headend equipment. Over the years the industry has seen DOCSIS 1.0, 1.1, 2.0, 3.0 and 3.1 (the latest deployed version), with DOCSIS 4.0 in the works.

6. Gigabit Service – A class of high-speed data service in which the nominal data transmission rate is 1 gigabit per second (Gbps), or 1 billion bits per second. Gigabit service can be asymmetrical (for instance, 1 Gbps in the downstream and a slower speed in the upstream) or symmetrical (1 Gbps in both directions). Cable operators around the world have for the past couple years been deploying DOCSIS 3.1 cable modem technology to support gigabit data service over HFC networks.

7. Full Duplex (FDX) DOCSIS – An extension to the DOCSIS 3.1 specification that supports transmission of downstream and upstream signals on the same frequencies at the same time, targeting data speeds of up to 10 Gbps in the downstream and 5 Gbps in the upstream! The magic of echo cancellation and other technologies allows signals traveling in different directions through the coaxial cable to simultaneously occupy the same frequencies.

8. Extended Spectrum DOCSIS (ESD) – Existing DOCSIS specifications spell out technical parameters for equipment operation on HFC network frequencies from as low as 5 MHz to as high as 1218 MHz (also called 1.2 GHz). Operation on frequencies higher than 1218 MHz is called extended spectrum DOCSIS, with proposed upper frequency limits ranging from 1794 MHz (aka 1.8 GHz) to 3 GHz or more! CableLabs is working on DOCSIS 4.0, which will initially spell out metrics for operation up to at least 1.8 GHz.

9. Cable Modem Termination System (CMTS) – An electronic device installed in a cable operator’s headend or hub site that converts digital data to/from the Internet to radio frequency (RF) signals that can be carried on the cable network. A converged cable access platform (CCAP) can be thought of as similar to a CMTS. Examples include Cisco’s uBR-10012 and cBR-8.

10. 5G – According to Wikipedia, “5G is the fifth generation cellular network technology.” You probably already have a smart phone or tablet that is compatible with fourth generation (4G) cellular technology, sometimes called long term evolution (LTE). Service providers are installing new towers in neighborhoods to support 5G, which will provide their subscribers with much faster data speeds. Those towers have to be closer together (which means more of them) because of plans to operate on much higher frequencies than earlier generation technology. So, what does 5G have to do with cable? Plenty! For one thing, the cable industry is well-positioned to partner with telcos to provide “backhaul” interconnections between the new 5G towers and the telcos’ facilities. Those backhauls can be done over some of our fiber, as well as over our HFC networks using DOCSIS.

11. 10G – Not to be confused with 5G, this term refers to the cable industry’s broadband technology platform of the future that will deliver at least 10 gigabits per second to and from the residential premises. 10G supports better security and lower latency, and will take advantage of a variety of technologies such as DOCSIS 3.1, full duplex DOCSIS, wireless, coherent optics, and more.

12. Internet of Things (IoT) – By one popular definition, IoT began at the point in time when more ‘things or objects’ were connected to the Internet than people. Think of interconnecting and managing billions of wired and wireless sensors, embedded systems, appliances, and more. Making it all work, while maintaining privacy and security and keeping power consumption to a minimum, is among the challenges of IoT.

13. Distributed Access Architecture (DAA) – An umbrella term that, according to CableLabs, describes relocating certain functions typically found in a cable network’s headends and hub sites closer to the subscriber. Two primary types of DAA are remote PHY and flexible MAC architecture, described below. Think of the MAC (media access control) as the circuitry where DOCSIS processing takes place, and the PHY (physical layer) as the circuitry where DOCSIS and other RF signals are generated and received.

14. Remote PHY (R-PHY) – A subset of DAA in which a CCAP’s PHY layer electronics are separated from the MAC layer electronics, typically with the PHY electronics located in a separate shelf or optical fiber node. A remote PHY device (RPD) module or circuit is installed in a shelf or node, and the RPD functions as the downstream RF signal transmitter and upstream RF signal receiver. The interconnection between the RPD and the core (such as Cisco’s cBR-8) is via digital fiber, typically 10 Gbps Ethernet.

15. Flexible MAC Architecture (FMA) – Originally called remote MAC/PHY (in which the MAC and PHY electronics are installed in a node), FMA provides more flexibility regarding where MAC layer electronics can be located: headend/hub site, node (with the PHY electronics), or somewhere else.

16. Cloud – I remember seeing a meme online that defined the cloud a bit tongue-in-cheek as “someone else’s computer.” When we say cloud computing, that often means the use of remote computer resources, located in a third-party server facility and accessed via the Internet. Sometimes the server(s) might be in or near one’s own facility. What is called the cloud is used for storing data, computer processing, and for emulating certain functionality in software that previously relied upon dedicated local hardware.


There are many more terms and phrases you’ll see and hear at Cable-Tec Expo than can be covered here. If you find something that has you stumped, stop by Cisco’s booth (Booth 1301) and ask one of our experts.

Saturday, 28 September 2019

Cable Service Providers: You Have a New Game to Play. And You Have the Edge

Time to take gaming seriously


Video gaming is huge, by any measure you choose. By revenue, it’s expected to be more than $150 billion in 2019, making it bigger than movies, TV and digital music, with a strong 9% CAGR.


And it’s not just teenagers. Two-thirds of adult Americans — your paying customers — call themselves gamers.

This makes gaming one of the biggest opportunities you face as a cable provider today. But how can you win those gamers over and generate revenue from them?

New demands on the network


A person’s overall online gaming experience depends on many factors, from their client device’s performance through to the GPUs in the data center. But the network — your network — clearly plays a critical role. And gaming-related traffic is already growing on your networks. Cisco’s VNI predicts that by 2022, gaming will make up 15% of all internet traffic.

When gamers talk about “lag” affecting their play and say they care about milliseconds of latency, they really mean overall performance: latency, jitter, and drops. Notice how latency changes the gamer’s position (red vs. blue) in the screenshot below:

Many would even pay a premium not just for a lower ping time, but also for a more stable one, with less jitter and no dropped packets. Deterministic SLAs become key.

But latency, jitter and drops aren’t the only factors here. Gamers also need tremendous bandwidth, especially for:

◈ Downloading games (and subsequent patches) after the purchase. Many games can exceed 100 GB in size!

◈ Watching others’ video gameplay on YouTube or Twitch. The most popular gamer on YouTube has nearly 35 million subscribers!

◈ Playing the games using cloud gaming services such as the upcoming Google Stadia. 4K cloud gaming could require around 15 GB per hour, twice that of a Netflix 4K movie.
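As a sanity check on those numbers, the per-hour figures convert to sustained bit rates with simple arithmetic. A quick sketch (the 7.5 GB/hour figure for a 4K movie stream is inferred from the “twice that” claim above):

```python
def gb_per_hour_to_mbps(gb_per_hour):
    """Convert a usage rate in gigabytes per hour to a sustained megabits/second rate."""
    return gb_per_hour * 8 * 1000 / 3600  # GB -> gigabits -> megabits; hours -> seconds

print(round(gb_per_hour_to_mbps(15), 1))   # ~33.3 Mbps sustained for 4K cloud gaming
print(round(gb_per_hour_to_mbps(7.5), 1))  # ~16.7 Mbps for a 4K movie stream
```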

In many cases, the upstream is as important as downstream bandwidth — an opportunity for cable ISPs to differentiate themselves on all those factors with end-to-end innovations.

Your chance to lead


As a cable ISP, you’re the first port of call for gamers looking for a better experience. You can earn their loyalty with enhanced Quality of Experience and even drive new premium service revenue from it.

There’s opportunity for you to be creative, forging new partnerships with gaming providers, hosting gaming servers in your facilities, and even providing premium SLAs for your gaming customers along with new service plans.

But there’s plenty to be done in the network to make these opportunities real.

At the SCTE Expo, we will be discussing specific recommendations for each network domain. As a teaser: in the access network domain, you need to take action to reduce congestion and increase data rates, setting up prioritized service flows for gaming to assure QoS. New technologies like Low Latency DOCSIS (LLD) will be critical for delivering the performance your customers want, optimizing traffic flows and potentially delivering sub-1 ms latency without any need to overhaul your HFC network infrastructure itself. In the peering domain, you need to … OK, let’s save that for the live discussion. We will be happy to help on all those fronts.

The cable edge is your competitive edge


Gaming is not the only low-latency use case in town. For example, Mobile Xhaul (including CBRS) and IoT applications depend on ultra-responsive and reliable network connectivity between nodes. And there are plenty of other use cases beyond gaming that are putting new strains on pure capacity, including video and CDNs.

All of these use cases will also benefit from the traffic optimization that LLD enables, but that’s only part of the solution.

IT companies of all shapes and sizes are recognizing that for many of these use cases, putting application compute closer to the customers, at the edge, is the only way forward. 

After all, the best way to reduce latency (and offer better experience) is to cut the route length by hosting application workloads as geographically and topologically close to the customers as possible. This approach also reduces the need for high network bandwidth capacity to the centralized data centers for bandwidth-heavy applications like video and CDN.
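The route-length argument can be made concrete with a back-of-the-envelope calculation: light in fiber covers roughly 200 km per millisecond, so route length alone sets a floor on round-trip latency. A sketch (real paths add queuing, serialization, and processing delay on top):

```python
FIBER_KM_PER_MS = 200.0  # light in fiber travels roughly 200 km per millisecond

def round_trip_ms(route_km):
    """Best-case propagation round trip over a fiber route of the given length.

    Ignores queuing, serialization, and processing delay, so real figures
    will always be higher.
    """
    return 2 * route_km / FIBER_KM_PER_MS

print(round_trip_ms(1000))  # 10.0 ms to a data center 1,000 km away
print(round_trip_ms(50))    # 0.5 ms to an edge hub 50 km away
```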

Imagine a gaming server colocated in a hub giving local players less than 10ms latency/jitter. Or a machine-vision application that monitors surveillance camera footage for alerts right at the edge, eliminating the need to send the whole livestream back to a central data center. The possibilities are endless.

Expand your hub sites into application cloud edge


In the edge world, your real differentiator becomes the thousands of hub sites that you use to house your CMTS/CCAP, EQAM and other access equipment — sites that SaaS companies and IT startups simply can’t replicate. Far from being a liability to shed and consolidate, this distributed infrastructure is one of your critical advantages. 

By expanding the role of your hub sites into application cloud edge sites, you can increase utilization of your existing infrastructure (for example, cloud-native CCAP), and generate revenue (for example, hosting B2B applications), both by innovating new services of your own and by giving third-party service providers access to geographic proximity to their B2B and B2C users. 

If you’re also a mobile operator, this model allows you to move many virtualized RAN functions into your hub sites, leaving a streamlined set of functions at the cell site itself (this edge cloud model is the one Rakuten is using for its 5G-ready rollout, across 4,000 edge sites).

Making cable edge compute happen


We’ve introduced the concept of Cable Edge Compute, describing how you can turn your hubs into next generation application-centric cloud sites to capture this wide-ranging opportunity.

While edge compute architectures do present a number of challenges, from physical space, power, and cooling constraints to extra management complexity and new investment in equipment, these are all solvable with the right innovations, design, and management approaches. It’s vital to approach an initiative like this with an end-to-end service mindset, looking at topics like assurance, orchestration, and scalability from the start.


Four key ingredients for cable edge compute


There are essentially four key ingredients in a cost-optimized cable edge compute architecture: 

1. Edge hardware: standardized, common-SKU modular pods with x86 compute nodes and top-of-rack switches, plus optional GPUs and other specialized processing acceleration for specific applications. Modularity, consistency, and flexibility are key here, so the platform can scale easily. 

2. Software stack: enables the edge hardware to host a wide range of virtualized applications in containers, VMs, or both, whether managed through Kubernetes, OpenStack, or something else. What’s important is to minimize the x86 CPU cores consumed by the software stack itself and to provide deterministic performance. Cisco has made this possible by combining cloud controller nodes with the compute nodes at the edge, while moving storage and management nodes to remote clusters with specific optimization and security. This optimizes the use of physical space and power in the hub site.  

3. Network fabric: provides ultra-fast connectivity for application workloads to communicate with each other and with consumers. This can be a one- or two-tier programmable network fabric based on 10GE/40GE/100GE/400GE Ethernet transport, with end-to-end IPv6 and BGP-based VPNs. 

4. SDN with automation, orchestration, and assurance: finally, this infrastructure model depends totally on SDN. Configuration and provisioning must be possible remotely, for example via intent files. At this scale, with an architecture this distributed, tasks should be zero-touch across the lifecycle. Assurance is foundational, both to guarantee appropriate per-application SLAs and to enforce policy around prioritization, security, and privacy. 


Discover your opportunity with low-latency and edge compute


In the new world of low-latency apps delivered through the edge, cable SPs are in a great position.  

And there’s never been a better opportunity to learn more about what this future holds. Cisco CX is presenting at SCTE Cable-Tec Expo on the gaming opportunity and cable edge compute, and we’ve published two new white papers that you can consult as an SCTE member.

Friday, 27 September 2019

Best Practices for Using Cisco Security APIs

Like programming best practices, growing your organization’s use of Application Programming Interfaces (APIs) comes with its own complexities. This blog post will highlight three best practices for ensuring an effective and efficient codebase. Additionally, a few notable examples from Cisco DevNet are provided to whet your appetite.

API best practices


1. Choose an organization-wide coding standard (style guide) – this cannot be stressed enough.

With a codified style guide, you can be sure that whatever scripts or programs are written will be easily followed by any employee at Cisco. The style guide utilized by the Python community at Cisco is Python’s PEP8 standard.

An example of bad vs. good coding style can be seen in the following example of dictionary element access. I am using this example because here at Cisco many of our APIs use the JSON (JavaScript Object Notation) format, which, at its most basic, is a multi-level dictionary. Both the “bad” and “good” examples will work in Python 2; however, the style guide dictates that the get() method should be used instead of the has_key() method for accessing dictionary elements (note that has_key() was removed entirely in Python 3):

Bad:

d = {'hello': 'world'}
if d.has_key('hello'):
    print d['hello']    # prints 'world'
else:
    print 'default_value'

Good:

d = {'hello': 'world'}
print d.get('hello', 'default_value') # prints 'world'
print d.get('thingy', 'default_value') # prints 'default_value'
# Or:
if 'hello' in d:
    print d['hello']

2. Implement incremental tests – start testing early and test often. For Python, incremental testing takes the form of the “unittest” unit testing framework. Using the framework, code can be broken down into small chunks and tested piece by piece. For the examples below, a potential breakdown of the code into modules could be: test network access by pinging the appliance’s management interface, run the API GET method to determine which API versions the appliance accepts, and finally parse the output of the API GET method.

In this example I will show the code which implements the unit test framework. The unit test code is written in a separate Python script and imports our Python API script. For simplicity we will assume that the successful API call (curl -k -X GET --header 'Accept: application/json' 'https://192.168.10.195/api/versions') always returns the following JSON response:

{
"supportedVersions":["v3", "latest"]
}
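A minimal sketch of that final parsing step, using the response body above together with the style guide’s preferred get() accessor:

```python
import json

# The JSON body shown above, as a string.
response_body = '{"supportedVersions": ["v3", "latest"]}'

# json.loads() turns the body into a dictionary; get() returns a default
# instead of raising KeyError if the key is missing.
versions = json.loads(response_body).get("supportedVersions", [])
print(versions)  # ['v3', 'latest']
```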

Please review the full example code at the end of this blog post.

3. Document your code – commenting is not documenting. While Python comments can be fairly straightforward in some cases, most, if not all, style guides will request the use of a documentation framework. Cisco, for example, uses PubHub. Per the Cisco DevNet team, “PubHub is an online document-creation system that the Cisco DevNet group created” (DevNet, 2019).
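As a small illustration of the difference between commenting and documenting, here is a hypothetical helper whose docstring, unlike a # comment, is surfaced by help() and harvested by documentation frameworks:

```python
def supported_versions(payload):
    """Return the list of supported API versions from a parsed JSON response.

    Docstrings like this one are surfaced by help() and harvested by
    documentation frameworks; a plain # comment is visible to neither.
    """
    return payload.get("supportedVersions", [])

print(supported_versions({"supportedVersions": ["v3", "latest"]}))  # ['v3', 'latest']
```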


DevNet Security


A great introduction to API development with Cisco Security products can be had by attending a DevNet Express Security session. (The next one takes place Oct 8-9 in Madrid.) During the session the APIs for AMP for Endpoints (A4E), ISE, ThreatGrid, Umbrella, and Firepower are explored. A simple way to test your needed API call is to utilize a program called curl prior to entering the command into a python script. For the examples below, we will provide the documentation as well as the curl command needed. Any text within ‘<‘ and ‘>’ indicates that user-specific input is required.

For A4E, one often-used API call lists and provides additional information about each computer. For A4E, a curl-instantiated API call follows this format:
curl -o output_file.json https://clientID:APIKey@api.amp.cisco.com/v1/function

To pull the list of computers, the API call looks like this:
curl -o computers.json https://<client_id>:<api_key>@api.amp.cisco.com/v1/computers
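For illustration only, here is how the same call might look in Python with the requests module. The client ID and API key are placeholders, and the live call is left commented out because it needs real credentials:

```python
CLIENT_ID = "your_client_id"  # placeholder, as in the curl example
API_KEY = "your_api_key"      # placeholder

# curl embeds the credentials in the URL; requests passes them as basic auth.
url = "https://api.amp.cisco.com/v1/computers"
print(url)

# import requests  # third-party: pip install requests
# resp = requests.get(url, auth=(CLIENT_ID, API_KEY))
# resp.raise_for_status()
# computers = resp.json()  # the same payload curl wrote to computers.json
```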

In ISE, a great API call to use queries the monitoring node for all authentication-failure reasons known to that node. To test this API call, we can run the following curl command:
curl -o output_file.json https://acme123/admin/API/mnt/FailureReasons
The API call’s generic form is: https://ISEmntNode.domain.com/admin/API/mnt/<specific-api-call>

Please note that you will need to log in to the monitoring node prior to performing the API call.
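A rough Python equivalent, assuming you reuse the session you authenticated with (the hostname below is the generic one from the format above; the live call is commented out):

```python
MNT_NODE = "ISEmntNode.domain.com"  # generic monitoring-node hostname from above
api_call = "FailureReasons"

url = "https://{}/admin/API/mnt/{}".format(MNT_NODE, api_call)
print(url)

# import requests  # third-party: pip install requests
# session = requests.Session()  # reuse the session you logged in with
# resp = session.get(url)       # the FailureReasons payload, as in the curl example
```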

A common ThreatGrid API call performed is one where a group of file hashes is queried for state (Clean, Malicious, or Unknown). The API call is again visualized as a curl command here for simplicity. In ThreatGrid, the call follows this format:
curl --request GET "https://panacea.threatgrid.com/api/v2/function?api_key=<API_KEY>&ids=['uuid1','uuid2']" --header "Content-Type: application/json"

The full request is as follows:
curl --request GET "https://panacea.threatgrid.com/api/v2/samples/state?api_key=<API_KEY>&ids=['ba91b01cbfe87c0a71df3b1cd13899a2']" --header "Content-Type: application/json"
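The same query can be sketched in Python; note that urlencode percent-encodes the bracketed ID list. The API key is a placeholder, and the live request is left commented out:

```python
from urllib.parse import urlencode

API_KEY = "example-api-key"  # placeholder; use your own ThreatGrid key
ids = "['ba91b01cbfe87c0a71df3b1cd13899a2']"  # the hash list from the curl example

base = "https://panacea.threatgrid.com/api/v2/samples/state"
url = base + "?" + urlencode({"api_key": API_KEY, "ids": ids})
print(url)

# import requests  # third-party: pip install requests
# resp = requests.get(url, headers={"Content-Type": "application/json"})
# states = resp.json()
```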

Umbrella’s Investigate API is the most used of the APIs provided by Umbrella. An interesting API call from Investigate provides us with the categories of all domains provided. As before, curl is used to visualize the API call. In this example, we want to see which categories google.com and yahoo.com fall into:

curl --include --request POST --header "Authorization: Bearer %YourToken%" --data-binary "[\"google.com\",\"yahoo.com\"]" https://investigate.api.umbrella.com/domains/categorization
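For comparison, here is how the same POST body and headers might be assembled in Python (the token is the same placeholder used in the curl command; the live request is commented out):

```python
import json

TOKEN = "%YourToken%"  # same placeholder as the curl example
domains = ["google.com", "yahoo.com"]

url = "https://investigate.api.umbrella.com/domains/categorization"
payload = json.dumps(domains)  # the body curl sends with --data-binary
headers = {"Authorization": "Bearer {}".format(TOKEN)}
print(payload)

# import requests  # third-party: pip install requests
# resp = requests.post(url, data=payload, headers=headers)
# categories = resp.json()
```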

Finally, Firepower’s Threat Defense (FTD) API. Since deploying changes is required for almost all modifications made to the FTD system, the FTD API’s most useful call deploys configured changes to the system. The curl command needed to initiate a deployment and save the output to a file is:
curl -o output_file.json -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' https://ftd.example.com/api/fdm/v3/operational/deploy

Open the file after running the command to ensure that the state of the job is “QUEUED”.
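A Python sketch of the same deployment call, including the “QUEUED” check described above (hostname taken from the curl example; the live request is commented out since it needs authentication):

```python
FTD_HOST = "ftd.example.com"  # hostname from the curl example

url = "https://{}/api/fdm/v3/operational/deploy".format(FTD_HOST)
headers = {"Content-Type": "application/json", "Accept": "application/json"}
print(url)

# import requests  # third-party: pip install requests
# resp = requests.post(url, headers=headers)     # add your auth token in practice
# assert resp.json().get("state") == "QUEUED"    # the same check as opening the file
```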


Unit Testing Example Code


apiGET.py


#! /usr/bin/python3

"""
apiGET.py is a python script that makes use of the requests module to ask
the target server what API Versions it accepts.
There is only one input (first command line argument) which is an IP address.
The accepted API versions are the output.

To run the script use the following syntax:
./apiGET.py <IP address of server>
python3 apiGET.py <IP address of server>
"""

import requests # to enable native python API handling
import ipaddress # to enable native python IP address handling
import sys # to provide clean exit statuses in python
import os # to allow us to ping quickly within python

def sanitizeInput(inputs):
    """ if there is more than one command line argument, exit """
    if len(inputs) != 2:
        print('Usage: {}  <IP address of API server>'.format(inputs[0]))
        sys.exit(1)

    """ now that there is only one command line argument, make sure it's an IP & return """
    try:
        IPaddr = ipaddress.ip_address(inputs[1])
        return IPaddr
    except ValueError:
        print('address/netmask is invalid: {}'.format(inputs[1]))
        sys.exit(1)
    except:
        print('Usage: {}  <IP address of API server>'.format(inputs[0]))
        sys.exit(1)

def getVersions(IPaddr):
    """ make sure IP exists """
    if (os.system("ping -c 1 -t 1 " + str(IPaddr)) != 0):  # IPaddr is an ip_address object, so convert to str
        print("Please enter a useable IP address.")
        sys.exit(1)

    """ getting valid versions using the built-in module exceptions to handle errors """
    r = requests.get('https://{}/api/versions'.format(str(IPaddr)), verify=False)

    try:
        r.raise_for_status()
    except:
        return "Unexpected error: " + str(sys.exc_info()[0])

    return r

if __name__ == "__main__":
    print(getVersions(sanitizeInput(sys.argv)).text + '\n')

apiGETtest.py


#! /usr/bin/python3

"""
apiGETtest.py is a python script that is used to perform
unittests against apiGET.py. The purpose of these tests is
to predict different ways users can incorrectly utilize
apiGET.py and ensure the script does not provide
an opportunity for exploitation.
"""

""" importing modules """
import os # python module which handles system calls
import unittest # python module for unit testing
import ipaddress # setting up IP addresses more logically
import requests # to help us unittest our responses below

""" importing what we're testing """
import apiGET # our actual script

""" A class is used to hold all test methods as we create them and fill out our main script """
class TestAPIMethods(unittest.TestCase):
    def setUp(self):
        """
        setting up the test
        """

    def test_too_many_CLI_params(self):
        """
        test that we are cleanly implementing command-line sanitization
        correctly counting
        """
        arguments = ['apiGET.py', '127.0.0.1', 'HELLO_WORLD!']
        with self.assertRaises(SystemExit) as e:
            ipaddress.ip_address(apiGET.sanitizeInput(arguments))
        self.assertEqual(e.exception.code, 1)

    def test_bad_CLI_input(self):
        """
        test that we are cleanly implementing command-line sanitization
        eliminating bad input
        """
        arguments = ['apiGET.py', '2540abc']
        with self.assertRaises(SystemExit) as e:
            apiGET.sanitizeInput(arguments)
        self.assertEqual(e.exception.code, 1)

    def test_good_IPv4_CLI_input(self):
        """
        test that we are cleanly implementing command-line sanitization
        good for IPv4
        """
        arguments = ['apiGET.py', '127.0.0.1']
        self.assertTrue(ipaddress.ip_address(apiGET.sanitizeInput(arguments)))

    def test_good_IPv6_CLI_input(self):
        """
        test that we are cleanly implementing command-line sanitization
        good for IPv6
        """
        arguments = ['apiGET.py', '::1']
        self.assertTrue(ipaddress.ip_address(apiGET.sanitizeInput(arguments)))

    def test_default_api_call_to_bad_IP(self):
        """
        test our API GET method which returns the supported API versions,
        in this case 'v3' and 'latest'
        this test will fail because the IP address 192.168.45.45 does not exist
        """
        with self.assertRaises(SystemExit) as e:
            apiGET.getVersions('192.168.45.45')
        self.assertEqual(e.exception.code, 1)

    def test_default_api_call_to_no_API(self):
        """
        test our API GET method which returns the supported API versions,
        in this case 'v3' and 'latest'
        this test will fail because the IP address does exist but does not have an exposed API
        """
        with self.assertRaises(requests.exceptions.ConnectionError):
            apiGET.getVersions('192.168.10.10')

    def test_default_api_call(self):
        """
        test our API GET method which returns the supported API versions,
        in this case 'v3' and 'latest'
        this test will succeed because the IP address does exist and will respond
        """
        self.assertEqual(apiGET.getVersions('192.168.10.195').text,
            '{\n    "supportedVersions":["v3", "latest"]\n}\n')

if __name__ == '__main__':
    unittest.main()

Thursday, 26 September 2019

The Artificial Intelligence Journey in Contact Centers

I would like to share some thoughts, pulled together in discussions with developers, customer care system integrators, and experts, about one of the many possible journeys to unleash the full power of Artificial Intelligence (AI) in a modern customer care architecture.

First of all, let me emphasize what the ultimate goal of Artificial Intelligence is:

“Artificial intelligence is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.”

AI in the Contact Center


In a Customer Care (CC) environment, one of the primary objectives of any AI component is to support and assist agents in their work, and potentially even to replace some of them. So how far are we from that goal today? Let me answer by pointing to a recent on-stage demo from Google:

Google Duplex: A.I. Assistant Calls Local Businesses To Make Appointments – YouTube

Seen it? While Google Duplex is clearly still in its infancy, I think it’s evident that a much wider range of applications is around the corner, and that we are not far from the moment when a virtual agent controlled by an AI engine will be able to replace real agents in many contexts, with the kind of natural conversation and flow that humans have with each other.

“Gartner projects that by 2020, 10% of business-to-consumer first level engagement requests will be taken by VCAs… today that is less than 1%….”

Google Duplex Natural Technology


The interesting piece, which leading players such as Cisco (Cognitive Collaboration Solutions) will be able to turn into an advantage for their Contact Center architectures, is the way Google Duplex works. It is essentially made of three building blocks:

1. Automatic Speech Recognition (ASR) translates the incoming audio into text
2. A Recurrent Neural Network (RNN) formulates a text answer based on the input text, features extracted from the audio, and the history of the conversation
3. A Text-to-Speech (TTS) engine converts the answer back into audio using speech synthesis technology (WaveNet)
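As a toy sketch only, the three building blocks can be wired together as a pipeline; the stage functions below are stand-ins, not real ASR/RNN/TTS implementations:

```python
def speech_to_text(audio):
    """Stage 1 stand-in: a real ASR engine would transcribe the caller's audio."""
    return "what time do you open tomorrow"

def formulate_answer(text, history):
    """Stage 2 stand-in: the RNN would combine the text, audio features, and history."""
    history.append(text)
    return "We open at 9 am tomorrow."

def text_to_speech(text):
    """Stage 3 stand-in: a TTS engine such as WaveNet would synthesize audio."""
    return text.encode("utf-8")

history = []
reply = text_to_speech(formulate_answer(speech_to_text(b"\x00\x01"), history))
print(reply)
```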


For those of you dealing with customer care, it’s rather clear how such an architecture would fit very well into an outbound contact center delivering telemarketing campaigns: this is how Google is already positioning the technology, in combination with the Google Assistant.

Apart from the wonderful capabilities offered by the front- and back-end speech/text converters, the intelligence of the system is in the Recurrent Neural Network (RNN), which analyzes the input text and context, understands it, and formulates a text answer in real time, de facto emulating the complex behavior and processes of human beings.

Most of the chatbots used in contact centers today are not even close to this approach: they do dialogue management in a traditional way, managing flows with fixed rules, similar to a complex IVR, and struggling with the complexity of natural language. Duplex (built on TensorFlow) and other solutions, including those from open-source communities of AI developers (such as Rasa Core), adopt neural networks, properly trained and tuned for a specific context, to offer incredibly natural dialogue management. The RNN needs training; in the case of Duplex, this was done using a large corpus of phone conversation data from a specific context, together with features from the audio and the history of the conversation.

Cisco and AI in Contact Centers


Some of the unique approaches explained above could make the customer care solutions of the vendors able to adopt them quite innovative and well perceived, especially when there is a solid, reliable architectural approach as a foundation. In news out last year, Cisco announced a partnership with Google:

“Give your contact center agents an AI-enhanced assist so they can answer questions quicker and better. …. we are adding Google Artificial Intelligence (AI) to our Cisco Customer Journey Solutions … Contact Center AI is a simple, secure, and flexible solution that allows enterprises with limited machine learning expertise to deploy AI in their contact centers. The AI automatically provides agents with relevant documents to help guide conversations and continuously learns in order to deliver increasingly more relevant information over time. This combination of Google’s powerful AI capabilities with Cisco’s large global reach can dramatically enhance the way companies interact with their customers.”

This whole AI new paradigm demands further thoughts:

1. Most AI technology is open source, so the value is not in the code itself but in the way it is made part of a commercial solution to meet customer needs, especially when it is based on an architectural approach such as Cisco Cognitive Collaboration.

2. The above point is further driven by the fact that it is difficult to build a general-purpose AI solution, as it will always need customization for each customer's specific context. The use cases change, and the speed of innovation is probably faster than in the mobile devices' world, so it is difficult to manage this via centralized, traditional R&D. It fits better with a community approach made of developers, system integrators, and business experts, such as the Cisco ecosystems.

3. Rather than coding AI software, the winning factor is the ability of vendors like Cisco to leverage an environment made of ecosystem partners, system integrators, and customer experts to surround and enrich the core Customer Care architecture offerings.

The availability of ecosystem partners, able to package specific CONVERSATIONAL engines, specialized for certain contexts and the role of system integrators, able to combine those engines into the Cisco Customer Care architecture to meet the customer needs, are KEY CISCO competitive advantages in the coming AI revolution.

Wednesday, 25 September 2019

The Circus is Coming to Town and Why You Should Stay Away

We are entering the integrated era


You’ve probably noticed the recent headlines of a few one-trick ponies getting together to form their own three ring circus.  These events underscore a paradigm shift that is underway – the security world is entering the integrated era.  Nowadays, customers want comprehensive solutions with seamless integrations across their endpoint, cloud and email security programs.  Standalone vendors are just now realizing this and are scrambling to partner up with one another to satisfy the market’s demands.  As an ambassador of Cisco’s integrated security portfolio, I would like to formally address these three vendors by saying: Congratulations – you finally realized what your customers need.  But let me issue a caution: you’re going about it all wrong!

The new reality


A lot of things have fundamentally changed how users work today.  Applications, data, and user identities have moved to the cloud, branch offices connect directly to the internet, and many users work off-network.  This has given users an unprecedented ability to access, create, and share information online, which has concomitantly increased the risk of them exposing sensitive information.  Additionally, advanced attackers have matured beyond the traditional defense models that rely on a patchwork of point solutions – they no longer rely on a single attack technique, and instead use multipronged approaches that may combine email phishing, fileless malware, and malicious websites.

Practitioners must protect against internet-borne threats and phishing attacks, maintain control of and visibility into their endpoints, and be able to quickly respond to incidents that arise – that's a tall order for many reasons.  First, the average enterprise has 75 security tools running in its environment.  Second, most of these tools don't communicate with one another.  The sheer volume and complexity associated with responding to this information overload while simultaneously trying to correlate disparate datasets across multiple disaggregated sources is daunting. Security teams often find themselves drowning in a deluge of data and facing unmanageable workloads that make it nearly impossible for them to do their jobs well.  This leaves them feeling overwhelmed and unmotivated, and further undermines cyber risk management by increasing the likelihood that they will not respond fast enough to the threats that matter most, or will miss them altogether.  Additionally, 79% of respondents in Cisco's 2019 CISO Benchmark Report said it was somewhat or very challenging to orchestrate alerts from multiple vendor products.  To paraphrase, this implies that 79% of the security community does not view ‘Frankensteining’ multiple point products together as a solution to their problems!

Now, don’t get me wrong – I love animals, am an avid fan of the Ringling Brothers, and think that one-trick ponies getting together is abso-friggin-lutely adorable.  But frantically moving from console to console while correlating disparate threat data is a myopic approach that doesn’t solve the underlying problem.  The inconvenient reality is that there always are and always will be threats to respond to, and with attack surfaces continually growing, the problem is only getting more complex.  The only way to stand up to advanced attacks is by taking a highly integrated architectural approach to security.

Successful security integrations require, at a minimum, these five things – everything else will fail sooner or later:


1. Comprehensive coverage – Platforms that cover major threat vectors, like web and email security, span across all endpoints, and integrate with network security tools.

2. Intelligence sharing & automated response – Actionable threat intelligence that is shared amongst all incorporated solutions for data enrichment purposes, so that responses are automated (rather than ‘suggested’) and if a threat is seen once anywhere, it is immediately blocked everywhere.

3. Centralization – Features and capabilities that allow users to consolidate information from multiple solutions on a single pane from which they can dynamically pull observables about unknown threats and kick off investigations.

4. Improved time to remediation (TTR) – Proven ability to significantly reduce TTR to enable SecOps teams to work more quickly and efficiently, thus decreasing likelihood of an incident becoming a full-blown breach.

5. Reliable integration – Integrations that wouldn’t disappear because one company changed their mind regarding their strategic direction or got acquired.
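Requirements 2 and 3 – shared intelligence with a "seen once, blocked everywhere" response, consolidated in one place – can be sketched as a shared intelligence bus. The class and method names below are illustrative placeholders, not an actual Cisco or Talos API:

```python
# Sketch of "seen once, blocked everywhere": a shared intelligence bus
# propagates an indicator observed by one tool to every integrated tool,
# and keeps a central record an analyst could query from a single pane.
# All names here are illustrative, not a real product API.

class SecurityTool:
    def __init__(self, name):
        self.name = name
        self.blocklist = set()

    def block(self, indicator):
        self.blocklist.add(indicator)

class IntelBus:
    def __init__(self):
        self.tools = []
        self.blocked = set()   # central record (the "single pane")

    def register(self, tool):
        self.tools.append(tool)

    def report(self, indicator):
        """One tool sees a threat; every integrated tool blocks it."""
        self.blocked.add(indicator)
        for tool in self.tools:
            tool.block(indicator)

bus = IntelBus()
email, endpoint, web = (SecurityTool(n) for n in ("email", "endpoint", "web"))
for t in (email, endpoint, web):
    bus.register(t)

bus.report("evil.example.com")   # seen once by any tool...
print(all("evil.example.com" in t.blocklist for t in (email, endpoint, web)))
# True – ...blocked everywhere
```

The design point is that the response is automated at the bus, not "suggested" to each tool individually – which is exactly what a patchwork of non-communicating products cannot do.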

Security that works together for the integrated era

Fortunately, at Cisco, we foresaw this paradigm evolution years ago and invested in building a seamlessly integrated security platform across our SIG, email security, endpoint security, and advanced sandboxing solutions, along with our network security tools like IPS and NGFW.  Backed by Cisco Talos – the largest non-governmental threat intelligence organization on the planet – real-time threat intelligence is shared amongst all incorporated technologies to dynamically automate defense updates so that if a threat is seen once, it is blocked everywhere.  Teams can also kick off threat investigations and respond to incidents from a single console via Cisco Threat Response (CTR), a tool that centralizes information to provide unified threat context and response capabilities.  In other words, Cisco's integrated security portfolio, underscored by Threat Response, streamlines all facets of security operations to directly address security teams' most pressing challenges by allowing them to:

◈ Prioritize – SecOps teams can pinpoint threat origins faster and prioritize responding to the riskiest threats in their environment.

◈ Block more threats – Threat Response automates detection and response across different security tools from a single console, which allows SecOps teams to operate more efficiently and avoid burnout.

◈ Save time – Threat intelligence from Talos is shared across all integrated tools, so that you can see a threat once and block it everywhere.

As the largest cybersecurity vendor in the world, only Cisco has the scale, breadth and depth of capabilities to bring all of this together with Threat Response – and best of all, it’s FREE!  Cisco Threat Response is included as an embedded capability with licenses for any tool in Cisco’s integrated security architecture.

Let’s compare the following two scenarios:


Scenario 1 – A patchwork of non-integrated security tools:

Security teams must review alerts from multiple solutions and correlate disparate datasets from various disaggregated sources to investigate each threat.  They triage and assign priorities, and perform complex tasks with tremendous urgency, with the goal of formulating an adequate response strategy based on situational awareness, threat impact, the potential scope of compromise, and the criticality of the damage that can ensue.  This process is laborious, error-prone, and time-consuming, requiring an analyst to manually swivel through multiple consoles quickly.  We've run internal simulations in which all of this takes around 32 minutes on average.  SOC analysts are left drained, and high-severity threats risk being overlooked.

Scenario 2 – Cisco’s integrated security platform:

Security teams see an aggregated set of alerts from multiple Cisco security tools in Threat Response's interface.  The alerts are automatically compared against intelligence sources, and SOC analysts can visualize a threat's activities across different vectors, kick off an investigation, pinpoint the threat's origin, and take corrective actions immediately – all from a single console.  In our internal simulations this took 5 minutes.

Bottom line: Cisco’s integrated portfolio with Threat Response brings the time it takes to deal with a potential threat down from 32 minutes to 5 minutes – roughly 84% faster than the non-integrated patchwork scenario!
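The bottom-line figure follows directly from the two simulation timings:

```python
# Relative time saved moving from the 32-minute patchwork workflow
# to the 5-minute integrated one.
patchwork_min = 32
integrated_min = 5

speedup_pct = (patchwork_min - integrated_min) / patchwork_min * 100
print(f"{speedup_pct:.1f}% faster")  # 84.4% faster
```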

Tuesday, 24 September 2019

5 Steps to Creating a Powerful Culture of Video to Impact Your Business, Brand, and Bottom Line

Regardless of whether you’re B2B or B2C, there is a megatrend afoot and the reality is few businesses are ready for it.

Case in point, the recent “Cisco Visual Networking Index: Forecast and Methodology, 2016–2021” states that video will constitute over 80% of internet traffic by the year 2020.

Think about that for a second.

It’s a truly profound movement.

Furthermore, ask yourself two simple questions:

How much of our company website is video right now?
How often do we integrate video into our sales process?

Both questions, when answered honestly, can be incredibly telling.

But fret not—if your business is struggling with video then it’s highly likely all your competitors are feeling the same pains when it comes to this ever-evolving media.

So, let’s say we accept this trend for what it is and we want to truly establish a culture of video within our organizations. What can we do? Who needs to be involved? And how do we get started?

As an owner of a digital sales and marketing agency, IMPACT, I went to my team a few years ago with a simple challenge. I told them:

“Video is changing the world. Our clients aren’t ready for it. They don’t know how to get started and they certainly don’t know what to do to see a true ROI (return on investment) from video. And let me tell you, outsourcing is not the future of video. They need to learn this in-house, and we’re going to teach it to them.”

My statement wasn’t exactly celebrated at first. Fact is, most agencies “do” video for you. Almost none teach you how to do video, as well as integrate it into your sales and marketing process. With the cost of hiring an agency to do video so high, rarely does a company produce enough videos, consistently, to have a major impact on their bottom line.

Over this time of helping clients establish this culture, we’ve seen clearly there are five essential steps to creating a culture of video within your organization. When any of these are skipped, the company rarely reaches its potential. But when followed properly, the results can be astounding. Here they are:

5 Steps to Creating a Culture of Video In-House and Getting Dramatic Results:


1. Everyone Must Understand the What, How, and Why of Video


Like anything important in business, getting buy-in to video is a must. But this doesn’t happen by sending out a company memo saying, “We’re going to do video.” No, team members must be taught how buyers are changing, why video isn’t going away, and what that means for sales, marketing, customer service, and more.

2. See Video, Primarily, as a SALES Tool


Yes, you did read that correctly. Too often video is cast aside as just another marketing activity that costs a lot while producing little fruit. This should absolutely NOT be the case. But if you approach your entire video strategy with a sales focus, it will not only guide you on the types of videos you should produce, but also let you see immediate results from the work you produce.

3. Obsess Over Answering Customers’ Questions with Video


Trying to figure out what your video editorial calendar and priorities are supposed to be? Start with your sales team. Specifically, brainstorm the most common questions they answer on every sales call. If you’re really listening to what your customers are asking, then your video content will resonate, which is why everything from the cost of your products/services to what makes your company truly different needs to be shown, and not just stated, on video. By so doing, you’ll find your prospects are more educated, informed, and certainly further down the buyer’s funnel.

4. Hire an In-House Videographer


Without question, the biggest difference between the companies that are doing GREAT things with video, versus the ones that are not, comes down to their willingness (and faith) to hire an in-house videographer. You may be thinking to yourself, “Our business could never do that. And how would we even keep this type of person busy?” Well, once you start focusing in on the questions your customers are asking, you’ll quickly see there are likely hundreds of videos for you to produce. In fact, at my agency we’ve seen over and over again that when companies produce 100 videos or more a year (assuming they are focused on the right types of videos), their business, brand, and bottom-line explode.

100 videos? Yes, you read that correctly. Which is also why it’s so much more financially sound to hire someone in-house in lieu of outsourcing. And if you’re wondering, the average compensation we see for a videographer in the marketplace is between $40-60k—way less than the cost of a typical agency, but definitively more fruitful over the course of time.

5. Teach Your Sales Team How to Use Video in the Sales Process


Few things can shorten a sales cycle as well as a powerfully produced video that focuses in on the typical questions and concerns that a buyer has. This being said, most sales people don’t naturally understand how to integrate video content into the sales process. For this reason, it’s imperative you take the time to teach such skills as using 1-to-1 video in email, what types of content to use before and after sales calls, and even how to effectively communicate on camera.

So there you have it. Five proven steps to creating a culture of video within your organization.

Remember, the key starting point here is the acceptance that this critical medium isn’t going away. It’s the rising tide, and each and every one of us can either lose business (and buyer trust) because of an unwillingness to embrace it, or we can accept that buyers have changed and we therefore must “show it.”

Sunday, 22 September 2019

Three Services from 5G: More, Better!

5G is a capital improvement project the size of the entire planet, replacing architecture created this century with another one that aims to lower energy consumption and maintenance costs while increasing efficiencies. In 2035, 5G is projected to enable $12.3 trillion of global economic output and support 22 million jobs. It’s a gamble, for sure, on the future of transmission technology — contingent upon consumers’ willingness to upgrade.

5G will introduce three new use cases, the most publicized of which offers users theoretical speeds of tens of Gbps (in reality, closer to 1 Gbps). In addition to this use case, 5G will allow you to get off the mobile network faster and will provide enhanced access into cloud services like AWS, Azure, or GCP. This allows you to take advantage of the network right into your cloud, with services such as mobile office applications, interactive entertainment, and games (think AR/VR).

With all of the speculation and possibilities for 5G and what it really means, we’re taking a stab at showing why it really does matter to Web Scale companies.

Each of the blogs in this series covers the web applications of these new services and how they will help Web players intelligently utilize the virtual 5G core in their clouds, or partner with existing service providers to enable new Web offerings, and potentially more!

Technically, the three 5G use cases, shown in the following graphic, are:

◈ Enhanced Mobile Broadband (eMBB): Bandwidth-driven use cases needing high data rates across a mobile wide coverage area.

◈ Ultra-Reliable Low Latency Communications (URLLC): Exacting requirements on latency and reliability for mission critical applications like autonomous vehicle (V2X), remote surgery/healthcare (e.g. Da Vinci surgical system), or time-critical factory automation (e.g. semiconductor fabrication).

◈ Massive Machine Type Communications (mMTC): Providing connectivity to a large number of devices that intermittently transmit small amounts of traffic, as found in next-gen Internet of Things (IoT) systems.

Part 1 – eMBB (enhanced Mobile Broadband)

The following features are supported by the 5G eMBB use case:

◈ Peak data rate: 10 to 20 Gbps
◈ 100 Mbps wherever needed
◈ 10,000 times more traffic
◈ Support for macro and small cells
◈ Support for high mobility, up to about 500 km/h
◈ Network energy savings of up to 100 times

The first marketable application of 5G focuses on eMBB, which provides greater bandwidth and decreased latency compared with 4G LTE. This will help advance today’s mobile broadband services, such as emerging AR/VR media and applications (think augmented learning, or entertainment including 360-degree streaming and 3D virtual meetings).

But eMBB is not solely about the consumption of multimedia content for entertainment purposes. The uses are vast from supporting a telecommuter (like me) using cloud-based apps while traveling and remote workers located anywhere to communicate back to the office or to an entire “smart” office where all devices are connected seamlessly and wirelessly. Ultimately, it will enable applications from fully immersive VR and AR to real-time video monitoring and 360-degree virtual meetings, real-time interaction, and even real-time translation for participants speaking different languages.

eMBB will also evolve to work across all connected devices, not just smartphones. The video-capable devices we use every day will increase drastically, from the VR and AR goggles that are fast becoming pervasive in our everyday lives to everything offering video services (including IoT devices).

You can also have dedicated and higher priority connections or bearers in 5G per application for an incredible customer experience, and that capability will be covered in part 2.

Saturday, 21 September 2019

Fundamentals of Cisco DNA Center Plug-and-Play – Day 0 Networking

Background


Network Plug-and-Play allows switches, routers, and wireless access points to be onboarded to the network. An agent in the device connects to Cisco DNA Center and downloads the required software and device configuration.

In order for this to be truly zero-touch, a network connection is needed. For APs and routers, the initial network connections are reasonably simple. With switches, there are a few more options, involving VLAN, trunking, and port-channel configuration.

I get a lot of questions about the different options, so I will document the most common ones here.

Plug and Play


I am going to assume you are familiar with PnP and know there is an initial discovery phase, where the device discovers Cisco DNA Center, after which a configuration template can be pushed down to the device. All communication is from the device to Cisco DNA Center, which means the source IP address of the PnP device can change. This is significant if you want to change from a DHCP address to static, or even change the IP address/interface used for management.
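As background, the discovery phase is commonly bootstrapped via DHCP option 43, which points the device at Cisco DNA Center. A typical IOS DHCP pool on the upstream network might look like the following sketch – the pool name and addresses are examples, with 10.10.10.181 standing in for the Cisco DNA Center IP (the "5A1N;B2;K4;I<ip>;J80" string follows the documented PnP option 43 format: HTTP transport, IPv4 address, port 80):

```
ip dhcp pool pnp_device_pool
 network 10.10.1.0 255.255.255.0
 default-router 10.10.1.1
 option 43 ascii "5A1N;B2;K4;I10.10.10.181;J80"
```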

Use Case 1: Trunk Interface, Vlan 1 management, Single Link

Initial State. After PnP Discovery

This is the simplest use case. It requires DHCP on vlan 1 on the upstream switch; nothing else is really required. When the PnP switch boots, all interfaces are running Dynamic Trunking Protocol, so a trunk is automatically established, and vlan 1 will have DHCP enabled.

Looking at the trunk status on the PnP device, trunking has been established and vlan 1 is active.

switch#show int g2/0/1 trunk 

Port        Mode             Encapsulation  Status        Native vlan
Gi2/0/1     auto             802.1q         trunking      1

Port        Vlans allowed on trunk
Gi2/0/1     1-4094

Port        Vlans allowed and active in management domain
Gi2/0/1     1

Port        Vlans in spanning tree forwarding state and not pruned
Gi2/0/1     1

The configuration will push a static IP address for vlan 1. Because the DHCP address is changed to a static IP, a default route needs to be added. The uplink is hard-coded as a trunk, but this is optional. I have not included any credentials in the configuration, as this is done automatically as part of provisioning.

hostname 3k-stack
int vlan 1
ip address 10.10.1.100 255.255.255.0
ip route 0.0.0.0 0.0.0.0 10.10.1.1

int g2/0/1
switchport mode trunk

The final switch configuration will be as follows:

Final State, After PnP Provisioning

Use Case 2: Trunk interface, Vlan 15 management, single link


In this case, I want to use vlan 15 for management instead of vlan 1 (this could be any VLAN number; I just chose 15). This can be achieved in two ways:

◈ I could switch over to vlan 15 in my template
◈ I could use the pnp startup-vlan command on the upstream switch to cause the PnP switch to create vlan 15.

The second option is really useful as it simplifies the deployment. Once I add the “pnp startup-vlan 15” command, any PnP switch will have vlan 15 created and the uplink converted to a trunk with vlan 15 enabled. This process uses CDP under the covers to communicate with the PnP device, and a process on the device creates the vlan and enables DHCP.

Initial State: PnP Discovery

Looking at the state of the uplink, you can see that vlan 15 is active on the trunk.

Switch#show int g2/0/1 trunk

Port        Mode             Encapsulation  Status        Native vlan
Gi2/0/1     on               802.1q         trunking      1

Port        Vlans allowed on trunk
Gi2/0/1     15

Port        Vlans allowed and active in management domain
Gi2/0/1     15

Port        Vlans in spanning tree forwarding state and not pruned
Gi2/0/1     15

I can then push a configuration to convert the DHCP address to a static IP.

int vlan 15
ip address 10.10.15.200 255.255.255.0
ip route 0.0.0.0 0.0.0.0 10.10.15.1

Final State, After PnP Provisioning

Use Case 3: Trunk interface, Vlan 15 management, link aggregation

In this case, there are two links in a bundle. This has been configured on the upstream switch. The same process that creates the management vlan 15 will also create an EtherChannel on the PnP device. Only one interface will be added to the bundle.

Initial State: PnP Discovery

The port channel contains a single member.

switch#show int g2/0/1 ether
Port state    = Up Mstr Assoc In-Bndl 
Channel group = 1           Mode = Active          Gcchange = -
Port-channel  = Po1         GC   =   -             Pseudo port-channel = Po1
Port index    = 0           Load = 0x00            Protocol =   LACP

Flags:  S - Device is sending Slow LACPDUs   F - Device is sending fast LACPDUs.
        A - Device is in active mode.        P - Device is in passive mode.

Local information:
                            LACP port     Admin     Oper    Port        Port
Port      Flags   State     Priority      Key       Key     Number      State
Gi2/0/1   SA      bndl      32768         0x1       0x1     0x202       0x3D  

 Partner's information:

                  LACP port                        Admin  Oper   Port    Port
Port      Flags   Priority  Dev ID          Age    key    Key    Number  State
Gi2/0/1   SA      32768     7c95.f3bd.2a00   4s    0x0    0x1    0x106   0x3D  

Age of the port in the current state: 0d:00h:01m:57s

In this case, all I need to do is configure the other port into the bundle.

int vlan 15
ip address 10.10.15.200 255.255.255.0
ip route 0.0.0.0 0.0.0.0 10.10.15.1
int g2/0/2
 switchport trunk allowed vlan 15
 switchport mode trunk
 channel-group 1 mode active

Final State: Post PnP Provisioning

Then the two ports will be in a bundle.

show int port-channel 1 etherchannel 
Port-channel1   (Primary aggregator)

Age of the Port-channel   = 0d:00h:09m:06s
Logical slot/port   = 12/1          Number of ports = 2
HotStandBy port = null 
Port state          = Port-channel Ag-Inuse 
Protocol            =   LACP
Port security       = Disabled

Ports in the Port-channel: 

Index   Load   Port     EC state        No of bits
------+------+------+------------------+-----------
  0     00     Gi2/0/1  Active             0
  0     00     Gi2/0/2  Active             0

Time since last port bundled:    0d:00h:01m:49s    Gi2/0/2
Time since last port Un-bundled: 0d:00h:09m:03s    Gi2/0/1

Management interface switchover


It is also possible to do discovery and deployment via the management interface. On a Catalyst 9K this will be Gig0/0. This interface is in a different VRF, so you need to take that into account. The communication back to DNAC will be via this interface, as will the discovery that takes place once the device is provisioned. If you change over to in-band management, you need to change the ‘ip http client source-interface’ command to reflect the new interface. This could be a loopback or an SVI.

Remember, if you switch the source interface, it needs to have a route back to DNAC. This is also the IP address that will be used to add the device to the inventory.
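For example, switching management over to a loopback might look like the following sketch (the interface number and address here are assumptions for illustration, not from an actual deployment):

```
interface Loopback0
 ip address 10.10.100.1 255.255.255.255
ip http client source-interface Loopback0
```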

USB bootstrap


The other challenge you may have is no access to DHCP. In this case, ISR routers and Catalyst 9K switches support a USB bootstrap. You can place a configuration file called ‘ciscortr.cfg’ in the root of a USB drive, and the switch will execute those commands when it boots. This file needs to contain a way to get IP connectivity and the PnP profile for the device to connect to DNAC. Then the normal PnP process will take over.

vlan 15
int vlan 15
ip address 10.10.15.200 255.255.255.0
ip route 0.0.0.0 0.0.0.0 10.10.15.1
no shut
pnp profile BOOTSTRAP
transport http ipv4 10.10.10.181 port 80