Thursday 3 October 2019

Tune in: “Demystifying Cisco Orchestration for Infrastructure as Code”


Automating the software development life-cycle


DevOps teams are becoming more agile, reducing costs, and delivering a superb customer experience by automating the software development life-cycle. Cisco Orchestration solutions extend the benefits of automation to the entire stack, delivering each layer of the underlying infrastructure as code (IaC). Orchestrators reduce the complexity of programmability, operational state, and visibility. In this session, we decode the differences between domain-specific workflow automation and cross-domain orchestration. Achieving the goal of "automate everything" requires the right tool for the right use case. With use cases in mind, we will cover several Cisco orchestration solutions in their respective domains, along with their cross-domain capabilities. A brief demo will showcase open source and Cisco orchestration tools working together hand-in-hand.

Level Set



What is Infrastructure as Code (IaC)?


IaC means writing imperative or declarative code to automate programmable infrastructure deployments and manage configurations. Imperative code specifies step-by-step how to do something, whereas declarative code specifies what the end state should be, abstracting away the configuration and state details. DevOps best practices such as source control, verification, and visibility are the building blocks that support treating infrastructure types (compute, network, storage, etc.) as code.
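The contrast between the two styles can be sketched in a few lines of Python. This is an illustrative example only: the device object, VLAN model, and reconciler are invented for the sketch, not a real Cisco API.

```python
# Hypothetical sketch: the device handle, VLAN model, and reconciler
# below are invented for illustration, not a real Cisco API.

# Imperative style: spell out each step, in order.
def configure_vlan_imperative(device, vlan_id, name):
    device.send("configure terminal")
    device.send(f"vlan {vlan_id}")
    device.send(f"name {name}")
    device.send("end")

# Declarative style: state *what* you want; a reconciler works out the steps.
def reconcile(current_vlans, desired_vlans):
    """Return the minimal set of actions to reach the desired state."""
    actions = []
    for vlan_id, name in desired_vlans.items():
        if current_vlans.get(vlan_id) != name:
            actions.append(("create_or_update", vlan_id, name))
    for vlan_id in current_vlans:
        if vlan_id not in desired_vlans:
            actions.append(("delete", vlan_id))
    return actions

# Only the missing VLAN 20 needs an action; VLAN 10 is already correct.
actions = reconcile({10: "users"}, {10: "users", 20: "servers"})
```

The declarative version is what makes source control useful for infrastructure: the desired state lives in the repository, and the tooling computes the steps.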


Why do we need IaC?


With the advent of Continuous Integration and Continuous Delivery (CI/CD), we are able to build pipelines to automate the entire software development life-cycle (SDLC). Continuous integration (CI) is the practice of frequently merging code changes and automatically building and testing them. Continuous delivery (CD) is the process of delivering updated software releases to infrastructure environments such as test, staging, and production. Using IaC for these platforms and environments is paramount to enabling software agility and rapid time to value. One could say IaC is the easy button for building infrastructure to deliver software or other IT services.
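A CI/CD pipeline is, at its core, an ordered series of stages applied to a build artifact. The sketch below models that idea in plain Python; the stage names and no-op steps are invented stand-ins, not any specific CI system's API.

```python
# Minimal sketch of a CI/CD pipeline as ordered stages. The stage names
# and the toy steps are illustrative, not a real CI system's API.

def run_pipeline(stages, artifact):
    """Run each named stage in order, threading the artifact through."""
    log = []
    for name, step in stages:
        artifact = step(artifact)
        log.append(name)
    return artifact, log

stages = [
    ("code",   lambda a: {**a, "linted": True}),
    ("build",  lambda a: {**a, "image": f"app:{a['version']}"}),
    ("test",   lambda a: {**a, "tests_passed": True}),
    ("deploy", lambda a: {**a, "environment": "test"}),
]

artifact, log = run_pipeline(stages, {"version": "1.0.0"})
```

Real systems add triggers, parallelism, and failure handling, but the shape is the same: each stage consumes the previous stage's output.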

Orchestration

What is orchestration? Wikipedia defines orchestration as the automated arrangement, coordination, and management of systems, defining policies and service levels through automated workflows, provisioning, and change management. In the same vein, a coffee grinder is automation, whereas a brewing machine is orchestration.

Why do we need orchestration in addition to scripting? Production-grade IaC at scale requires orchestration rather than scripting alone to deliver advanced features such as intent, policy, governance, and Service Level Agreements. By building IaC to include configuration management, CI/CD, and other advanced orchestration features, benefits similar to those of application development become possible in large-scale technology domains (multi-cloud, containers, campus, WAN, data center).


With network automation in mind, we see many pitfalls when multi-threaded tasks run exclusively from scripts: Step 1, gather facts; Step 2, set conditions; Step 3, loop through items in Jinja2 templates, parse the output with TextFSM, and save the data to YAML files; Step 4, push changes to devices and validate.
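The four scripted steps above can be sketched as follows. This is a toy illustration: the device names, stubbed facts, and template are invented, and stdlib `string.Template` stands in for Jinja2 so the sketch is self-contained.

```python
# Sketch of the scripted workflow described above. Device names and
# facts are invented; string.Template stands in for Jinja2.
from string import Template

TEMPLATE = Template("interface $intf\n description $desc\n no shutdown")

def gather_facts(device):          # Step 1: gather facts (stubbed here)
    return {"intf": "Ethernet1/1", "desc": f"uplink on {device}"}

def render_config(facts):          # Step 3: fill in the template
    return TEMPLATE.substitute(facts)

configs = {}
for device in ["leaf1", "leaf2"]:  # loop over a toy inventory
    facts = gather_facts(device)
    if facts["intf"].startswith("Ethernet"):    # Step 2: a simple condition
        configs[device] = render_config(facts)  # Step 4 would push & validate
```

Each step works in isolation, but as the note below explains, stitching many of these scripts together at scale is where state and rollback problems appear.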

This level of scripted multi-threaded workflow is difficult to manage at scale. The main concerns are slow changes, configuration drift, lack of operational state, out-of-band configuration overwrites, and disruptive rollbacks. In spite of these gaps, Ansible Engine is one of my favorite tools for pushing network configurations. One could argue that Tower provides a workflow for the playbooks to manage the order of these tasks, but it maintains no configuration state. To remediate some of these gaps, Ansible Engine is adding new "facts" resource modules in the upcoming 2.9 release.

Caution


Sometimes we automate ourselves into a corner with too many scripts. What happens when major platform changes are made to the scripting tools? We've experienced this before with the Python 2.x to 3.x transition and its impact on many of our product SDK libraries. We are seeing it again with the major change coming in Ansible Engine 2.9 that introduces the "facts" resource modules; this change requires users to rewrite their playbooks from scratch to use the new features. As a caution, consider limiting scripting to single-threaded (CRUD) actions while shifting the complexities of operational state and rollback to a domain-specific orchestration engine.

Domain Orchestration

What is domain orchestration? A domain orchestration engine focuses on delivering automation targeted to a single technology domain. For instance, Cisco Network Services Orchestrator (NSO) is focused on model-driven network automation with NETCONF and YANG. NSO converts CLI to YANG with network element drivers (NEDs) to support a multitude of use cases ranging from standalone network devices to network services and multiple controller domains (Meraki, Viptela, and ACI).

In a nutshell, NSO can deploy greenfield devices or sync-from brownfield devices to build a transaction-based configuration database state. Tools like Ansible Engine have modules that integrate with NSO's northbound JSON API to harness these differentiated capabilities for operations. In the following example, we use Ansible playbooks with the Ansible NSO/JSON module to make CRUD changes to NSO's configuration database as a means to configure and operate tenants running on an N9K EVPN/VXLAN data center network fabric, rather than using CLI against standalone NX-OS devices. The Ansible playbooks are then version controlled as YAML files in a git repository.
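The shape of such a northbound CRUD call can be sketched in Python. This is a hedged illustration: the host name, resource path, and tenant payload below are invented for the example, so consult NSO's northbound API documentation for the real resource models.

```python
# Hedged sketch: the base URL, resource path, and tenant payload are
# invented for illustration, not NSO's actual data model.
import json

NSO_BASE = "https://nso.example.com/restconf/data"

def build_tenant_request(tenant, vlan_id):
    """Compose a CRUD-style (here: create/replace) request for a tenant."""
    return {
        "method": "PUT",
        "url": f"{NSO_BASE}/tenants/tenant={tenant}",
        "headers": {"Content-Type": "application/yang-data+json"},
        "body": json.dumps({"tenant": {"name": tenant, "vlan": vlan_id}}),
    }

req = build_tenant_request("blue", 100)
# A real playbook or script would now send `req` to NSO, which applies
# the change transactionally across the fabric and records the state.
```

The key point is that the caller expresses intent (tenant "blue", VLAN 100) and NSO's transaction engine owns the device-level changes and rollback.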


Top-level Orchestration

What is top-level orchestration? A top-level orchestration engine is used to stitch together collaboration, notifications, governance, and source control for other lower-level scripting tools and device APIs. Top-level orchestration supports use cases ranging from CI/CD pipelines for application development to automated infrastructure build and testing. The Cisco Action Orchestrator (AO) is a powerful top-level orchestrator that enables automated workflows across technology domains and ITSM (e.g., ServiceNow). Integrations with ITSM are key for customers who need low- or no-code catalogs and templates to simplify the delivery of IT services.

Internally, Cisco relies on ITSM and AO to automate the rapid delivery of CiscoLive and DevNet sandboxes during our customer events. In the example below, open source tools such as GitLab work hand-in-hand with AO to create a workflow pipeline that automates the build and test of a tenant configuration across EVPN/VXLAN fabric and SD-WAN network domains.


Confusion


Are you confused by CI/CD pipelines and their relationship with IaC? My "aha" moment was the realization that many of these DevOps methodologies are not mutually exclusive but highly complementary between AppDev and IaC. As operators, we can support the automated development life-cycle with CI/CD pipelines. The same knowledge and tools are adaptable to automated infrastructure: operations can adopt the tools (open source or vendor) that make sense for updating, configuring, and managing IaC in many domains.

If you look at AppDev, the CI/CD pipeline for software development must CODE, BUILD, TEST, and DEPLOY the software to an environment that includes infrastructure (compute, network, and storage). Do we build these infrastructure environments ahead of time manually, or automate them on demand?

If the developer is not willing to wait several weeks for the infrastructure environment to test their code, then fully automated IaC is the only answer! A second CI/CD pipeline managing the configuration, versioning, and alignment of the software build to the environment (test, stage, prod) version allows us to move much more quickly and rebuild the environment later if needed.

AppDev CI/CD pipeline to IaC CI/CD pipeline

In the following example, we are using GitLab to manage an application development CI/CD pipeline. Upon completion, the AppDev pipeline triggers Action Orchestrator to run a second pipeline with workflows that automate the test environment and ultimately test the application stack. The idea is to test the software release in a test environment prior to pushing the same software into production. Action Orchestrator (AO) has many adapters that make it easy to use IaC to build and test infrastructure technology domains.


Router/switch software upgrades are another use case for a network-specific CI/CD pipeline. With CI/CD we can automate the upgrade of devices to specific IOS software versions in a version-controlled and tested environment prior to production.
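An upgrade pipeline of this kind boils down to gated stages: pre-check, upgrade, post-check, and rollback on failure. The sketch below is illustrative only; the device records, version string, and health checks are invented, and a real pipeline would drive device APIs instead of dictionaries.

```python
# Illustrative sketch of a network upgrade pipeline with gates. The
# target version, device records, and checks are invented examples.

TARGET_VERSION = "17.3.4"

def pre_check(device):
    """Gate: only healthy devices proceed to the upgrade stage."""
    return device["healthy"]

def upgrade(device):
    """Stand-in for the actual image install and reload."""
    return dict(device, version=TARGET_VERSION)

def post_check(device):
    """Verify the device came back on the target version and healthy."""
    return device["version"] == TARGET_VERSION and device["healthy"]

def upgrade_pipeline(devices):
    results = {}
    for name, dev in devices.items():
        if not pre_check(dev):
            results[name] = "skipped: failed pre-check"
            continue
        dev = upgrade(dev)
        results[name] = "upgraded" if post_check(dev) else "rollback"
    return results

results = upgrade_pipeline({
    "rtr1": {"version": "16.9.1", "healthy": True},
    "rtr2": {"version": "16.9.1", "healthy": False},
})
```

Version-controlling the target version and the checks is what turns an upgrade script into a repeatable, testable pipeline.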


Controllers

Are controllers and orchestration one and the same? Nope. Controllers provide a single API touch point and management system for software-defined systems (SD-Access, SD-WAN, and SD-Networks), managing the configuration state of the underlay, the overlay, and the underlying protocols. Controllers are similar to orchestration in providing access to configuration snapshots and rollbacks, but they are unable to compose top-level workflows with other tools. In most cases, controllers are bound to their single technology domain (campus, data center, WAN, or cloud). Oftentimes, IaC is configured adequately with only scripting and source control in a single controller domain. Suffice it to say, expanding from a single controller domain to cross-domain controllers (e.g., SD-WAN and SDN) introduces a catalyst for orchestration.

The Automation Challenge



There is a broad set of technology domains, each with many use cases for IaC. In order to succeed with IaC, we first need to address our automation challenges. From there we can target each specific use case mapped to the appropriate technology domain.

Challenges:

Too many touchpoints: Need to consolidate and coordinate tasks using common automation tools.

Complexity: Need to abstract automation as much as possible to make resources consumable for the end users.

Operational Instrumentation: Need to automate and operationalize the tools into workflows that include visual dashboards, role-based access control, and other security services.

Verification: Need to make changes and check changes. With automation, we can move really fast and break things, so we need the proverbial looking over our shoulder rather than traditional stare-and-compare configuration checks. Ideally, verification should start in a test environment and continue through production.
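The "check changes" half of verification is often just a diff of operational state captured before and after a change. A minimal sketch, with invented state snapshots:

```python
# Sketch of "make changes and check changes": diff operational state
# captured before and after a change. The snapshots are invented.

def diff_state(before, after):
    """Return keys whose values changed, were added, or were removed."""
    changed = {}
    for key in before.keys() | after.keys():
        if before.get(key) != after.get(key):
            changed[key] = (before.get(key), after.get(key))
    return changed

before = {"bgp_neighbors": 4, "ospf_adjacencies": 2, "vlan20": None}
after  = {"bgp_neighbors": 4, "ospf_adjacencies": 1, "vlan20": "up"}

drift = diff_state(before, after)
# vlan20 coming up is the intended change; the lost OSPF adjacency is
# the kind of side effect this check is meant to catch.
```

Running the same diff in the test environment first, as the bullet suggests, catches unintended side effects before they reach production.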


Community and Collaboration: Need to share finished code and avoid reinventing the wheel with every workflow.

The key takeaway for automated solutions is to strive for a sharing culture, agility, simplicity, intent, security, and lower costs.

Technology Domains and Use Cases

The following table depicts the taxonomy of several Cisco orchestration options. As depicted below, the Action Orchestrator is positioned as the glue to bind together the multiple technology domains into a unified workflow.


What’s Next?


Multi Domain Policy

As our customers continue to strive for end-to-end automation, their orchestration workflows now span multiple technology domains. As these workflows evolve, we need to consolidate and coordinate tasks using a common automation platform.

A major step in the "automate everywhere" strategy is to consolidate automation on a Multi-Domain Policy (MDP) platform. Conceptually, this upcoming platform is targeted to unify the existing orchestration engines across domains with a consistent UI, catalog, unified operations, common segmentation, and consistent on-boarding, delivered on-prem or in the cloud.


AI Ops

Logs, telemetry, and health monitoring are currently used to build reactive dashboards for visibility. With the advent of AI Ops, the trend is toward predictive and self-healing operations. AI Ops platforms utilize big data, modern machine learning, and other advanced analytics technologies. This technology, directly and indirectly, enhances IT operations functions with proactive, personalized, and dynamic insights. Cisco Intersight is a SaaS addition to the portfolio of domain orchestration engines, making actionable AI Ops intelligence available in the HyperFlex and server domains. AI Ops capabilities are road-mapped into many other orchestration engines as well.


Wednesday 2 October 2019

Service Providers: The Quest to Attain NFV Enlightenment

The promise of Network Function Virtualisation (NFV) was to lower Total Cost of Ownership (TCO) for the network and to improve service agility (time-to-market). However, like all new technologies, the hype of expectations has subsided over time, and disillusionment has set in for network Service Providers after false starts on NFV investments.

NFV hype cycle


Fortunately, with experience comes ‘enlightenment’, and given that NFV has been deployed in live networks for a variety of functions and at various scales for years now, SPs are now able to ascertain the best practice approach and create true business value paired with ‘enhanced productivity’.

Looking forward, NFV platforms will be essential as they serve as the foundation for upcoming architectural shifts: 5G core, edge computing (MEC), and cloud-native functions.

Best Practice for NFV Platforms


There are various approaches to building an NFV environment, from a single-vendor vertical stack to a do-it-yourself (DIY) approach that assembles a platform from various vendor and open-source software components.

NFV platform stack approaches


Over time, Service Providers have discovered that the vertical approach may not be as capital-efficient, because its fixed configuration leaves stranded capacity assets. On the other hand, the DIY approach promises horizontal scaling, but the complexities of integration and operations create prohibitive additional costs, a struggle only the largest service providers are equipped to navigate.

Considering both options, the most efficient approach appears to be a combination of the two: packaging the platform in modules for life-cycle management while remaining open to supporting various virtual network functions (VNFs) for horizontal scaling. This creates a platform that is open and modular, with the right balance of multi-function scaling, carrier-grade operational packaging, and a single point of ownership.

Requirements of an Open Modular NFV Platform


An NFV platform must create business value; optimized to lower network TCO and increase service agility.

An open modular NFV platform achieves this with:

◈ A scalable architecture for large core to small edge locations, with common orchestration
◈ Network DC SDN that supports bare-metal, VM and container functions
◈ End to end instrumentation for carrier grade operations
◈ Modular life-cycle management for upgrades
◈ The ability to support a wide range of multi-tenanted VNFs with an eco-system of pre-validated VNFs and a replicable process for new VNF on-boarding

Benefits of an Open Modular NFV Platform


Based on keen observations and conversations with customers, the benefits of this approach have been proven. Personally, I have witnessed service agility (time-to-market) improvements of more than 10x, with new services pushed to launch in a matter of days instead of the several months the process used to require.

Replacing a separate vertical NFV stack deployment with a single open modular platform has shown TCO improvement of more than 30%, attributed to operations, power and rack space reductions.

These results are encouraging, with the tangible business benefits indicating that we have attained enlightenment on how NFV platforms should be built for the Service Providers of the future.

Tuesday 1 October 2019

Threats in encrypted traffic


There was a time when the web was open. Quite literally—communications taking place on the early web were not masked in any significant fashion. This meant that it was fairly trivial for a bad actor to intercept and read the data being transmitted between networked devices.

This was especially troublesome when it came to sensitive data, such as password authentication or credit card transactions. To address the risks of transmitting such data over the web, traffic encryption was invented, ushering in an era of protected communication.

Today more than half of all websites use HTTPS. In fact, according to data obtained from Cisco Cognitive Intelligence, the cloud-based machine learning engine behind Stealthwatch—Cisco’s network traffic analysis solution—82 percent of HTTP/HTTPS traffic is now encrypted.

The adoption of encrypted traffic has been a boon for security and privacy. By leveraging it, users can trust that sensitive transactions and communications are more secure. The downside to this increase in encrypted traffic is that it’s harder to separate the good from the bad. As adoption of encrypted traffic has grown, masking what’s being sent back and forth, it’s become easier for bad actors to hide their malicious activity in such traffic.

A brief history of encrypted traffic


The concerns around security and privacy in web traffic originally led Netscape to introduce the Secure Sockets Layer (SSL) protocol in 1995. After a few releases, the Internet Engineering Task Force (IETF) took over the protocol and released future updates under the name "Transport Layer Security" (TLS). While the term SSL is often used informally to refer to both today, the SSL protocol has been deprecated and replaced by TLS.

The TLS protocol works directly with existing protocols and encrypts their traffic. This is where protocols like HTTPS come from: the Hypertext Transfer Protocol (HTTP) transmitted over SSL/TLS. While HTTPS is by far the most common protocol secured by TLS, other popular protocols, such as FTPS and SMTPS, can take advantage of it as well. TLS itself runs on top of TCP, and its sibling protocol DTLS provides similar protection for UDP-based traffic.
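In code, "HTTP over TLS" is literally a TCP socket wrapped in a TLS layer. A minimal Python sketch using the standard library (the host name is illustrative, and no connection is actually made here):

```python
# Minimal sketch: an HTTPS connection is HTTP spoken over a TLS-wrapped
# TCP socket. The host name is illustrative; no traffic is sent here.
import socket
import ssl

# Default client context: certificate verification and hostname checks on.
context = ssl.create_default_context()

def https_socket(host, port=443):
    """Wrap a TCP socket in TLS, as HTTPS does (handshake occurs on connect)."""
    raw = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    return context.wrap_socket(raw, server_hostname=host)

tls_sock = https_socket("www.example.com")
```

Any byte stream sent through `tls_sock` after connecting would be encrypted, which is exactly why network monitoring tools cannot simply read it.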

Threat actors follow suit


Attackers go to great pains to get their threats onto systems and networks. The last thing they want after successfully penetrating an organization is to have their traffic picked up by network-monitoring tools. Many threats are now encrypting their traffic to prevent this from happening.

Where standard network monitoring tools might have been able to quickly identify and block malicious unencrypted traffic in the past, TLS now provides a mask for the communications threats use to operate. In fact, according to data taken from Cognitive Intelligence, 63 percent of all threat incidents discovered by Stealthwatch were found in encrypted traffic.

In terms of malicious functionality, there are a number of ways that threats use encryption. From command-and-control (C2) communications, to backdoors, to exfiltrating data, attackers consistently use encryption to hide their malicious traffic.

Botnets

By definition, a botnet is a group of Internet-connected, compromised systems. Generally, the systems in a botnet are connected in a client-server or a peer-to-peer configuration. Either way, the malicious actors usually leverage a C2 system to facilitate the passing of instructions to the compromised systems.

Common botnets such as Sality, Necurs, and Gamarue/Andromeda have all leveraged encryption in their C2 communications to remain hidden. The malicious activities carried out by botnets include downloading additional malicious payloads, spreading to other systems, performing distributed denial-of-service (DDoS) attacks, sending spam, and more.

Botnets mask C2 traffic with encryption.

RATs

The core purpose of a RAT is to allow an attacker to monitor and control a system remotely. Once a RAT manages to implant itself into a system, it needs to phone home for further instructions. RATs require regular or semi-regular connections to the internet, and often use a C2 infrastructure to perform their malicious activities.

RATs often attempt to take administrative control of a computer and/or steal information from it, ranging from passwords, to screenshots, to browser histories. The RAT then sends the stolen data back to the attacker.

Most of today’s RATs use encryption in order to mask what is being sent back and forth. Some examples include Orcus RAT, RevengeRat, and some variants of Gh0st RAT.

RATs use encryption when controlling a computer.

Cryptomining

Cryptocurrency miners establish a TCP connection between the computer they run on and a server. Over this connection, the computer regularly receives work from the server, processes it, then sends the results back. Maintaining these connections is critical for cryptomining; without them, the computer would not be able to verify its work.

Given the length of these connections, their importance, and the chance that they can be identified, malicious cryptomining operations often ensure these connections are encrypted.

It’s worth noting that encryption here can apply to any type of cryptomining, both deliberate and malicious in nature. As we covered in our previous Threat of the Month entry on malicious cryptomining, the real difference between these two types of mining is consent.

Miners transfer work back and forth to a server.

Banking trojans

In order for a banking trojan to operate, it has to monitor web traffic on a compromised computer. To do that, some banking trojans siphon web traffic through a malicious proxy or exfiltrate data to a C2 server.

To keep this traffic from being discovered, some banking trojans have taken to encrypting this traffic. For instance, the banking trojan IcedID uses SSL/TLS to send stolen data. Another banking trojan called Vawtrak masks its POST data traffic by using a special encoding scheme that makes it harder to decrypt and identify.

Banking trojans encrypt the data they’re exfiltrating.

Ransomware

The best-known use of encryption in ransomware is obviously when it takes personal files hostage by encrypting them. However, ransomware threats often use encryption in their network communication as well. In particular, some ransomware families encrypt the distribution of decryption keys.

How to spot malicious encrypted traffic


One way to catch malicious encrypted traffic is through a technique called traffic fingerprinting. To leverage this technique, monitor the encrypted packets traveling across your network and look for patterns that match known malicious activity. For instance, the connection to a well-known C2 server can have a distinct pattern, or fingerprint. The same applies to cryptomining traffic or well-known banking trojans.
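Fingerprinting works because the TLS handshake itself is sent in the clear: attributes such as the offered protocol version, cipher suites, and extensions can be hashed into a compact signature and matched against known-bad signatures (the well-known JA3 technique uses an MD5 hash this way). The sketch below is illustrative: the attribute values and the "known bad" set are invented for the example, and real fingerprints would come from threat intelligence feeds.

```python
# Illustrative fingerprinting sketch: hash observable (cleartext) TLS
# handshake attributes and match against known-bad fingerprints. The
# attribute values and the known-bad set are invented for this example.
import hashlib

def fingerprint(tls_version, ciphers, extensions):
    """Summarize handshake attributes into a compact MD5 fingerprint."""
    summary = f"{tls_version}|{','.join(ciphers)}|{','.join(extensions)}"
    return hashlib.md5(summary.encode()).hexdigest()

observed = fingerprint("771", ["4865", "4866"], ["0", "10", "11"])

# In practice this set would be populated from threat intelligence;
# here we pretend the observed client matches a known C2 tool.
KNOWN_BAD = {observed}

is_suspicious = observed in KNOWN_BAD
```

No decryption is needed: the match is made entirely on metadata visible before encryption begins, which is also the technique's weakness, as the next paragraph explains.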

However, this doesn’t catch all malicious encrypted traffic, since bad actors can simply insert random or dummy packets into their traffic to mask the expected fingerprint. To identify malicious traffic in these cases, other detection techniques are required, such as machine learning algorithms that can identify more complicated malicious connections. Threats may still manage to evade some machine learning detection methods, so implementing a layered approach, covering a wide variety of techniques, is recommended.

In addition, consider the following:

◈ Stealthwatch includes Encrypted Traffic Analytics. This technology collects network traffic and uses machine learning and behavioral modeling to detect a wide range of malicious encrypted traffic, without any decryption.

◈ The DNS protection technologies included in Cisco Umbrella can prevent connections to malicious domains, stopping threats before they’re even able to establish an encrypted connection.

◈ An effective endpoint protection solution, such as AMP for Endpoints, can also go a long way towards stopping a threat before it starts.

Monday 30 September 2019

16 Cable Industry Terms You Need to Know to Get Ready for Cable-Tec 2019

SCTE’s Cable-Tec Expo 2019 is just around the corner (September 30th-October 3rd in New Orleans). Plan on a busy week given the wide range of technology on display in the exhibit hall, the 115 papers being presented in the Fall Technical Forum, numerous panel discussions on the Innovation Stage, keynote presentations during the opening General Session and annual awards luncheon, and so much more. If you’re a newcomer to the industry (or new to Cable-Tec Expo), you may find some of the jargon at the conference a bit overwhelming.


Here are 16 terms you need to know before you go to Cable-Tec Expo 2019:

1. Multiple System Operator (MSO) – A corporate entity such as Charter, Comcast, Cox, and others that owns and/or operates more than one cable system. “MSO” is not intended to be a generic abbreviation for all cable companies, even though the abbreviation is commonly misused that way. A local cable system is not an MSO, either – although it might be owned by one – it’s just a cable system. An important point: All MSOs are cable operators, but not all cable operators are MSOs.

2. Hybrid Fiber/Coax (HFC) – A cable network architecture developed in the 1980s that uses a combination of optical fiber and coaxial cable to transport signals to/from subscribers. Prior to what we now call HFC, the cable industry used all-coaxial cable “tree-and-branch” architectures.

3. Wireless – Any service that uses radio waves to transmit/receive video, voice, and/or data in the over-the-air spectrum. Examples of wireless telecommunications technology include cellular (mobile) telephones, two-way radio, and Wi-Fi. Over-the-air broadcast TV and AM & FM radio are forms of wireless communications, too.

4. Wi-Fi 6 – The next generation of Wi-Fi technology, based upon the Institute of Electrical and Electronics Engineers (IEEE) 802.11ax standard (the sixth 802.11 standard, hence the “6” in Wi-Fi 6), that is said to support maximum theoretical data speeds upwards of 10 Gbps.

5. Data-Over-Cable Service Interface Specifications (DOCSIS®) – A family of CableLabs specifications for standardized cable modem-based high-speed data service over cable networks. DOCSIS is intended to ensure interoperability among various manufacturers’ cable modems and related headend equipment. Over the years the industry has seen DOCSIS 1.0, 1.1, 2.0, 3.0 and 3.1 (the latest deployed version), with DOCSIS 4.0 in the works.

6. Gigabit Service – A class of high-speed data service in which the nominal data transmission rate is 1 gigabit per second (Gbps), or 1 billion bits per second. Gigabit service can be asymmetrical (for instance, 1 Gbps in the downstream and a slower speed in the upstream) or symmetrical (1 Gbps in both directions). Cable operators around the world have for the past couple years been deploying DOCSIS 3.1 cable modem technology to support gigabit data service over HFC networks.

7. Full Duplex (FDX) DOCSIS – An extension to the DOCSIS 3.1 specification that supports transmission of downstream and upstream signals on the same frequencies at the same time, targeting data speeds of up to 10 Gbps in the downstream and 5 Gbps in the upstream! The magic of echo cancellation and other technologies allows signals traveling in different directions through the coaxial cable to simultaneously occupy the same frequencies.

8. Extended Spectrum DOCSIS (ESD) – Existing DOCSIS specifications spell out technical parameters for equipment operation on HFC network frequencies from as low as 5 MHz to as high as 1218 MHz (also called 1.2 GHz). Operation on frequencies higher than 1218 MHz is called extended spectrum DOCSIS, with upper frequency limits as high as 1794 MHz (aka 1.8 GHz) to 3 GHz or more! CableLabs is working on DOCSIS 4.0, which will initially spell out metrics for operation up to at least 1.8 GHz.

9. Cable Modem Termination System (CMTS) – An electronic device installed in a cable operator’s headend or hub site that converts digital data to/from the Internet to radio frequency (RF) signals that can be carried on the cable network. A converged cable access platform (CCAP) can be thought of as similar to a CMTS. Examples include Cisco’s uBR-10012 and cBR-8.

10. 5G – According to Wikipedia, “5G is the fifth generation cellular network technology.” You probably already have a smart phone or tablet that is compatible with fourth generation (4G) cellular technology, the latter sometimes called long term evolution (LTE). Service providers are installing new towers in neighborhoods to support 5G, which will provide their subscribers with much faster data speeds. Those towers have to be closer together (which means more of them) because of plans to operate on much higher frequencies than earlier generation technology. So, what does 5G have to do with cable? Plenty! For one thing, the cable industry is well-positioned to partner with telcos to provide “backhaul” interconnections between the new 5G towers and the telcos’ facilities. Those backhauls can be done over some of our fiber, as well as over our HFC networks using DOCSIS.

11. 10G – Not to be confused with 5G, this term refers to the cable industry’s broadband technology platform of the future that will deliver at least 10 gigabits per second to and from the residential premises. 10G supports better security and lower latency, and will take advantage of a variety of technologies such as DOCSIS 3.1, full duplex DOCSIS, wireless, coherent optics, and more.

12. Internet of Things (IoT) – IoT is simply the point in time when more ‘things or objects’ were connected to the Internet than people. Think of interconnecting and managing billions of wired and wireless sensors, embedded systems, appliances, and more. Making it all work, while maintaining privacy and security, and keeping power consumption to a minimum are among the challenges of IoT.

13. Distributed Access Architecture (DAA) – An umbrella term that, according to CableLabs, describes relocating certain functions typically found in a cable network’s headends and hub sites closer to the subscriber. Two primary types of DAA are remote PHY and flexible MAC architecture, described below. Think of the MAC (media access control) as the circuitry where DOCSIS processing takes place, and the PHY (physical layer) as the circuitry where DOCSIS and other RF signals are generated and received.

14. Remote PHY (R-PHY) – A subset of DAA in which a CCAP’s PHY layer electronics are separated from the MAC layer electronics, typically with the PHY electronics located in a separate shelf or optical fiber node. A remote PHY device (RPD) module or circuit is installed in a shelf or node, and the RPD functions as the downstream RF signal transmitter and upstream RF signal receiver. The interconnection between the RPD and the core (such as Cisco’s cBR-8) is via digital fiber, typically 10 Gbps Ethernet.

15. Flexible MAC Architecture (FMA) – Originally called remote MAC/PHY (in which the MAC and PHY electronics are installed in a node), FMA provides more flexibility regarding where MAC layer electronics can be located: headend/hub site, node (with the PHY electronics), or somewhere else.

16. Cloud – I remember seeing a meme online that defined the cloud a bit tongue-in-cheek as “someone else’s computer.” When we say cloud computing, that often means the use of remote computer resources, located in a third-party server facility and accessed via the Internet. Sometimes the server(s) might be in or near one’s own facility. What is called the cloud is used for storing data, computer processing, and for emulating certain functionality in software that previously relied upon dedicated local hardware.


There are many more terms and phrases you’ll see and hear at Cable-Tec Expo than can be covered here. If you find something that has you stumped, stop by Cisco’s booth (Booth 1301) and ask one of our experts.

Saturday 28 September 2019

Cable Service Providers: You Have a New Game to Play. And You Have the Edge

Time to take gaming seriously


Video gaming is huge, by any measure you choose. By revenue, it’s expected to be more than $150 billion in 2019, making it bigger than movies, TV and digital music, with a strong 9% CAGR.


And it’s not just teenagers. Two-thirds of adult Americans — your paying customers — call themselves gamers.

This makes gaming one of the biggest opportunities you face as a cable provider today. But how can you win those gamers over and generate revenue from them?

New demands on the network


A person’s overall online gaming experience depends on lots of factors, from their client device’s performance through to the GPUs in the data center. But the network — your network — clearly plays a critical role. And gaming related traffic is already growing on your networks. Cisco’s VNI predicts that by 2022, gaming will make up 15% of all internet traffic.

When gamers talk about “lag” affecting their play and say they care about milliseconds of latency, they really mean overall performance: latency, jitter, and drops. Notice how latency changes the gamer’s position (red vs. blue) in the below screenshot:

Many would even pay a premium not just for a lower ping time, but also for a more stable one, with less jitter and no dropped packets. Deterministic SLAs become key.

But latency, jitter and drops aren’t the only factors here. Gamers also need tremendous bandwidth, especially for:

◈ Downloading games (and subsequent patches) after the purchase. Many games can exceed 100 GB in size!

◈ Watching others’ video gameplay on YouTube or Twitch. The most popular gamer on YouTube has nearly 35 million subscribers!

◈ Playing the games using cloud gaming services such as the upcoming Google Stadia. 4K cloud gaming could require around 15 GB per hour, twice that of a Netflix 4K movie.
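To put that last figure in perspective, a quick back-of-the-envelope conversion (assuming the ~15 GB per hour cited above) gives the sustained bitrate involved:

```python
# Rough sustained-bitrate estimate for 4K cloud gaming,
# assuming the ~15 GB per hour figure cited above.
GB_PER_HOUR = 15
bits = GB_PER_HOUR * 8e9        # gigabytes -> bits (1 GB = 8e9 bits here)
mbps = bits / 3600 / 1e6        # bits per second -> megabits per second
print(f"~{mbps:.1f} Mbps sustained")  # ~33.3 Mbps
```

That is a continuous stream for as long as the session lasts, not a burst.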

In many cases, the upstream is as important as downstream bandwidth — an opportunity for cable ISPs to differentiate themselves on all those factors with end-to-end innovations.

Your chance to lead


As a cable ISP, you’re the first port of call for gamers looking for a better experience. You can earn their loyalty with enhanced Quality of Experience and even drive new premium service revenue from it.

There’s opportunity for you to be creative, forging new partnerships with gaming providers, hosting gaming servers in your facilities, and even providing premium SLAs for your gaming customers along with new service plans.

But there’s plenty to be done in the network to make these opportunities real.

At the SCTE Expo, we will be discussing specific recommendations for each network domain. As a teaser: in the access network domain, you need to take action to reduce congestion and increase data rates, setting up prioritized service flows for gaming to assure QoS. New technologies like Low Latency DOCSIS (LLD) will be critical for delivering the performance your customers want, optimizing traffic flows and potentially delivering sub-1ms latency without any need to overhaul your HFC network infrastructure itself. In the peering domain, you need to … OK, let’s save that for the live discussion. We will be happy to help on all those fronts.

The cable edge is your competitive edge


Gaming is not the only low-latency use case in town. For example, Mobile Xhaul (including CBRS) and IoT applications depend on ultra-responsive and reliable network connectivity between nodes. And there are plenty of other use cases beyond gaming that are putting new strains on pure capacity, including video and CDNs.

All of these use cases will also benefit from the traffic optimization that LLD enables, but it’s only part of the solution.

IT companies of all shapes and sizes are recognizing that for many of these use cases, putting application compute closer to the customers, at the edge, is the only way forward. 

After all, the best way to reduce latency (and offer better experience) is to cut the route length by hosting application workloads as geographically and topologically close to the customers as possible. This approach also reduces the need for high network bandwidth capacity to the centralized data centers for bandwidth-heavy applications like video and CDN.

Imagine a gaming server colocated in a hub giving local players less than 10ms latency/jitter. Or a machine-vision application that monitors surveillance camera footage for alerts right at the edge, eliminating the need to send the whole livestream back to a central data center. The possibilities are endless.
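Propagation delay alone puts a physics floor under those numbers: light in fibre covers roughly 200,000 km per second, so every 100 km of path adds about 1 ms of round-trip time. A small illustrative sketch (the distances are made up):

```python
# Back-of-the-envelope round-trip propagation delay over fibre.
# Signals in fibre travel at roughly 2/3 the speed of light, ~200,000 km/s.
FIBRE_KM_PER_SEC = 200_000

def rtt_ms(path_km):
    """Round-trip propagation delay in milliseconds (physics floor only;
    queuing, serialization and processing delays come on top)."""
    return 2 * path_km / FIBRE_KM_PER_SEC * 1000

print(rtt_ms(50))    # hub ~50 km away: 0.5 ms
print(rtt_ms(2000))  # distant data centre: 20.0 ms
```

Shortening the path is the one latency lever no amount of protocol tuning can substitute for.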

Expand your hub sites into application cloud edge


In the edge world, your real differentiator becomes the thousands of hub sites that you use to house your CMTS/CCAP, EQAM and other access equipment — sites that SaaS companies and IT startups simply can’t replicate. Far from being a liability to shed and consolidate, this distributed infrastructure is one of your critical advantages. 

By expanding the role of your hub sites into application cloud edge sites, you can increase utilization of your existing infrastructure (for example, cloud-native CCAP), and generate revenue (for example, hosting B2B applications), both by innovating new services of your own and by giving third-party service providers access to geographic proximity to their B2B and B2C users. 

If you’re also a mobile operator, this model allows you to move many virtualized RAN functions into your hub sites, leaving a streamlined set of functions on the cell site itself (this edge cloud model is one that Rakuten is using for its 5G-ready rollout, across 4,000 edge sites).

Making cable edge compute happen


We’ve introduced the concept of Cable Edge Compute, describing how you can turn your hubs into next generation application-centric cloud sites to capture this wide-ranging opportunity.

While edge compute architectures do present a number of challenges — from physical space, power and cooling constraints to extra management complexity and new investment in equipment — these are all solvable with the right innovations, design and management approaches. It’s vital to approach an initiative like this with an end-to-end service mindset, looking at topics like assurance, orchestration and scalability from the start.


Four key ingredients for cable edge compute


Here are essentially four key ingredients for a cost-optimized cable edge compute architecture: 

1. Edge hardware: takes the form of standardized, common-SKU modular pods with x86 compute nodes and top-of-rack switches, plus optional GPUs and other specialized processing acceleration for specific applications. Modularity, consistency and flexibility are key here to make it easy to scale. 

2. Software stack: enables the edge hardware to optimally host a wide range of virtualized applications in containers, VMs or both, whether managed through Kubernetes, OpenStack or something else. What’s important is to minimize the number of x86 CPU cores consumed by the software stack itself and to provide deterministic performance. Cisco has made this possible by combining cloud controller nodes with the compute nodes at the edge, while moving storage nodes and management nodes to remote clusters with specific optimization and security. This optimizes the usage of physical space and power in the hub site.  

3. Network fabric: provides ultra-fast connectivity for the application workloads to communicate with each other and with consumers. This is a one- or two-tier programmable network fabric based on 10GE/40GE/100GE/400GE Ethernet transport with end-to-end IPv6 and BGP-based VPN. 

4. SDN: finally, this infrastructure model depends entirely on SDN with automation, orchestration and assurance. Configuration and provisioning must be possible remotely, for example via intent files. At this scale, with an architecture this distributed, tasks should be zero-touch across the lifecycle. Assurance is utterly foundational, both to assure appropriate per-application SLAs and to enforce policy around prioritization, security and privacy. 


Discover your opportunity with low-latency and edge compute


In the new world of low-latency apps delivered through the edge, cable SPs are in a great position.  

And there’s never been a better opportunity to learn more about what this future holds. Cisco CX is presenting at SCTE Cable-Tec Expo on the gaming opportunity and cable edge compute, and we’ve published two new white papers that you can consult as an SCTE member.

Friday 27 September 2019

Best Practices for Using Cisco Security APIs

Like programming best practices, growing your organization’s use of Application Programming Interfaces (APIs) comes with its own complexities. This blog post will highlight three best practices for ensuring an effective and efficient codebase. Additionally, a few notable examples from Cisco DevNet are provided to whet your appetite.

API best practices


1. Choose an organization-wide coding standard (style guide) – this cannot be stressed enough.

With a codified style guide, you can be sure that whatever scripts or programs are written will be easily followed by any employee at Cisco. The style guide utilized by the Python community at Cisco is Python’s PEP8 standard.

An example of bad vs. good coding style can be seen in the following example of dictionary element access. I am using this example because here at Cisco many of our APIs use the JSON (JavaScript Object Notation) format, which, at its most basic, is a multi-level dictionary. Both the “bad” and “good” examples work in Python 2; however, the style guide dictates that the get() method should be used instead of the has_key() method (which was removed entirely in Python 3) for accessing dictionary elements:

Bad:

d = {'hello': 'world'}
if d.has_key('hello'):
    print d['hello']    # prints 'world'
else:
    print 'default_value'

Good:

d = {'hello': 'world'}
print d.get('hello', 'default_value') # prints 'world'
print d.get('thingy', 'default_value') # prints 'default_value'
# Or:
if 'hello' in d:
    print d['hello']

2. Implement incremental tests – start testing early and test often. For Python, incremental testing takes the form of the “unittest” unit testing framework. Using the framework, code can be broken down into small chunks and tested piece by piece. For the examples below, a potential breakdown of the code into modules could be: test network access by pinging the appliance’s management interface, run the API GET method to determine which API versions the appliance accepts, and finally parse the output of the API GET method.

In this example I will show the code which implements the unit test framework. The unit test code is written in a separate Python script and imports our Python API script. For simplicity we will assume that the successful API call (curl -k -X GET --header 'Accept: application/json' 'https://192.168.10.195/api/versions') always returns the following JSON response:

{
"supportedVersions":["v3", "latest"]
}
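The parsing module mentioned above might then be as simple as the following sketch, which loads that assumed response and extracts the version list:

```python
import json

# Parse the assumed /api/versions response and pull out the version list.
raw = '{"supportedVersions":["v3", "latest"]}'
versions = json.loads(raw).get("supportedVersions", [])
print(versions)  # ['v3', 'latest']
```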

Please review the full example code at the end of this blog post.

3. Document your code – commenting is not documenting. While Python comments can be fairly straightforward in some cases, most, if not all, style guides will request the use of a documentation framework. Cisco, for example, uses PubHub. Per the Cisco DevNet team, “PubHub is an online document-creation system that the Cisco DevNet group created” (DevNet, 2019).


DevNet Security


A great introduction to API development with Cisco Security products can be had by attending a DevNet Express Security session. (The next one takes place Oct 8-9 in Madrid.) During the session the APIs for AMP for Endpoints (A4E), ISE, ThreatGrid, Umbrella, and Firepower are explored. A simple way to test an API call is to run it with the curl program before translating it into a Python script. For the examples below, we will provide the documentation as well as the curl command needed. Any text within '<' and '>' indicates that user-specific input is required.

For A4E, one often-used API call lists and provides additional information about each computer. A curl-instantiated A4E API call follows this format:
curl -o output_file.json https://clientID:APIKey@api.amp.cisco.com/v1/function

To pull the list of computers, the API call looks like this:
curl -o computers.json https://<client_id>:<api_key>@api.amp.cisco.com/v1/computers
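The clientID:APIKey@host form of that URL is just HTTP Basic authentication, so a Python translation only needs to build the corresponding Authorization header. A stdlib-only sketch (the credentials are placeholders):

```python
import base64

# Build the Basic auth header implied by https://clientID:APIKey@host/...
client_id, api_key = "<client_id>", "<api_key>"   # placeholder credentials
token = base64.b64encode(f"{client_id}:{api_key}".encode()).decode()
headers = {"Authorization": f"Basic {token}", "Accept": "application/json"}
url = "https://api.amp.cisco.com/v1/computers"
# An HTTP library such as requests could now issue:
#   requests.get(url, headers=headers)
print(headers["Authorization"][:10])
```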

In ISE, a great API call queries the monitoring node for all authentication failure reasons on that node. To test this API call, we can run the following curl command:
curl -o output_file.json https://acme123/admin/API/mnt/FailureReasons. The API call's generic form is as follows: https://ISEmntNode.domain.com/admin/API/mnt/<specific-api-call>.

Please note that you will need to log in to the monitoring node prior to performing the API call.

A common ThreatGrid API call queries a group of file hashes for state (Clean, Malicious, or Unknown). The API call is again shown as a curl command for simplicity. In ThreatGrid, the call follows this format:
curl --request GET "https://panacea.threatgrid.com/api/v2/function?api_key=<API_KEY>&ids=['uuid1','uuid2']" --header "Content-Type: application/json"

The full request is as follows:
curl --request GET "https://panacea.threatgrid.com/api/v2/samples/state?api_key=<API_KEY>&ids=['ba91b01cbfe87c0a71df3b1cd13899a2']" --header "Content-Type: application/json"

Umbrella’s Investigate API is the most used of the APIs provided by Umbrella. An interesting API call from Investigate provides us with the categories of all domains provided. As before, curl is used to visualize the API call. In this example, we want to see which categories google.com and yahoo.com fall into:

curl --include --request POST --header "Authorization: Bearer %YourToken%" --data-binary "[\"google.com\",\"yahoo.com\"]" https://investigate.api.umbrella.com/domains/categorization

Finally, there is Firepower’s Threat Defense API. Since deploying changes is required for almost all modifications to the FTD system, the FTD API’s most useful call deploys configured changes to the system. The curl command to initiate a deploy and save the output to a file is:
curl -o output_file.json -X POST --header 'Content-Type: application/json' --header 'Accept: application/json' https://ftd.example.com/api/fdm/v3/operational/deploy

Open the file after running the command to ensure that the state of the job is “QUEUED”.


Unit Testing Example Code


apiGET.py


#! /usr/bin/python3

"""
apiGET.py is a python script that makes use of the requests module to ask
the target server what API Versions it accepts.
There is only one input (first command line argument) which is an IP address.
The accepted API versions are the output.

To run the script use the following syntax:
./apiGET.py <IP address of server>
python3 apiGET.py <IP address of server>
"""

import requests # to enable native python API handling
import ipaddress # to enable native python IP address handling
import sys # to provide clean exit statuses in python
import os # to allow us to ping quickly within python

def sanitizeInput(inputs):
    """ if there is more than one command line argument, exit """
    if len(inputs) != 2:
        print('Usage: {} <IP address of API server>'.format(inputs[0]))
        sys.exit(1)

    """ now that there is only one command line argument, make sure it's an IP & return """
    try:
        IPaddr = ipaddress.ip_address(inputs[1])
        return IPaddr
    except ValueError:
        print('address/netmask is invalid: {}'.format(inputs[1]))
        sys.exit(1)
    except:
        print('Usage: {} <IP address of API server>'.format(inputs[0]))
        sys.exit(1)

def getVersions(IPaddr):
    """ make sure IP exists """
    # IPaddr is an ipaddress object, so convert it to str before concatenating
    if os.system("ping -c 1 -t 1 " + str(IPaddr)) != 0:
        print("Please enter a useable IP address.")
        sys.exit(1)

    """ getting valid versions using the built-in module exceptions to handle errors """
    r = requests.get('https://{}/api/versions'.format(str(IPaddr)), verify=False)

    try:
        r.raise_for_status()
    except:
        return "Unexpected error: " + str(sys.exc_info()[0])

    return r

if __name__ == "__main__":
    print(getVersions(sanitizeInput(sys.argv)).text + '\n')

apiGETtest.py


#! /usr/bin/python3

"""
apiGETtest.py is a python script that is used to perform
unittests against apiGET.py. The purpose of these tests is
to predict different ways users can incorrectly utilize
apiGET.py and ensure the script does not provide
an opportunity for exploitation.
"""

""" importing modules """
import os # python module which handles system calls
import unittest # python module for unit testing
import ipaddress # setting up IP addresses more logically
import requests # to help us unittest our responses below

""" importing what we're testing """
import apiGET # our actual script

""" A class is used to hold all test methods as we create them and fill out our main script """
class TestAPIMethods(unittest.TestCase):
    def setUp(self):
        """
        setting up the test
        """

    def test_too_many_CLI_params(self):
        """
        test that we are cleanly implementing command-line sanitization
        correctly counting
        """
        arguments = ['apiGET.py', '127.0.0.1', 'HELLO_WORLD!']
        with self.assertRaises(SystemExit) as e:
            ipaddress.ip_address(apiGET.sanitizeInput(arguments))
        self.assertEqual(e.exception.code, 1)

    def test_bad_CLI_input(self):
        """
        test that we are cleanly implementing command-line sanitization
        eliminating bad input
        """
        arguments = ['apiGET.py', '2540abc']
        with self.assertRaises(SystemExit) as e:
            apiGET.sanitizeInput(arguments)
        self.assertEqual(e.exception.code, 1)

    def test_good_IPv4_CLI_input(self):
        """
        test that we are cleanly implementing command-line sanitization
        good for IPv4
        """
        arguments = ['apiGET.py', '127.0.0.1']
        self.assertTrue(ipaddress.ip_address(apiGET.sanitizeInput(arguments)))

    def test_good_IPv6_CLI_input(self):
        """
        test that we are cleanly implementing command-line sanitization
        good for IPv6
        """
        arguments = ['apiGET.py', '::1']
        self.assertTrue(ipaddress.ip_address(apiGET.sanitizeInput(arguments)))

    def test_default_api_call_to_bad_IP(self):
        """
        test our API GET method which returns the supported API versions,
        in this case 'v3' and 'latest'
        this test will fail because the IP address 192.168.45.45 does not exist
        """
        with self.assertRaises(SystemExit) as e:
            apiGET.getVersions('192.168.45.45')
        self.assertEqual(e.exception.code, 1)

    def test_default_api_call_to_no_API(self):
        """
        test our API GET method which returns the supported API versions,
        in this case 'v3' and 'latest'
        this test will fail because the IP address does exist but does not have an exposed API
        """
        with self.assertRaises(requests.exceptions.ConnectionError):
            apiGET.getVersions('192.168.10.10')

    def test_default_api_call(self):
        """
        test our API GET method which returns the supported API versions,
        in this case 'v3' and 'latest'
        this test will succeed because the IP address does exist and will respond
        """
        self.assertEqual(apiGET.getVersions('192.168.10.195').text,
            '{\n    "supportedVersions":["v3", "latest"]\n}\n')

if __name__ == '__main__':
    unittest.main()

Thursday 26 September 2019

The Artificial Intelligence Journey in Contact Centers

I would like to share some thoughts, pulled together in discussions with developers, customer care system integrators and experts, along one of the many possible journeys to unleash the full power of Artificial Intelligence (AI) in a modern customer care architecture.

First of all, let me emphasize what the ultimate goal of Artificial Intelligence is:

“Artificial intelligence is the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.”

AI in the Contact Center


In a Customer Care (CC) environment, one of the primary objectives of any AI component is to support and assist agents in their work, and potentially even to replace some of them. So we should ask ourselves: how far are we from such a goal today? Let me answer by posting here a recent on-stage demo from Google:

Google Duplex: A.I. Assistant Calls Local Businesses To Make Appointments – YouTube

Seen it? While Google Duplex is clearly still in its infancy, I think it’s evident that a much wider range of applications is just around the corner, and that we are not far from the moment when a virtual agent controlled by an AI engine will be able to replace real agents in many contexts, with the same natural conversation and flow humans have with each other.

“Gartner projects that by 2020, 10% of business-to-consumer first level engagement requests will be taken by VCAs… today that is less than 1%….”

Google Duplex Natural Technology


The interesting piece, which leading players such as Cisco (Cognitive Collaboration Solutions) will be able to turn into an advantage for their Contact Center architecture, is the way Google Duplex works. It is essentially made of three building blocks:

1. The incoming audio goes into an Automatic Speech Recognition (ASR) engine that translates audio into text
2. A Recurrent Neural Network (RNN) formulates a text answer based on the input text, along with other inputs such as audio features and the conversation history
3. A Text-to-Speech (TTS) engine converts the answer back into audio using speech synthesis technology (WaveNet)
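In code, the loop formed by those three blocks can be sketched roughly as below. All three functions are stand-in stubs (the real components are large ML models), and the transcript strings are made up for illustration:

```python
# Schematic sketch of the ASR -> RNN -> TTS pipeline; all stages are stubs.

def asr(audio):
    """Automatic Speech Recognition: audio in, transcript out (stub)."""
    return "what time do you open tomorrow"

def rnn_respond(text, history):
    """Dialogue model: formulates a reply from the transcript plus
    the conversation history and other inputs (stub)."""
    history.append(text)
    return "we open at nine a m"

def tts(text):
    """Text-to-Speech: synthesize the reply back into audio (stub)."""
    return text.encode()

history = []
reply_audio = tts(rnn_respond(asr(b"<caller audio>"), history))
print(reply_audio)  # b'we open at nine a m'
```

The point of the structure is that only the middle stage carries the "intelligence"; the outer two are converters between audio and text.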


For those of you dealing with customer care, it’s rather clear how such an architecture would fit very well into an outbound contact center delivering telemarketing campaigns: this is the way Google is already positioning the technology in combination with Google Assistant.

Apart from the wonderful capabilities offered by the front-end and back-end converters between audio and text, the intelligence of the system is in the Recurrent Neural Network (RNN), which analyzes the input text and context, understands it, and formulates a text answer in real time, de facto emulating the complex behavior and processes of human beings.

Most of the chatbots used today in CC are not even close to this approach: they handle dialogue management in a traditional way, managing flows with fixed rules, similar to a complex IVR, and struggle with the complexity of natural language. Duplex (built on TensorFlow) and other solutions, such as those from open-source communities in the world of AI developers (Rasa Core), adopt neural networks, properly trained and tuned for a specific context, to offer remarkably natural dialogue management. The RNN needs training, which in the case of Duplex was done using a large amount of phone conversation data from a specific context, as well as features from the audio and the history of the conversation.

CISCO and AI in Contact Centers


Some of the unique approaches explained above could make the customer care solutions of those vendors able to adopt them quite innovative and well perceived, especially when there is an underlying, solid and reliable architectural approach as a foundation. In news out last year, Cisco announced a partnership with Google:

“Give your contact center agents an AI-enhanced assist so they can answer questions quicker and better. …. we are adding Google Artificial Intelligence (AI) to our Cisco Customer Journey Solutions … Contact Center AI is a simple, secure, and flexible solution that allows enterprises with limited machine learning expertise to deploy AI in their contact centers. The AI automatically provides agents with relevant documents to help guide conversations and continuously learns in order to deliver increasingly more relevant information over time. This combination of Google’s powerful AI capabilities with Cisco’s large global reach can dramatically enhance the way companies interact with their customers.”

This whole AI new paradigm demands further thoughts:

1. Most AI technology is open source, so the value is not in the code itself but in the way it is made part of a commercial solution to meet customer needs, especially when it is based on an architectural approach such as Cisco Cognitive Collaboration.

2. The above point is further driven by the fact that it is difficult to build a general-purpose AI solution, as it will always need customizations for the specific context of each customer. The use cases change, and the speed of innovation is probably faster than in the mobile device world, so it is difficult to manage this via a centralized, traditional R&D organization. This fits better with a community approach made of developers, system integrators and business experts, such as the Cisco ecosystem.

3. Rather than coding AI software, the winning factor is the ability of vendors like Cisco to leverage an environment of ecosystem partners, system integrators and customer experts to surround and enrich the core customer care architecture offerings.

The availability of ecosystem partners, able to package specific conversational engines specialized for certain contexts, and the role of system integrators, able to combine those engines into the Cisco Customer Care architecture to meet customer needs, are key Cisco competitive advantages in the coming AI revolution.