Tuesday, 24 May 2022

Broadband Planning: Who Should Lead, and How?


As new Federal funding is released to help communities bridge the digital divide, you’ll need to gain a strong understanding of the solutions and deployment options available. Often overlooked, however, is the need to develop and commit to a realistic and inclusive broadband planning process, one that acknowledges the broad variety of stakeholders you’ll encounter and offers a realistic timeline to meet funding mandates. You’ll also need a strong leader. But who should lead, and what should the process look like?

Why broadband planning is critical

As a licensed Landscape Architect and environmental planner, I’ve had the opportunity to work with state and local government leaders on a variety of infrastructure projects. In each case, we created and adhered to a detailed planning process. The projects ranged from a few acres to 23,000 acres, from roadways and utilities to commercial and residential communities. Even campuses and parks. Every time, sticking to a detailed planning process made things go more smoothly and resulted in a more successful project.

As critical infrastructure, broadband projects should adopt the same approach. You’ll benefit greatly by leveraging a well thought out collaborative planning model. Your stress levels will be reduced, your stakeholders happier, and the outcome more resilient and sustainable.

Using a collaborative planning model helps accomplish this by:

◉ Establishing a clear vision and goals

◉ Limiting the scope of the project, preventing “scope-creep”

◉ Creating dedicated milestones to keep you on track

◉ Providing transparency for all stakeholders

◉ Setting a realistic timeline to better plan and promote your project.

Using a collaborative broadband planning process also creates a reference source for media outreach and promotion as milestones are reached. Lastly, having a recorded process makes funding mandates and data-reporting requirements easier to meet, keeping you and your team in compliance.

Who should lead broadband planning?

My involvement in traditional infrastructure planning has allowed me to experience first-hand how comfortable government personnel are leading large-scale projects. Why? Because:

◉ They’re well versed in local ordinances, regulatory laws, and community standards

◉ They understand their community and its people

◉ They have established relationships that cross the public and private sector.

That’s why I, and many others in the IT industry, feel these same state and local government leaders can offer the most success leading broadband planning in their communities.

In addition, those in planning-specific positions are especially suited to do so, having unique skill sets that address:

◉ What type of infrastructure is needed and where to locate it

◉ Gathering realistic data via surveys, GIS mapping, and canvassing

◉ Construction issues that may serve as potential roadblocks or opportunities

◉ Understanding potential legal and maintenance issues.

A realistic broadband planning process

To help our partners in the public and private sectors achieve greater success in their broadband efforts, we’ve created a new guide. It outlines a realistic, inclusive broadband planning process, including suggested timelines and milestones.


By leveraging our new guide, titled Powering a Future of Inclusive Connectivity, your team can increase the comfort level among stakeholders, increasing buy-in to your project. Moreover, you’ll learn:

◉ Key questions to ask when seeking funding

◉ Considerations when building public/private partnerships

◉ The five steps you need to implement a strong and transparent planning process (including suggested timelines and associated milestones)

◉ Use cases.

Funding for broadband


Up to $800 billion in direct and indirect investment is available over the next 5-10 years to fund broadband. This includes the Federal Coronavirus Aid, Relief, and Economic Security (CARES) Act and the American Rescue Plan Act (ARPA), as well as the Infrastructure Investment and Jobs Act (IIJA).

Each program is unique, so understanding them can be a challenge. As you start your broadband planning process, I encourage you to reach out to the Cisco Public Funding Office. Their experts will be glad to help answer questions and guide you through the funding opportunities that best fit your needs.

Source: cisco.com

Sunday, 22 May 2022

How Cisco DNA Assurance Proves It’s ‘Not a Network Issue’


When something in your house breaks, it’s your problem. When something in your network breaks, it’s everyone’s problem. At least, that’s how it can feel when the sudden influx of support tickets, angry phone calls, and so on start rolling in. They quickly remind you that those numbers behind the traffic visualizations are more than numbers alone. They represent individuals. That includes individuals who don’t notice how the infrastructure supports them until suddenly… it doesn’t.

The adage that “time is money” applies here, and maybe better than anywhere else. Because when users on the network cannot do what they came to do, the value of their halted actions can add up quickly. That means reaction can’t be the first strategy for preserving a network. Instead, proactive measures that prevent problems (ha, alliteration) become first-order priorities.

That’s where Cisco DNA Center and Assurance come in, and along with them, Leveraging Cisco Intent-Based Networking DNA Assurance (DNAAS) v2.0, the DNAAS course.

Let’s Start with Intent

This will come as no surprise to anyone, but networks are built for a purpose. From a top-down perspective, the network provides the infrastructure necessary to support business intent. Cisco DNA Center allows network admins and operators to make sure that the business intent is translated into network design and functionality. This ensures that the network is actually accomplishing what is needed. Cisco DNA Center has a load of tools, configs, and templates to make the network functional.

What is Cisco DNA Assurance?

Cisco DNA Assurance is the tool that keeps the network live. With it, we can use analytics, machine learning, and AI to understand the health of the intent-based network. DNA Assurance can identify problems before they manifest into critical issues. It allows us to gauge the health of the network across clients, devices, and applications and establish a baseline of overall health. From there, we can troubleshoot and identify recurring issues against that baseline before they have a significant impact. We don’t have to wait for an outage to act. (Or react.)

We’re no longer stuck in this red-light or green-light situation, where the network is either working or it’s not. When the light goes from green to yellow, we can start saying, “Hey, why is that happening? Let’s get to the root cause and fix it.”

Obviously, this was all-important before the big shift to hybrid work environments, but it’s even more critical now. When you have a problem, you can’t just walk down the hall to the IT guy; you’re sort of stranded on an island, hoping someone else can figure out what’s wrong. And on the other hand, when you’re the person tasked with fixing those problems, you want to know what’s going on as quickly as possible.

One customer I worked with installed Cisco DNA Assurance to ‘prove the innocence of the network.’ He felt that being able to quickly identify the network problem, especially if it was not necessarily a network issue, helped to get fixes done more quickly and efficiently. DNA Assurance helped to rule out the network or ‘prove it was innocent’ and allow him to narrow his troubleshooting focus.

Another benefit of DNA Assurance is that it’s built on Cisco’s expertise. 30+ years of experience with troubleshooting networks and devices have gone into developing Assurance. Its technology doesn’t just give you an overview of the network, it lets you know where things are going wrong and helps you discover solutions.

About the DNAAS course

Leveraging Cisco Intent-Based Networking DNA Assurance (DNAAS) v2.0 is the technology training course we developed to teach users about Cisco DNA Assurance. The course is designed to give a clear understanding of what DNA Assurance can do and to build a deep knowledge of the capabilities of the technology. It’s meant to give new users a firm handle on the technology while increasing the expertise of existing users and empowering them to further optimize their implementation of DNA Assurance.

One of the things we wanted to do was highlight some of the areas that users may not have touched on before. We give them a chance to experience those things and potentially roll them into tangible solutions on their own network. It’s all meant to be immediately actionable. Users can take this course and instantly turn back around and do something with the knowledge.

Labs are one of the ways that we’ve focused on bringing more of the experience to users who are taking the course. New users are going to interact with a real DNA Center instance, and experienced users are going to have the chance to see new configurations. We build out the fundamental skills necessary to use DNA Assurance, rather than focusing on strict use cases.

We treated it like learning to drive a car. We could teach you all the specifics about one highly specialized vehicle, or we could give you the foundational skills necessary to drive anything and allow you to work towards your specific needs.

Overall, students are going to expand their practical knowledge of DNA Assurance and gain actionable skills they can immediately use. DNAAS is an excellent entry into the technology for new users and an equally excellent learning opportunity for experienced users. It helps build important skills that help users to get the most out of the technology and keep their networks running smoothly.

Source: cisco.com

Saturday, 21 May 2022

ChatOps: Managing Kubernetes Deployments in Webex

This is the third post in a series about writing ChatOps services on top of the Webex API. In the first post, we built a Webex Bot that received message events from a group room and printed the event JSON out to the console. In the second, we added security to that Bot, adding an encrypted authentication header to Webex events, and subsequently adding a simple list of authorized users to the event handler. We also added user feedback by posting messages back to the room where the event was raised.

In this post, we’ll build on what was done in the first two posts, and start to apply real-world use cases to our Bot. The goal here will be to manage Deployments in a Kubernetes cluster using commands entered into a Webex room. Not only is this a fun challenge to solve, but it also provides wider visibility into the goings-on of an ops team, as they can scale a Deployment or push out a new container version in the public view of a Webex room. You can find the completed code for this post on GitHub.

This post assumes that you’ve completed the steps listed in the first two blog posts. You can find the code from the second post here. Also, very important, be sure to read the first post to learn how to make your local development environment publicly accessible so that Webex Webhook events can reach your API. Make sure your tunnel is up and running and Webhook events can flow through to your API successfully before proceeding on to the next section. In this case, I’ve set up a new Bot called Kubernetes Deployment Manager, but you can use your existing Bot if you like. From here on out, this post assumes that you’ve taken those steps and have a successful end-to-end data flow.

Architecture

Let’s take a look at what we’re going to build:


Building on top of our existing Bot, we’re going to create two new services: MessageIngestion and Kubernetes. The latter will take a configuration object that gives it access to our Kubernetes cluster and will be responsible for sending requests to the K8s control plane. Our Index Router will continue to act as a controller, orchestrating data flows between services. And our WebexNotification service that we built in the second post will continue to be responsible for sending messages back to the user in Webex.

Our Kubernetes Resources


In this section, we’ll set up a simple Deployment in Kubernetes, as well as a Service Account that we can leverage to communicate with the Kubernetes API using the NodeJS SDK. Feel free to skip this part if you already have those resources created.

This section also assumes that you have a Kubernetes cluster up and running, and both you and your Bot have network access to interact with its API. There are plenty of resources online for getting a Kubernetes cluster set up, and getting kubectl installed, both of which are beyond the scope of this blog post.

Our Test Deployment

To keep things simple, I’m going to use Nginx as my deployment container – an easily accessible image that doesn’t have any dependencies to get up and running. If you have a Deployment of your own that you’d like to use instead, feel free to replace what I’ve listed here with that.

# in resources/nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.20
        ports:
        - containerPort: 80

Our Service Account and Role

The next step is to make sure our Bot code has a way of interacting with the Kubernetes API. We can do that by creating a Service Account (SA) that our Bot will assume as its identity when calling the Kubernetes API, and ensuring it has proper access with a Kubernetes Role.

First, let’s set up an SA that can interact with the Kubernetes API:

# in resources/sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: chatops-bot

Now we’ll create a Role in our Kubernetes cluster that will have access to pretty much everything in the default Namespace. In a real-world application, you’ll likely want to take a more restrictive approach, only providing the permissions that allow your Bot to do what you intend; but wide-open access will work for a simple demo:

# in resources/role.yaml
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: chatops-admin
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]

Finally, we’ll bind the Role to our SA using a RoleBinding resource:

# in resources/rb.yaml
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: chatops-admin-binding
  namespace: default
subjects:
- kind: ServiceAccount
  name: chatops-bot
  apiGroup: ""
roleRef:
  kind: Role
  name: chatops-admin
  apiGroup: "rbac.authorization.k8s.io"

Apply these using kubectl:

$ kubectl apply -f resources/sa.yaml
$ kubectl apply -f resources/role.yaml
$ kubectl apply -f resources/rb.yaml

Once your SA is created, fetching its info will show you the name of the Secret in which its Token is stored.


Fetching info about that Secret will print out the Token string in the console. Be careful with this Token, as it’s your SA’s secret, used to access the Kubernetes API!
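Keep in mind that values in a Secret’s data field are stored base64-encoded, so if you pull the Token with kubectl’s jsonpath output rather than kubectl describe, you’ll need to decode it before using it as an environment variable. A quick sketch in Node (the encoded value below is a short placeholder; a real Service Account token is a long JWT):

```javascript
// Secret data values are base64-encoded. For example:
//   kubectl get secret <token-secret-name> -o jsonpath='{.data.token}'
// returns base64, while `kubectl describe secret` shows the decoded value.
function decodeSecretValue(b64) {
    return Buffer.from(b64, 'base64').toString('utf8');
}

// Placeholder value -- a real token is much longer:
console.log(decodeSecretValue('bXktc2EtdG9rZW4=')); // prints "my-sa-token"
```

The decoded string is what we’ll pass into the Bot via the KUBERNETES_TOKEN environment variable in the next section.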


Configuring the Kubernetes SDK


Since we’re writing a NodeJS Bot in this blog post, we’ll use the JavaScript Kubernetes SDK for calling our Kubernetes API. You’ll notice, if you look at the examples in the Readme, that the SDK expects to be able to pull from a local kubectl configuration file (which, for example, is stored on a Mac at ~/.kube/config). While that might work for local development, that’s not ideal for Twelve Factor development, where we typically pass in our configurations as environment variables. To get around this, we can pass in a pair of configuration objects that mimic the contents of our local Kubernetes config file and can use those configuration objects to assume the identity of our newly created service account.

Let’s add some environment variables to the AppConfig class that we created in the previous post:

// in config/AppConfig.js
// inside the constructor block
// after previous environment variables

// whatever you’d like to name this cluster, any string will do
this.clusterName = process.env['CLUSTER_NAME'];
// the base URL of your cluster, where the API can be reached
this.clusterUrl = process.env['CLUSTER_URL'];
// the CA cert set up for your cluster, if applicable
this.clusterCert = process.env['CLUSTER_CERT'];
// the SA name from above - chatops-bot
this.kubernetesUserame = process.env['KUBERNETES_USERNAME'];
// the token value referenced in the screenshot above
this.kubernetesToken = process.env['KUBERNETES_TOKEN'];

// the rest of the file is unchanged…

These five lines will allow us to pass configuration values into our Kubernetes SDK, and configure a local client. To do that, we’ll create a new service called KubernetesService, which we’ll use to communicate with our K8s cluster:

// in services/kubernetes.js

import {KubeConfig, AppsV1Api, PatchUtils} from '@kubernetes/client-node';

export class KubernetesService {
    constructor(appConfig) {
        this.appClient = this._initAppClient(appConfig);
        this.requestOptions = { "headers": { "Content-type": PatchUtils.PATCH_FORMAT_JSON_PATCH } };
    }

    _initAppClient(appConfig) { /* we’ll fill this in soon */  }

    async takeAction(k8sCommand) { /* we’ll fill this in later */ }
}

This set of imports at the top gives us the objects and methods that we’ll need from the Kubernetes SDK to get up and running. The requestOptions property set on this constructor will be used when we send updates to the K8s API.

Now, let’s populate the contents of the _initAppClient method so that we can have an instance of the SDK ready to use in our class:

// inside the KubernetesService class
_initAppClient(appConfig) {
    // building objects from the env vars we pulled in
    const cluster = {
        name: appConfig.clusterName,
        server: appConfig.clusterUrl,
        caData: appConfig.clusterCert
    };
    const user = {
        name: appConfig.kubernetesUserame,
        token: appConfig.kubernetesToken,
    };
    // create a new config factory object
    const kc = new KubeConfig();
    // pass in our cluster and user objects
    kc.loadFromClusterAndUser(cluster, user);
    // return the client created by the factory object
    return kc.makeApiClient(AppsV1Api);
}

Simple enough. At this point, we have a Kubernetes API client ready to use, and stored in a class property so that public methods can leverage it in their internal logic. Let’s move on to wiring this into our route handler.

Message Ingestion and Validation


In a previous post, we took a look at the full payload of JSON that Webex sends to our Bot when a new message event is raised. It’s worth taking a look again, since this will indicate what we need to do in our next step:


If you look through this JSON, you’ll notice that nowhere does it list the actual content of the message that was sent; it simply gives event data. However, we can use the data.id field to call the Webex API and fetch that content, so that we can take action on it. To do so, we’ll create a new service called MessageIngestion, which will be responsible for pulling in messages and validating their content.

Fetching Message Content

We’ll start with a very simple constructor that pulls in the AppConfig to build out its properties, and one simple method that calls a couple of stubbed-out private methods:

// in services/MessageIngestion.js
export class MessageIngestion {
    constructor(appConfig) {
        this.botToken = appConfig.botToken;
    }

    async determineCommand(event) {
        const message = await this._fetchMessage(event);
        return this._interpret(message);
     }

    async _fetchMessage(event) { /* we’ll fill this in next */ }

    _interpret(rawMessageText) { /* we’ll talk about this */ }
}

We’ve got a good start, so now it’s time to write our code for fetching the raw message text. We’ll call the same /messages endpoint that we used to create messages in the previous blog post, but in this case, we’ll fetch a specific message by its ID:

// in services/MessageIngestion.js
// inside the MessageIngestion class

// notice we’re using fetch, which requires NodeJS 17.5 or higher, and a runtime flag
// see previous post for more info
async _fetchMessage(event) {
    const res = await fetch("https://webexapis.com/v1/messages/" + event.data.id, {
        headers: {
            "Content-Type": "application/json",
            "Authorization": `Bearer ${this.botToken}`
        },
        method: "GET"
    });
    const messageData = await res.json();
    if(!messageData.text) {
        throw new Error("Could not fetch message content.");
    }
    return messageData.text;
}

If you console.log the messageData output from this fetch request, it will look something like this:


As you can see, the message content takes two forms – first in plain text (pointed out with a red arrow), and second in an HTML block. For our purposes, as you can see from the code block above, we’ll use the plain text content that doesn’t include any formatting.

Message Analysis and Validation

This is a complex topic, to say the least, and its full depth is beyond the scope of this blog post. There are a lot of ways to analyze the content of a message to determine user intent. You could explore natural language processing (NLP), for which Cisco offers an open-source Python library called MindMeld. Or you could leverage off-the-shelf (OTS) software like Amazon Lex.

In my code, I took the simple approach of static string analysis, with some rigid rules around the expected format of the message, e.g.:

<tagged-bot-name> scale <name-of-deployment> to <number-of-instances>

It’s not the most user-friendly approach, but it gets the job done for a blog post.

I have two intents available in my codebase – scaling a Deployment and updating a Deployment with a new image tag. A switch statement runs analysis on the message text to determine which of the actions is intended, and a default case throws an error that will be handled in the index route handler. Both have their own validation logic, which adds up to over sixty lines of string manipulation, so I won’t list all of it here. If you’re interested in reading through or leveraging my string manipulation code, it can be found on GitHub.
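To give a sense of the approach, here’s a heavily stripped-down sketch of the scale-command analysis. This is an illustration, not the repo’s actual code; the real version handles the bot mention and edge cases much more carefully:

```javascript
// A simplified sketch of static-string command analysis.
// Expected format: "<tagged-bot-name> scale <deployment-name> to <count>"
function interpret(rawMessageText) {
    // Webex delivers the plain-text message with the bot mention first,
    // so drop the first word and normalize whitespace.
    const words = rawMessageText.trim().split(/\s+/).slice(1);
    if (words[0] === "scale" && words[2] === "to") {
        const scaleTarget = parseInt(words[3], 10);
        if (Number.isNaN(scaleTarget)) {
            throw new Error("Scale target must be a number.");
        }
        // mirrors the fields on the KubernetesCommand DTO described below
        return { type: "scale", deploymentName: words[1], scaleTarget };
    }
    throw new Error("Command not recognized.");
}

const cmd = interpret("KubeBot scale nginx-deployment to 3");
console.log(cmd); // { type: 'scale', deploymentName: 'nginx-deployment', scaleTarget: 3 }
```

Unrecognized input throws, which is exactly the behavior the index route handler will rely on to send a failure notification back to the user.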

Analysis Output

The happy path output of the _interpret method is a new data transfer object (DTO) created in a new file:

// in dto/KubernetesCommand.js
export class KubernetesCommand {
    constructor(props = {}) {
        this.type = props.type;
        this.deploymentName = props.deploymentName;
        this.imageTag = props.imageTag;
        this.scaleTarget = props.scaleTarget;
    }
}

This standardizes the expected format of the analysis output, which can be anticipated by the various command handlers that we’ll add to our Kubernetes service.

Sending Commands to Kubernetes


For simplicity’s sake, we’ll focus on just the scaling workflow of the two intents I’ve got coded. Suffice it to say, this barely scratches the surface of what’s possible with your Bot’s interactions with the Kubernetes API.

Creating a Webex Notification DTO

The first thing we’ll do is craft the shared DTO that will contain the output of our Kubernetes command methods. This will be passed into the WebexNotification service that we built in our last blog post and will standardize the expected fields for the methods in that service. It’s a very simple class:

// in dto/Notification.js
export class Notification {
    constructor(props = {}) {
        this.success = props.success;
        this.message = props.message;
    }
}

This is the object we’ll build when we return the results of our interactions with the Kubernetes SDK.

Handling Commands

Previously in this post, we stubbed out the public takeAction method in the Kubernetes Service. This is where we’ll determine what action is being requested, and then pass it to internal private methods. Since we’re only looking at the scale approach in this post, we’ll have two paths in this implementation. The code on GitHub has more.

// in services/Kubernetes.js
// inside the KubernetesService class
async takeAction(k8sCommand) {
    let result;
    switch (k8sCommand.type) {
        case "scale":
            result = await this._updateDeploymentScale(k8sCommand);
            break;
        default:
            throw new Error(`The action type ${k8sCommand.type} that was determined by the system is not supported.`);
    }
    return result;
}

Very straightforward – if a recognized command type is identified (in this case, just “scale”) an internal method is called and the results are returned. If not, an error is thrown.

Implementing our internal _updateDeploymentScale method requires very little code. However, it leverages the K8s SDK, which, to say the least, isn’t very intuitive. The data payload that we create includes an operation (op) that we’ll perform on a Deployment configuration property (path), with a new value (value). The SDK’s patchNamespacedDeployment method is documented in the Typedocs linked from the SDK repo. Here’s my implementation:

// in services/Kubernetes.js
// inside the KubernetesService class
async _updateDeploymentScale(k8sCommand) {
    // craft a PATCH body with an updated replica count
    const patch = [
        {
            "op": "replace",
            "path":"/spec/replicas",
            "value": k8sCommand.scaleTarget
        }
    ];
    // call the K8s API with a PATCH request
    const res = await this.appClient.patchNamespacedDeployment(k8sCommand.deploymentName, "default", patch, undefined, undefined, undefined, undefined, this.requestOptions);
    // validate the response and return a success object to the caller
    return this._validateScaleResponse(k8sCommand, res.body);
}

The method on the last line of that code block is responsible for crafting our response output.

// in services/Kubernetes.js
// inside the KubernetesService class
_validateScaleResponse(k8sCommand, template) {
    if (template.spec.replicas === k8sCommand.scaleTarget) {
        return new Notification({
            success: true,
            message: `Successfully scaled to ${k8sCommand.scaleTarget} instances on the ${k8sCommand.deploymentName} deployment`
        });
    } else {
        return new Notification({
            success: false,
            message: `The Kubernetes API returned a replica count of ${template.spec.replicas}, which does not match the desired ${k8sCommand.scaleTarget}`
        });
    }
}

Updating the Webex Notification Service


We’re almost at the end! We still have one service that needs to be updated. In our last blog post, we created a very simple method that sent a message to the Webex room where the Bot was called, based on a simple success or failure flag. Now that we’ve built a more complex Bot, we need more complex user feedback.

There are only two methods that we need to cover here. They could easily be compacted into one, but I prefer to keep them separate for granularity.

The public method that our route handler will call is sendNotification, which we’ll refactor as follows here:

// in services/WebexNotifications
// inside the WebexNotifications class
// notice that we’re adding the original event
// and the Notification object
async sendNotification(event, notification) {
    let message = `<@personEmail:${event.data.personEmail}>`;
    if (!notification.success) {
        message += ` Oh no! Something went wrong! ${notification.message}`;
    } else {
        message += ` Nicely done! ${notification.message}`;
    }
    const req = this._buildRequest(event, message); // a new private method, defined below
    const res = await fetch(req);
    return res.json();
}

Finally, we’ll build the private _buildRequest method, which returns a Request object that can be sent to the fetch call in the method above:

// in services/WebexNotifications
// inside the WebexNotifications class
_buildRequest(event, message) {
    return new Request("https://webexapis.com/v1/messages/", {
        headers: this._setHeaders(),
        method: "POST",
        body: JSON.stringify({
            roomId: event.data.roomId,
            markdown: message
        })
    })
}

Tying Everything Together in the Route Handler


In previous posts, we used simple route handler logic in routes/index.js that first logged out the event data, and then went on to respond to a Webex user depending on their access. We’ll now take a different approach, which is to wire in our services. We’ll start with pulling in the services we’ve created so far, keeping in mind that this will all take place after the auth/authz middleware checks are run. Here is the full code of the refactored route handler, with changes taking place in the import statements, initializations, and handler logic.

// revised routes/index.js
import express from 'express'
import {AppConfig} from '../config/AppConfig.js';
import {WebexNotifications} from '../services/WebexNotifications.js';
// ADD OUR NEW SERVICES AND TYPES
import {MessageIngestion} from "../services/MessageIngestion.js";
import {KubernetesService} from '../services/Kubernetes.js';
import {Notification} from "../dto/Notification.js";

const router = express.Router();
const config = new AppConfig();
const webex = new WebexNotifications(config);
// INSTANTIATE THE NEW SERVICES
const ingestion = new MessageIngestion(config);
const k8s = new KubernetesService(config);

// Our refactored route handler
router.post('/', async function(req, res) {
  const event = req.body;
  try {
    // message ingestion and analysis
    const command = await ingestion.determineCommand(event);
    // taking action based on the command, currently stubbed-out
    const notification = await k8s.takeAction(command);
    // respond to the user 
    const wbxOutput = await webex.sendNotification(event, notification);
    res.statusCode = 200;
    res.send(wbxOutput);
  } catch (e) {
    // respond to the user
    await webex.sendNotification(event, new Notification({success: false, message: e}));
    res.statusCode = 500;
    res.end('Something went terribly wrong!');
  }
});
export default router;

Testing It Out!


If your service is publicly available, or if it’s running locally and your tunnel is exposing it to the internet, go ahead and send a message to your Bot to test it out. Remember that our test Deployment was called nginx-deployment, and we started with two instances. Let’s scale to three:


That takes care of the happy path. Now let’s see what happens if our command fails validation:


Success! From here, the possibilities are endless. Feel free to share all of your experiences leveraging ChatOps for managing your Kubernetes deployments in the comments section below.

Source: cisco.com

Friday, 20 May 2022

Want SASE? Just Add Software!

Twenty-first-century networking

It seems like a simple idea. All you want is to get the network to do what you intend it to. Nothing more, nothing less. But in today’s world, there are so many factors when it comes to networking: more users, more devices, security concerns, various domains, distributed applications, cloud, artificial intelligence (AI), 5G, IoT — the list goes on and on.

Cisco SD-WAN can help. It transforms a legacy, manually managed network into a software-defined overlay that automates deployment and management and applies intelligent path-selection policies to improve user experience. Those policies are applied consistently across the network, which now uses insights and automation to continuously monitor and adjust network performance to meet your business intent. Think of it as a continual feedback loop of incremental improvement.

Building upon the connectivity of SD-WAN, secure access service edge (SASE) is an architecture that combines connectivity and security. Coined by Gartner in 2019, SASE unifies SD-WAN networking and security services into a cloud-delivered architecture to provide access and security from edge to edge — including the data center, remote offices, roaming users, and beyond.

Is your wide area network underpinned by a 1000 Series ISR? Are you running 4000 Series ISRs? Do you have a few ASR 1000 Series units? Did you have a Cisco ONE license? Did you recently renew your Software Support Service (SWSS) on those devices? Consider this: the Cisco routing devices you currently have in your wide area network may already hold your ticket to entry into the world of SD-WAN and SASE.

You don’t need a forklift

“How can that be?” you may be wondering. The answer lies in the magic of software.

Think of it this way. In the past, if you wanted to upgrade the performance of a car, you had to swap out hard parts. Camshafts. Differentials. Transmissions. Engines.

Today, many cars just need a software update to the engine control module (ECM). Dinan for BMW. Cobb Tuning for Mitsubishi. And of course, Tesla and its downloadable software updates to unlock the high-performance “Ludicrous Mode.”

Figure 1. Tesla Driver Console

Not a car buff? Then how about mobile phones? Same hardware, but new Android or iOS software with added functionality. For example, the iPhone 6S came out in September 2015 running iOS 9. Six years and an equal number of major software releases later (iOS 15.2 was released on December 13, 2021), the iPhone 6S can still be upgraded to iOS 15.2.

Why shouldn’t it be the same for networking hardware? Upgrade the software and enjoy new functionality on your old hardware. Did you know that your Cisco routers are also software-based? This may enable you to migrate from traditional routing to SD-WAN with the hardware you have today. You may even have the Cisco DNA software entitlement already and not know it!

Figure 2. Cisco Router Families

Where the bytes meet the copper


You likely have some or all of the three product families shown above (the ISR 1000 Series, the ISR 4000 Series, and the ASR 1000 Series) supporting your traditional routing network. And they have undoubtedly been doing an exemplary job. But those devices are capable of so much more. In fact, these models can be upgraded to our latest software for routers: Cisco IOS XE SD-WAN. With this new software, they can handle your changing traffic patterns: the tsunami of traffic headed to new cloud services and software-as-a-service (SaaS) applications in public clouds and on the internet.

Cisco makes this upgrade easy with an SD-WAN conversion tool that greatly facilitates migrating from traditional routing to SD-WAN. This tool analyzes your current router configuration and automatically creates a new router configuration for SD-WAN. Not only does this save countless hours of work, but it also guarantees consistency in the configuration of each branch router. You can even automate the software installation with Cisco vManage zero-touch upgrading.

All it takes to unlock these latent capabilities is Cisco DNA Software for SD-WAN and Routing. Three subscription tiers are available: Essentials, Advantage, and Premier. Each is aligned to the degree of enhancement network managers need in SD-WAN security, management, and automation. Every Cisco DNA Software for SD-WAN and Routing subscription also includes a perpetual license that covers all aspects of traditional routing, a license that never expires.

Figure 3. Cisco Subscription Licensing for SD-WAN

For those of you looking to continue your journey with SD-WAN into the world of SASE, Cisco provides all the core building blocks of a SASE architecture and Cisco DNA Premier is your tier. Once in place, you can layer on Cisco Umbrella for security, Cisco Duo for zero-trust network access, and Cisco ThousandEyes for internet and cloud visibility. This combination of best-in-class networking, connectivity, security, and extended visibility capabilities helps you deliver an exceptional user experience across a distributed IT landscape. 

You don’t want to miss out!


If you recently upgraded your Cisco SWSS for your routers, you may not have noticed that Cisco DNA Essentials for SD-WAN and Routing is included. This means that initiating the jump into SD-WAN may be a no-cost endeavor for you. You really do owe it to yourself to at least explore the possibility of migrating over to SD-WAN to avail yourself of its benefits, especially if you already own the license to enjoy it.

And finally, don’t let that subscription lapse. The traditional routing perpetual license is nice to have, but there are two things you need to be aware of with that license. First, any network management you enjoy through Cisco DNA Center is contingent upon a valid Cisco DNA license. And second, you will lose the entitlement to use any SD-WAN functionality should the subscription license expire.

Source: cisco.com

Tuesday, 17 May 2022

Network Service Mesh Simplifies Multi-Cloud / Hybrid Cloud Communication


Kubernetes networking is, for the most part, intra-cluster. It enables communication between pods within a single cluster:

The most fundamental service Kubernetes networking provides is a flat L3 domain: Every pod can reach every other pod via IP, without NAT (Network Address Translation).

The flat L3 domain is the building block upon which more sophisticated communication services, like Service Mesh, are built:

Application Service Mesh architecture.

Fundamental to a service mesh’s capability to function is that the service mesh control plane can reach each of the proxies over a flat L3, and each of the proxies can reach each other over a flat L3.

This all “just works” within a single Kubernetes cluster, precisely because of the flat L3-ness of Kubernetes intra-cluster networking.

Multi-cluster communication


But what if you need workloads running in more than one cluster to communicate?

If you are lucky, all of your clusters share a common, flat L3. This may be true in an on-prem situation, but often is not. It will almost never be true in a multi-cloud/hybrid cloud situation.

Often the solution proposed involves maintaining a complicated set of L7 gateway servers:


This architecture introduces a great deal of administrative complexity. The servers have to be federated together, connectivity between them must be established and maintained, and L7 static routes have to be kept up. As the number of clusters increases, this becomes increasingly challenging.

What if we could get a set of workloads, no matter where they are running, to share a common flat L3 domain:


The green pods could reach each other over a flat L3 Domain.

The red pods could reach each other over a flat L3 Domain.

A pod attached to both the red and green domains could reach the green pods over the green flat L3 domain and the red pods over the red one.

This points the way to a solution to the problem of stretching a single service mesh with a single control plane across workloads running in different clusters/clouds/premises, etc.:


An instance of Istio could be run over the red vL3, and a separate Istio instance could be run over the green vL3.

Then the red pods are able to access the red Istio instance.

The green pods are able to access the green Istio instance.

The red/green pod can access both the red and the green Istio instances.

The same could be done with the service mesh of your choice (such as Linkerd, Consul, or Kuma).

Network Service Mesh benefits


Network Service Mesh itself does not provide traditional L7 services. It provides the complementary service of a flat L3 domain that individual workloads can connect to, so that a traditional service mesh can do what it does *better* and more *easily* across a broader span.

Network Service Mesh also enables other beneficial and interesting patterns. It allows for multi-service mesh, the capability for a single pod to connect to more than one service mesh simultaneously.

And it allows for “multi-corp extra-net:” it is sometimes desirable for applications from multiple companies to communicate with one another on a common service mesh. Network Service Mesh has sophisticated identity federation and admissions policy features that enable one company to selectively admit the workloads from another into its service mesh.

Source: cisco.com

Monday, 16 May 2022

Get Ready to Crack Cisco 500-301 CCS Exam with 500-301 Practice Test

Cisco 500-301 CCS Exam Description:

The Cisco Cloud Collaboration Solutions (CCS) exam (500-301) is a 60-minute, 45-55 question assessment that tests a candidate's knowledge of the technical skills needed by a sales engineer to design and sell Cisco cloud collaboration solutions.

Cisco 500-301 Exam Overview:

  • Exam Name- Cisco Cloud Collaboration Solutions
  • Exam Number- 500-301 CCS
  • Exam Price- $300 USD
  • Duration- 60 minutes
  • Number of Questions- 45-55
  • Passing Score- Variable (750-850 / 1000 Approx.)
  • Recommended Training- Cisco SalesConnect
  • Exam Registration- PEARSON VUE
  • Sample Questions- Cisco 500-301 Sample Questions
  • Practice Exam- Cisco Video Collaboration Practice Test

Saturday, 14 May 2022

What is Container Scanning (And Why You Need It)

I want to share my experience using vulnerability scanners and other open-source projects for security. First, container scanning is needed to make an app and its solution secure and safe. The central task of container scanning is to scan OS packages and programming-language dependencies. Security scanning helps detect common vulnerabilities and exposures (CVEs). A modern, proactive security approach integrates container scanning into CI/CD pipelines, helping detect and fix vulnerabilities in code, containers, and IaC configuration files before release or deployment.

How does it work?

Scanners pull the image from the Docker registry and analyze each layer. On its first run, a scanner downloads its vulnerability database; the community (security specialists, vendors, etc.) continuously identifies, defines, and adds publicly disclosed cybersecurity vulnerabilities to that catalog. Keep in mind that when you run a scanner on your server or laptop, it can take some time to update its database.
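In essence, the matching step is a lookup of each discovered package version against the vulnerability catalog. A minimal sketch of that idea in JavaScript, with invented sample data (real databases such as NVD track affected version *ranges*, not single versions, so this is deliberately simplified):

```javascript
// Minimal sketch of the matching step a scanner performs.
// All IDs and package data below are invented for illustration.

const vulnerabilityDb = [
  { id: 'CVE-2099-1111', pkg: 'glibc', affected: '2.31', severity: 'High' },
  { id: 'CVE-2099-2222', pkg: 'openssl', affected: '1.1.1k', severity: 'Medium' },
];

// Packages a scanner might extract from an image layer's package manifest.
const installedPackages = [
  { name: 'glibc', version: '2.31' },
  { name: 'openssl', version: '3.0.2' },
];

// Report every installed package whose exact version appears in the catalog.
function findVulnerabilities(packages, db) {
  return packages.flatMap((p) =>
    db
      .filter((v) => v.pkg === p.name && v.affected === p.version)
      .map((v) => ({ id: v.id, pkg: p.name, version: p.version, severity: v.severity }))
  );
}

console.log(findVulnerabilities(installedPackages, vulnerabilityDb));
```

Here only glibc 2.31 matches; openssl 3.0.2 does not, because the catalog entry covers a different version.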

Usually, scanners and other security tools use multiple resources for their database: 

◉ Internal database 

◉ National Vulnerability Database (NVD) 

◉ Sonatype OSS Index 

◉ GitHub Advisories 

◉ Scanners also can be configured to incorporate external data sources (e.g., https://search.maven.org/ )

As a result, we see output listing each vulnerability: the affected component or library, the vulnerability ID, and the severity level (Unknown, Negligible, Low, Medium, High), often alongside output in a Software Bill of Materials (SBOM) format. From the output we can see, or write to a file, the package version in which each vulnerability was fixed. This information helps when changing or updating packages, or rebasing the image on a secure one.

Comparing Trivy and Grype

I chose to compare two different open-source vulnerability scanners. Trivy and Grype are both comprehensive scanners for vulnerabilities in container images, file systems, and Git repositories. For the scanning and analysis, I chose the Debian image, as it’s more stable for production (greetings to Alpine).


Part of the Grype output

Part of the Trivy output

A couple of advantages of Trivy are that 1) it can scan Terraform configuration files, and 2) its output format (a table by default) is more readable, thanks to colored output and table cells that summarize each vulnerability with a link to its full description.

Both projects can write output in JSON and XML using templates. This is beneficial when integrating the scanners into CI/CD, or when using the report in another custom workflow. However, Trivy’s output is more informative, thanks to the vulnerability abstract and the extra links with descriptions.
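As a concrete illustration of a CI/CD integration, a pipeline step could parse the JSON report and fail the build when severe findings are present. The report shape below mirrors Trivy's JSON layout (a `Results` array with nested `Vulnerabilities`), but treat the exact field names as an assumption and verify them against your scanner version:

```javascript
// CI gate sketch: flag the build when the scan report contains High- or
// Critical-severity findings. The sample report structure is modeled on
// Trivy-style JSON output; the CVE IDs are invented for illustration.

const sampleReport = {
  Results: [
    {
      Target: 'debian:11 (debian 11.3)',
      Vulnerabilities: [
        { VulnerabilityID: 'CVE-2099-0001', PkgName: 'libfoo', Severity: 'HIGH' },
        { VulnerabilityID: 'CVE-2099-0002', PkgName: 'libbar', Severity: 'LOW' },
      ],
    },
  ],
};

// Collect every finding at or above the chosen severity threshold.
function severeFindings(report, threshold = ['HIGH', 'CRITICAL']) {
  return (report.Results || []).flatMap((r) =>
    (r.Vulnerabilities || []).filter((v) => threshold.includes(v.Severity))
  );
}

const severe = severeFindings(sampleReport);
if (severe.length > 0) {
  console.error(`Found ${severe.length} severe finding(s)`);
  // In a real pipeline this is where you would fail the build, e.g. process.exit(1).
}
```

The same gate works for any scanner whose report you can normalize into this shape.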

Part of Trivy output JSON

Additional features


◉ You can scan private images and self-hosted container registries.

◉ Filtering vulnerabilities is a feature of both projects. Filtering can help highlight critical issues or find a specific vulnerability by ID. A recent example is CVE-2021-44228 (Log4Shell), which many security specialists and DevOps engineers searched for because it affects Log4j, a common Java logging library reused in many other projects.

◉ You can integrate vulnerability scanners into Kubernetes.

◉ The Trivy kubectl plugin allows scanning images running in a Kubernetes pod or deployment.
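The filter-by-ID workflow can also be reproduced directly on a JSON report. Here is a small sketch that pulls out a single CVE across all scanned targets, the way teams hunted for CVE-2021-44228; the report shape is again an assumption modeled on Trivy-style output:

```javascript
// Find a specific vulnerability ID across all targets in a scan report.
// Report structure is a hedged, Trivy-like shape; CVE-2099-0003 is invented.

const report = {
  Results: [
    {
      Target: 'app.jar',
      Vulnerabilities: [
        { VulnerabilityID: 'CVE-2021-44228', PkgName: 'log4j-core', Severity: 'CRITICAL' },
      ],
    },
    {
      Target: 'base-image',
      Vulnerabilities: [
        { VulnerabilityID: 'CVE-2099-0003', PkgName: 'libbaz', Severity: 'MEDIUM' },
      ],
    },
  ],
};

// Return each match annotated with the target it was found in.
function findById(rep, id) {
  return (rep.Results || []).flatMap((r) =>
    (r.Vulnerabilities || [])
      .filter((v) => v.VulnerabilityID === id)
      .map((v) => ({ target: r.Target, ...v }))
  );
}

console.log(findById(report, 'CVE-2021-44228'));
```

Annotating each hit with its target makes it easy to see which image or artifact needs the patched library.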

KubeClarity


There is a tool for detection and management of Software Bill Of Materials (SBOM) and vulnerabilities called KubeClarity. It scans both runtime K8s clusters and CI/CD pipelines for enhanced software supply chain security.

KubeClarity vulnerability scanner integrates with the scanners Grype (that we observed above) and Dependency-Track.

KubeClarity Dashboard


Based on my experience, I saw these advantages in KubeClarity:

◉ Helpful graphical user interface
◉ Filtering capabilities:
    ◉ Packages by license type
    ◉ Packages by name, version, language, application resources
    ◉ Severity by level (Unknown, Negligible, Low, Medium, High)
    ◉ Fix Version

Source: cisco.com