Sunday, 1 May 2022

ChatOps: How to Secure Your Webex Bot

This is the second blog in our series about writing software for ChatOps. In the first post of this ChatOps series, we built a Webex bot that received and logged messages to its running console. In this post, we’ll walk through how to secure your Webex bot with authentication and authorization. Securing a Webex bot in this way will allow us to feel more confident in our deployment as we move on to adding more complex features.

[Access the complete code for this post on GitHub here.]

Very important: This post picks up right where the first blog in this ChatOps series left off. Be sure to read the first post of our ChatOps series to learn how to make your local development environment publicly accessible so that Webex webhook events can reach your API. Make sure your tunnel is up and running and webhook events can flow through to your API successfully before proceeding on to the next section. From here on out, this post assumes that you’ve taken those steps and have a successful end-to-end data flow. [You can find the code from the first post on how to build a Webex bot here.]

How to secure your Webex bot with an authentication check

Webex can sign each webhook notification with HMAC-SHA1, keyed with a secret that you provide, to add security to your service endpoint. For the purposes of this blog post, I’ll verify that signature in the web service code as an Express middleware function, which will be applied to all routes. This way, it will be checked before any other route handler is called. In your environment, you might add this to your API gateway (or whatever is powering your environment’s ingress, e.g. Nginx, or an OPA policy).

How to add a secret to the Webhook

Use your preferred tool to generate a random, unique, and complex string. Make sure that it is long and complex enough to be difficult to guess. There are plenty of tools available to create a key. Since I’m on a Mac, I used the following command:

$ cat /dev/urandom | base64 | tr -dc '0-9a-zA-Z' | head -c30

The resulting string was printed to my shell window. Be sure to hold onto it; you’ll use it in a few places in the next few steps.

Now you can use that string to update your Webhook with a PUT request. You can also add it to a new Webhook if you’d like to DELETE your old one:

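If you prefer the command line over Postman, here’s a rough sketch of that update as a cURL call, modeled on the registration request from the first post. The webhook ID and bot token are placeholders for your own values, and the secret is the string you just generated (double-check the Webex webhooks API reference for the exact fields your request needs):

$ curl --location --request PUT "https://webexapis.com/v1/webhooks/$WEBHOOK_ID" \
--header "Authorization: Bearer $BOT_TOKEN" \
--header 'Content-Type: application/json' \
--data-raw '{
    "name": "simple-webhook",
    "targetUrl": "https://tidy-falcon-64.loca.lt",
    "secret": "<the-random-string-you-generated>"
}'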

Webex will now send an additional header with each notification request under the header key x-spark-signature. The header value will be a one-way HMAC hash of the POST body, computed with the secret value that you provided. On the server side, we can compute the same hash. If the API client sending the POST request (ideally Webex) used the same secret that we used, then our resulting string should match the x-spark-signature header value.

How to add an application configuration

Now that things are starting to get more complex, let’s build out an application along the lines of what we can expect to see in the real world. First, we create a simple (but extensible) AppConfig class in config/AppConfig.js. We’ll use this to pull in environment variables and then reference those values in other parts of our code. For now, it’ll just include the three variables needed to power authentication:

◉ the secret that we added to our Webhook
◉ the header key where we’ll examine the encrypted value of the POST body
◉ the encryption algorithm used, which in this case is "sha1"

Here’s the code for the AppConfig class, which we’ll add to as our code gets more complex:

// in config/AppConfig.js
import process from 'process';

export class AppConfig {
    constructor() {
        this.encryptionSecret = process.env['WEBEX_ENCRYPTION_SECRET'];
        this.encryptionAlgorithm = process.env['WEBEX_ENCRYPTION_ALGO'];
        this.encryptionHeader = process.env['WEBEX_ENCRYPTION_HEADER'];
    }
}

Super important: Be sure to populate these environment variables in your development environment. Skipping this step can lead to a few minutes of frustration before remembering to populate these values.
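
For example, on macOS or Linux you might export them in the shell session where you run the server. The secret below is a placeholder for the string you generated; the algorithm and header values match what this post uses:

$ export WEBEX_ENCRYPTION_SECRET='<the-random-string-you-generated>'
$ export WEBEX_ENCRYPTION_ALGO='sha1'
$ export WEBEX_ENCRYPTION_HEADER='x-spark-signature'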

Now we can create an Auth service class that will expose a method to run our encrypted string comparison:

// in services/Auth.js
import crypto from "crypto";

export class Auth {
    constructor(appConfig) {
        this.encryptionSecret = appConfig.encryptionSecret;
        this.encryptionAlgorithm = appConfig.encryptionAlgorithm;
    }

    isProperlyEncrypted(signedValue, messageBody) {
        // create an HMAC stream keyed with the configured secret
        const hmac = crypto.createHmac(this.encryptionAlgorithm, this.encryptionSecret);
        // write the POST body into the stream
        hmac.write(JSON.stringify(messageBody));
        // close the stream to make its resulting string readable
        hmac.end();
        // read the encrypted value
        const hash = hmac.read().toString('hex');
        // compare the freshly encrypted value to the POST header value,
        // and return the result
        return hash === signedValue;
    }
}
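
One optional hardening step, which goes beyond what this demo needs: comparing the two strings with === leaks a small amount of timing information, so you could swap the comparison for Node’s crypto.timingSafeEqual. A minimal sketch of that variant (same class, same method signature):

// in services/Auth.js (hardened variant of the comparison)
isProperlyEncrypted(signedValue, messageBody) {
    const hmac = crypto.createHmac(this.encryptionAlgorithm, this.encryptionSecret);
    hmac.write(JSON.stringify(messageBody));
    hmac.end();
    const hash = hmac.read().toString('hex');
    // timingSafeEqual requires equal-length buffers, so check the length first
    const expected = Buffer.from(hash);
    const received = Buffer.from(signedValue);
    return expected.length === received.length &&
        crypto.timingSafeEqual(expected, received);
}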

Pretty straightforward, right? Now we need to leverage this method in a router middleware that will check all incoming requests for authentication. If the authentication check doesn’t pass, the service will return a 401 and respond immediately. I do this in a new file called routes/auth.js:

// in routes/auth.js
import express from 'express'
import {AppConfig} from '../config/AppConfig.js';
import {Auth} from "../services/Auth.js";

const router = express.Router();
const config = new AppConfig();
const auth = new Auth(config);

router.all('/*', async (req, res, next) => {
    // a convenience reference to the POST body
    const messageBody = req.body;
    // a convenience reference to the encrypted string, with a fallback if the value isn’t set
    const signedValue = req.headers[config.encryptionHeader] || "";
    // call the authentication check
    const isProperlyEncrypted = auth.isProperlyEncrypted(signedValue, messageBody);
    if(!isProperlyEncrypted) {
        res.statusCode = 401;
        res.send("Access denied");
        return;
    }

    next();
});

export default router;

All that’s left to do is to add this router into the Express application, just before the handler that we defined earlier. Failing the authentication check will end the request’s flow through the service logic before it ever gets to any other route handlers. If the check does pass, then the request can continue on to the next route handler:

// in app.js

import express from 'express';
import logger from 'morgan';
// ***ADD THE AUTH ROUTER IMPORT***
import authRouter from './routes/auth.js';
import indexRouter from './routes/index.js';

// skipping some of the boilerplate…

// ***ADD THE AUTH ROUTER TO THE APP***
app.use(authRouter);

app.use('/', indexRouter);

// the rest of the file stays the same…

Now if you run your server again, you can test out your authentication check. You can try with just a simple POST from a local cURL or Postman request. Here’s a cURL command that I used to test it against my local service:

$ curl --location --request POST 'localhost:3000' \
--header 'x-spark-signature: incorrect-value' \
--header 'Content-Type: application/json' \
--data-raw '{
    "key": "value"
}'

Because the signature header doesn’t match what the server computes from the body and your secret, the request comes back with a 401 and the "Access denied" message. Running the same request in Postman shows the same result.


Now, if you send a message to your bot through Webex, you should see the Webhook event flow through your authentication check and into the route handler that we created in the first post.

How to add optional authorization


At this point, we can rest assured that any request that comes through came from Webex. But that doesn’t mean we’re done with security! We might want to restrict which users in Webex can call our bot by mentioning it in a Webex Room. If that’s the case, we need to add an authorization check as well.

How to check against a list of authorized users

Webex sends user information with each event notification, indicating the Webex user ID and the corresponding email address of the person who triggered the event (an example is displayed in the first post in this series). In the case of a message creation event, this is the person who wrote the message about which our web service is notified. There are dozens of ways to check for authorization – AD groups, AWS Cognito integrations, etc.

For simplicity’s sake, in this demo service, I’m just using a hard-coded list of approved email addresses that I’ve added to the Auth service constructor, and a simple public method to check the email address that Webex provided in the POST body against that hard-coded list. Other, more complicated modes of authz checks are beyond the scope of this post.

// in services/Auth.js
export class Auth {
    constructor(appConfig) {
        this.encryptionSecret = appConfig.encryptionSecret;
        this.encryptionAlgorithm = appConfig.encryptionAlgorithm;
        // ADDING AUTHORIZED USERS
        this.authorizedUsers = [
           "colacy@cisco.com" // hey, that’s me!
        ];
    }

    // ADDING AUTHZ CHECK METHOD
    isUserAuthorized(messageBody) {
        return this.authorizedUsers.indexOf(messageBody.data.personEmail) !== -1;
    }
// the rest of the class is unchanged
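
A quick aside before wiring this in: if you’d rather not hard-code the list, one small variation (my own sketch, not something from this post) is to feed it in through the AppConfig class from a hypothetical comma-separated WEBEX_AUTHORIZED_USERS environment variable:

// in config/AppConfig.js (hypothetical extra variable)
this.authorizedUsers = (process.env['WEBEX_AUTHORIZED_USERS'] || '')
    .split(',')
    .map(email => email.trim())
    .filter(email => email.length > 0);

// in services/Auth.js, the constructor would then read it from the config
this.authorizedUsers = appConfig.authorizedUsers;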

Just like with the authentication check, we need to add this to our routes/auth.js handler. We’ll add this between the authentication check and the next() call that completes the route handler.

// in routes/auth.js
// …

    const isProperlyEncrypted = auth.isProperlyEncrypted(signedValue, messageBody);
    if(!isProperlyEncrypted) {
        res.statusCode = 401;
        res.send("Access denied");
        return;
    }

    // ADD THE AUTHORIZATION CHECK
    const isAuthorized = auth.isUserAuthorized(messageBody);
    if(!isAuthorized) {
        res.statusCode = 403;
        res.send("Unauthorized");
        return;
    }

    next();
// …

If the sender’s email address isn’t in that list, the bot will send a 403 back to the API client with a message that the user was unauthorized. But that doesn’t really let the user know what went wrong, does it?

User Feedback


If the user is unauthorized, we should let them know so that they aren’t under the incorrect assumption that their request was successful — or worse, wondering why nothing happened. In this situation, the only way to provide the user with that feedback is to respond in the Webex Room where they posted their message to the bot.

Creating messages on Webex is done with POST requests to the Webex API. [The documentation and the data schema can be found here.] Remember, the bot authenticates with the access token that was provided back when we created it in the first post. We’ll need to pass that in as a new environment variable into our AppConfig class:

// in config/AppConfig.js
export class AppConfig {
    constructor() {
        // ADD THE BOT'S TOKEN
        this.botToken = process.env['WEBEX_BOT_TOKEN'];
        this.encryptionSecret = process.env['WEBEX_ENCRYPTION_SECRET'];
        this.encryptionAlgorithm = process.env['WEBEX_ENCRYPTION_ALGO'];
        this.encryptionHeader = process.env['WEBEX_ENCRYPTION_HEADER'];
    }
}

Now we can start a new service class, WebexNotifications, in a new file called services/WebexNotifications.js, which will notify our users of what’s happening in the backend.

// in services/WebexNotifications.js

export class WebexNotifications {
    constructor(appConfig) {
        this.botToken = appConfig.botToken;
    }

    // new methods to go here

}

This class is pretty sparse. For the purposes of this demo, we’ll keep it that way. We just need to give our users feedback based on whether or not their request was successful. That can be done with a single method, called from our two routers: once to indicate authorization failures and once to indicate successful end-to-end messaging.

A note on the code below: To stay future-proof, I'm using Node.js version 17.7, which has fetch enabled via the --experimental-fetch execution flag. If you have an older version of Node.js, you can use a third-party HTTP request library, like axios, in place of any lines where you see fetch used.

We'll start by implementing the sendNotification method, which will take the same messageBody object that we're using for our auth checks:

// in services/WebexNotifications.js…
// inside the WebexNotifications class

   async sendNotification(messageBody, success=false) {
       // we'll start a response by tagging the person who created the message
       let responseToUser = `<@personEmail:${messageBody.data.personEmail}>`;
       // determine if the notification is being sent due to a successful or failed authz check
       if (success === false) {
           responseToUser += ` Uh oh! You're not authorized to make requests.`;
       } else {
           responseToUser += ` Thanks for your message!`;
       }
       // send a message creation request on behalf of the bot
       const res = await fetch("https://webexapis.com/v1/messages", {
           headers: {
               "Content-Type": "application/json",
               "Authorization": `Bearer ${this.botToken}`
           },
           method: "POST",
           body: JSON.stringify({
               roomId: messageBody.data.roomId,
               markdown: responseToUser
           })
       });
       return res.json();
   }
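
As noted above, if you’re on an older Node.js runtime without the experimental fetch support, a third-party HTTP client works just as well. Here’s a rough axios equivalent of the fetch call inside sendNotification (an assumption on my part: you’d run npm install axios and add import axios from 'axios' at the top of the file):

// axios variant of the message-creation request above
const res = await axios.post(
    "https://webexapis.com/v1/messages",
    {
        roomId: messageBody.data.roomId,
        markdown: responseToUser
    },
    {
        headers: {
            "Content-Type": "application/json",
            "Authorization": `Bearer ${this.botToken}`
        }
    }
);
// axios parses the JSON response for you
return res.data;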

Now it’s just a matter of calling this method from within our route handlers. In routes/auth.js we’ll call it in the event of an authorization failure:

// in routes/auth.js

import express from 'express'
import {AppConfig} from '../config/AppConfig.js';
import {Auth} from "../services/Auth.js";
// ADD THE WEBEXNOTIFICATIONS IMPORT
import {WebexNotifications} from '../services/WebexNotifications.js';

// …

const auth = new Auth(config);
// ADD CLASS INSTANTIATION
const webex = new WebexNotifications(config);

// ...

    if(!isAuthorized) {
        res.statusCode = 403;
        res.send("Unauthorized");
        // ADD THE FAILURE NOTIFICATION
        await webex.sendNotification(messageBody, false);
        return;
    }
// ...

Similarly, we’ll add the success version of this method call to routes/index.js. Here’s the final version of routes/index.js once we’ve added a few more lines like we did in the auth route:

// in routes/index.js

import express from 'express'
// Add the AppConfig import
import {AppConfig} from '../config/AppConfig.js';
// Add the WebexNotifications import
import {WebexNotifications} from '../services/WebexNotifications.js';

const router = express.Router();
// instantiate the AppConfig
const config = new AppConfig();
// instantiate the WebexNotification class, passing in our app config
const webex = new WebexNotifications(config);

router.post('/', async function(req, res) {
  console.log(`Received a POST`, req.body);
  res.statusCode = 201;
  res.end();
  await webex.sendNotification(req.body, true);
});

export default router;

To test this out, I’ll simply comment out my own email address from the approved list and then send a message to my bot. Webex then displays the notification from the code above, indicating that I’m not allowed to make the request.


If I un-comment my email address, the request goes through successfully.


Source: cisco.com

Saturday, 30 April 2022

ChatOps: How to Build Your First Webex Bot

In this post, you’ll learn how to create a Webex bot, register a Webhook in Webex, and configure your bot to listen for Webhook events – all with plenty of code examples. Check back for more as we build new use cases that leverage different aspects of automation using chat-driven interfaces.

In the DevOps world, we’re always looking for new ways to drive automation around communication. When we deploy new code, scale our deployments, or manage our feature flags – we want our teams to know about it. The Webex API makes it easy to build announcement flows triggered by successful events in our infrastructure. However, if we can trigger those events from Webex as well, then we’ve entered the world of ChatOps.

More Info: 300-835: Automating Cisco Collaboration Solutions (CLAUTO)

ChatOps is the use of chat clients like Webex Teams, chatbots, and real-time communication tools to facilitate how software development and operation tasks are communicated and executed. Using Webex APIs, we can build bots that allow us to enter commands that manage our infrastructure, trigger approval workflows, deploy code, and much more.

Security Disclaimer

Security is a top concern here at Cisco. In normal application development, security should always be built into the initial steps of getting code up and running. Today, though, we’re going to keep it simple and focus on the basics, holding off on security until the next blog post in our ChatOps series, once we’ve proven an end-to-end connection. That’s where we’ll cover how to authenticate and authorize Webhook requests.

How to create a Webex bot

First, let’s create a Webex bot using the Webex Developer UI.


Webex for Developers has a great step-by-step guide here to help you get up and running.

Some important things to consider:

◉ Think about what you want to name your bot. It should be intuitive, but unique. Depending on how you set up your Webhook, you may be typing the bot’s name a lot, so take that into account.

◉ The secret token that’s auto-generated for your bot is used for authenticating with the Webex API. When you use this token, Webex will treat your bot like a real user who can create messages, join rooms, or be tagged by other users.

◉ Will this bot interact with a lot of people? Will it have a very public presence, or will it only communicate with a few users? The answer to that question may have an impact on how you want to name it, what icon you select, etc.

Once you’ve taken all of that into account and filled out the bot creation form, you’ll be presented with your bot’s details, including the all-important access token.


How to receive Webhook Events locally


Next, you’ll need to host your bot where it can be accessed by Webex via API calls. If you’re developing locally and want to run a server that’s accessible to the internet, the Webex guide recommends localtunnel.me or ngrok. I went with localtunnel.me for my local environment.

$ npm i -g localtunnel
$ lt --port 3000

The command prints a public URL (in my case, https://tidy-falcon-64.loca.lt) that tunnels through to the local port on your machine.


Note: If you’re having trouble running localtunnel via the command line after installing (as a few people have reported here), make sure your PATH includes the directory where NPM installs your binaries. For example, on a Mac, that’s /usr/local/bin. This command might help:

$ npm config set prefix /usr/local
$ npm i -g localtunnel
$ lt --port 3000

How to register a Webhook


Once your internet-accessible endpoint has been set up, you now have a domain that you can use to register a Webex Webhook. Your Webex Webhook will listen to specific events that take place within the Webex platform and notify your web service via HTTP POST requests.

There are multiple ways to register a webhook. Under the hood, however, they all boil down to making your own HTTP POST request. I’ve posted a Postman collection that you can use to make this process a little easier. Fill in your own environment’s variables as you go, and include the bot’s access token in the Authorization header.

My Postman request mirrors the cURL call below: a POST to https://webexapis.com/v1/webhooks with the bot’s access token in the Authorization header and the webhook definition in the JSON body.


Feel free to use whatever technology you like, including good old-fashioned cURL:

curl --location --request POST 'https://webexapis.com/v1/webhooks' \
--header "Authorization: Bearer $BOT_TOKEN" \
--header 'Content-Type: application/json' \
--data-raw '{
    "name": "simple-webhook",
    "targetUrl": "https://tidy-falcon-64.loca.lt",
    "resource": "messages",
    "event": "created",
    "filter": "mentionedPeople=me"
}'

What’s important to note is that Webex will send notifications to the domain that you specify in the targetUrl of your POST request. If you’re using a tunnel into your local environment, list the domain that was given to you when you activated your proxy.

A very impactful part of your Webhook is the filter property. This determines which Webex events are sent to your bot as notifications (and which are filtered out). To keep things simple, my bot is only notified when users send a message that specifically mentions it in a Webex Teams Room.


Webex has a nice, convenient tag for this: "me" uses the authorization token from the request to determine the identity of the user making that request (in this case, our bot), and applies that identity wherever it sees "me" referenced.

Alternatively, you can set a filter that only triggers notifications for direct messages to your bot, as opposed to mentions in Webex rooms. Since the goal of this post is to broaden visibility into the various processes, these examples show interactions in a Webex Teams Room; however, both are equally viable options.

When you send your POST request, Webex will respond with a body that contains an ID for your Webhook. While you can use the Webex API to GET a list of your Webhooks, it might be a good idea to hold onto this, in case you want to quickly update or delete this Webhook in the future. The Postman collection linked above stores the most recently created Webhook ID in an active_webhook environment variable automatically, which then powers the DELETE call in that collection.
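
If you ever misplace that ID, here’s a rough sketch of how you could list your webhooks and delete one straight from cURL; the webhook ID and bot token below are placeholders for your own values:

$ curl --header "Authorization: Bearer $BOT_TOKEN" 'https://webexapis.com/v1/webhooks'
$ curl --request DELETE --header "Authorization: Bearer $BOT_TOKEN" \
"https://webexapis.com/v1/webhooks/$WEBHOOK_ID"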


How to create your bot server


For simple use cases, you may want to use the Webex Node Bot Framework, which is great for quick implementation. In order to get more familiar with the different components involved in this series, we’ll start from scratch, diving into the steps that power your Webex bot.

Getting Started with Express

Let’s set up a web server that can listen for POST requests from the Webex Webhook that we registered above. This doesn’t have to be complicated for now, just something to demonstrate that we’re able to receive requests. For simplicity, we can use the ExpressJS generator, but you can use any web framework or technology that you like.

$ npm i -g express-generator
$ cd where/you/want/your/project
$ express

Since my IDE handles JavaScript Modules a lot better than it handles require statements, I opted to go with a more modern approach for my dependency management. This is totally optional and has no bearing on how you set up your code. However, if you want to follow the code snippets as I’ve laid them out, you’ll want to do the same. The first step is to add the following key/value pair to your package.json file, anywhere in the root of the JSON object:

"type": "module",

A lot of the boilerplate code can be stripped out if you like – we won’t need a favicon, a public/ folder, or a users route handler. Here’s what my code looked like after I stripped a lot of the simple stuff out:

// in app.js

// notice that I changed the require statements to use JS modules import statements
import express from 'express';
import logger from 'morgan';
import indexRouter from './routes/index.js';

const app = express();
app.use(logger('dev'));
app.use(express.json());
app.use(express.urlencoded({ extended: false }));

app.use('/', indexRouter);

// boilerplate error code didn’t change
// …

// **be sure to remember to set app as the default export at the end of the file**
export default app;

Since I’m using JS Modules, I also had to rename the file that the Express app executes on startup from bin/www to bin/www.js, and revise the boilerplate require statements there as well to use import syntax:

// in bin/www.js

/**
 * Module dependencies.
 */

import app from '../app.js';
import _debugger from 'debug';
const debug = _debugger('chatops-webhook:server');
import http from 'http';

// nothing else in this file needed to change

Adding a Route Handler

That takes care of the majority of the boilerplate. At this point, I only have four files in my codebase, despite how many files Express gives me out of the box:

◉ app.js
◉ package.json
◉ bin/www.js
◉ routes/index.js

We’ll want to add a route handler that lets us know when we’ve received a POST request from our Webex Webhook. It can be a simple function that prints the request body to the application console – nothing complicated, just a few lines of code:

// in routes/index.js

import express from 'express'

const router = express.Router();

router.post('/', async function(req, res) {
  console.log(`Received a POST`, req.body);
  res.statusCode = 201;
  res.end();
});

export default router;

Give it a try


You now have all of the important components for receiving message notifications from Webex:

◉ A bot to act as an identity for your Webex interactions
◉ If applicable, a network tunnel to expose your local web service to the public internet
◉ A Webhook set up by your bot to receive Webex notifications
◉ A web service to receive Webex notifications on a POST endpoint

Let’s test it out! To keep things simple for now, create a new room in Webex Teams and add your bot as a member. Next, start typing your message, mentioning your Bot (you can use the @ symbol or type its name) as part of the text. When you hit enter, after a brief pause, you should see a request come through to your running web service, which should log the POST body that it received in its console output:

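The exact payload will vary, but a trimmed, illustrative example of a messages/created notification looks roughly like this (all IDs and values are placeholders, and only the fields this series relies on are shown):

{
    "name": "simple-webhook",
    "resource": "messages",
    "event": "created",
    "filter": "mentionedPeople=me",
    "data": {
        "id": "<message-id>",
        "roomId": "<room-id>",
        "personId": "<person-id>",
        "personEmail": "someone@example.com",
        "created": "2022-04-23T12:00:00.000Z"
    }
}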

Congratulations, you’ve just set up your very own Webex bot!

What’s next


As promised, our next post will walk through the extremely important aspect of securing our bot. We’ll make sure that only Webex can access it and only authorized users can trigger automation. After that, we’ll move on to new and exciting ways that you can automate everyday workflows right from a Webex Teams Room!

Learn, train, and certify in Cisco Collaboration

As you make your way through this ChatOps series, consider validating your skills with a Cisco Certification.

The 300-835 CLAUTO: Automating and Programming Cisco Collaboration Solutions is a 90-minute exam that counts toward three certifications — the CCNP Collaboration, Cisco Certified DevNet Professional, and Cisco Certified DevNet Specialist – Collaboration Automation and Programmability certifications. Check out the CLAUTO exam topics, and you’ll find that 25% of the exam covers Cloud Collaboration technologies. Before we meet again, take some time to browse through the free CLAUTO Study Materials available on the Cisco Learning Network, which will help you solidify today’s ChatOps focus on building your first Webex bot.

Source: cisco.com

Thursday, 28 April 2022

Cisco vAnalytics: Simplifying Your Network Operations

“Change is the only constant in life” – this famous quote by the Greek philosopher Heraclitus is often used in a positive light. Try saying this to a network administrator, however, who constantly has to deal with changes in the network environment, and he will likely frown!

Cisco vAnalytics within an SD-WAN network

Over the past few years, SD-WAN has evolved to securely connect the hybrid workforce of an organization to applications deployed across multiple clouds and data centers. Typically, SD-WAN is built over a variety of transport paths, and it implements application-aware traffic routing to connect users to applications via the optimal transport path. However, there are many moving parts in these underlying transport paths that organizations do not control, and they are often in constant flux. Hence, organizations seek analytics solutions that offer greater visibility into their networks and provide insights that help these organizations take proactive measures to improve application delivery. Cisco vAnalytics is a cloud-hosted SaaS service that aggregates a large volume of telemetry data gathered from various vantage points within an SD-WAN network and produces insights that are otherwise hard to discern from raw monitoring data.   

Cisco launched a new version of its vAnalytics service, and here are its key benefits:

Enhanced Visibility

◉ Quickly assess your overall network and applications health – get a pulse on the quality of application experience and the uptime of your WAN circuits and sites.

◉ Get a perspective into the long-term historical behavior of your application and network performance metrics in order to establish benchmarks and detect deviations.

◉ Compare the performance of your applications and understand ongoing trends such as a drop in the quality of application experience (QoE) and a rise in application usage or latency.

Figure 1. Summary Dashboard showing an aggregated network and application health while drawing attention to problem areas

Faster Diagnosis with Actionable Insights


◉ Quickly detect applications experiencing problems and the magnitude of these problems to assess their overall impact.

◉ Get a multi-dimensional 360-degree view of an application experience alongside its associated network health—both at the aggregate and individual site level—to quickly isolate problem areas and reduce mean time to resolution (MTTR).

◉ Identify if your application or network issues are local to a site or a region, and accordingly narrow your target area to reduce mean time to identification (MTTI).

Figure 2. Application behavior over time with the correlated application-layer and network-layer performance indicators

Proactive Analytics


◉ Facilitate the exchange of telemetry data between Cisco SD-WAN and Microsoft 365 to optimize the delivery of Microsoft productivity applications using Cisco Cloud OnRamp for SaaS capabilities.

◉ Assess the quality of application experience across different application classes and adjust your application-aware routing policies to achieve optimal delivery.

◉ Schedule periodic reports for offline review and analysis in order to further optimize your network.

Figure 3. Application Dashboard with rich insights on application behavior, usage, trends, and distribution by application classes

You do not need to dive into deep waters to discover these nuggets – all the information listed above is available in just a few clicks through intuitive workflows built over a state-of-the-art graphical interface. Furthermore, this analysis can be exported as a password-protected PDF report.

Figure 4. A Sample Site Summary Report

Source: cisco.com

Tuesday, 26 April 2022

How To Do DevSecOps for Kubernetes

In this article, we’ll provide an overview of security concerns related to Kubernetes, looking at the built-in security capabilities that Kubernetes brings to the table.

Kubernetes at the center of cloud-native software

Since Docker popularized containers, most non-legacy large-scale systems use containers as their unit of deployment, in both the cloud and private data centers. When dealing with more than a few containers, you need an orchestration platform for them. For now, Kubernetes is winning the container orchestration wars. Kubernetes runs anywhere and on any device—cloud, bare metal, edge, locally on your laptop or Raspberry Pi. Kubernetes boasts a huge and thriving community and ecosystem. If you’re responsible for managing systems with lots of containers, you’re probably using Kubernetes.

The Kubernetes security model

When running an application on Kubernetes, you need to ensure your environment is secure. The Kubernetes security model embraces a defense in depth approach and is structured in four layers, known as the 4Cs of Cloud-Native Security:

Read More: 350-801: Implementing Cisco Collaboration Core Technologies (CLCOR)

1. Cloud (or co-located servers or the corporate datacenter)

2. Cluster

3. Container

4. Code


Security at outer layers establishes a base for protecting inner layers. The Kubernetes documentation reminds us that “You cannot safeguard against poor security standards in the base layers by addressing security at the Code level.”

At the Cloud layer, security best practices are expected of cloud providers and their infrastructure. Working inward to the Cluster layer, cluster components need to be properly secured, as do applications running in the cluster.

At the Container level, security involves vulnerability scanning and image signing, as well as establishing proper container user permissions.

Finally, at the innermost layer, application code needs to be designed and built with security in mind. This is true whether the application runs in Kubernetes or not.

In addition to the 4 C’s, there are the 3 A’s: authentication, authorization, and admission. These measures apply at the Cluster layer. Secure systems provide resource access to authenticated entities that are authorized to perform certain actions.

Authentication


Kubernetes supports two types of entities: users (human users) and service accounts (machine users, software agents). Entities can authenticate against the API server in various ways that fit different use cases:

◉ X509 client certificates
◉ Static tokens
◉ Bearer tokens
◉ Bootstrap tokens
◉ Service account tokens
◉ OpenID Connect tokens

You can even extend the authentication process with custom workflows via webhook authentication.

Authorization


Once a request is authenticated, it goes through an authorization workflow which decides if the request should be granted.

The main authorization mechanism is role-based access control (RBAC). Each authenticated request has an HTTP verb like GET, POST, or DELETE, and authenticated entities have a role that allows or denies the request. Other authorization mechanisms include attribute-based access control (ABAC), node authorization, and webhook mode.

Admission


Admission control is a security measure that sets Kubernetes apart from other systems. When a request is authorized, it still needs to go through another set of filters. For example, an authorized request may be rejected by an admission controller due to quotas or due to other requests at a higher priority. In addition to validation, admission webhooks can also mutate incoming requests as a way of processing request objects for use before reaching the Kubernetes API server.

In the context of security, pod security admission might add an audit notation or prevent the scheduling of a pod.


Secrets management


Secrets are an important part of secure systems. Kubernetes provides a full-fledged abstraction and robust implementation for secrets management. Secrets are stored in etcd—Kubernetes’ state store—and can hold credentials, tokens, SSH keys, and any other sensitive data. It is recommended to store only small pieces of sensitive data as Kubernetes Secrets.

Data encryption


When you want to store a large amount of data, consider using dedicated data stores like relational databases, graph databases, persistent queues, and key-value stores. From the vantage point of security, it’s important to keep your data encrypted both at rest (when it is simply sitting in storage) as well as in transit (when it is sent across the wire). While data encryption is not unique to Kubernetes, the concept must be applied when configuring storage volumes for Kubernetes.

Encryption at rest


There are two approaches to encryption at rest. The first approach uses a data store that encrypts the data for you transparently. The other approach makes the application responsible for encryption, then storing the already-encrypted data in any data store.

Encryption in transit


Eventually, you’ll need to send your data for processing. Because the data is often (necessarily) decrypted at this point, it should be sent over a secure channel. Using  HTTPS, STCP, or SFTP for secure transit of data is best practice.

Kubernetes services can be configured with specific ports like 443 for HTTPS.


Managing container images securely


Kubernetes orchestrates your containers. These containers are deployed as images. Many Kubernetes-based systems take advantage of third-party images from the rich Kubernetes ecosystem. If an image contains vulnerabilities, your system is at risk.

There are two primary measures to safeguard your system. First, use trusted image registries, such as Google Container Registry, AWS Elastic Container Registry, or Azure Container Registry. Alternatively, you may run your own image registry using an open-source project like Harbor and curate exactly which trusted images you allow.

The other measure is to frequently scan images for vulnerabilities as part of the CI/CD process.


Defining security policies


Kubernetes and its ecosystem provide several ways to define security policies to protect your systems. Note that the built-in Kubernetes PodSecurityPolicy resource is deprecated and will be removed in Kubernetes 1.25. At the time of this writing, the Kubernetes community is working on a lightweight replacement. However, the current recommendation is to use a robust third-party project—for example, Gatekeeper, Kyverno, or K-Rail—as a policy controller.

Policies can be used for auditing purposes, to reject pod creation, or to mutate the pod and limit what it can do. By default, pods can receive traffic from any source and send traffic to any destination. Network policies allow you to define the ingress and egress of your pods. The network policy typically translates to firewall rules.

Resource quotas are another type of policy, and they’re particularly useful when multiple teams share the same cluster using different namespaces. You can define a resource quota per namespace and ensure that teams don’t try to provision too many resources. This is also important for security purposes, such as if an attacker gains access to a namespace and tries to provision resources (to perform crypto mining, for example).

Monitoring, alerting, and auditing


We have mostly discussed preventative measures thus far. However, a crucial part of security operations is detecting and responding to security issues. Unusual activity could be a sign that an attack is in progress or that a service is experiencing degraded performance. Note that security issues often overlap with operational issues. For example, an attacker downloading large amounts of sensitive data can cause other legitimate queries to time out or be throttled.

You should monitor your system using standard observability mechanisms like logging, metrics, and tracing. Kubernetes provides built-in logging and metrics for its own components. Once a serious problem is discovered, alerts should be raised to the relevant stakeholders. Prometheus can provide metrics monitoring and alerting, while Grafana provides dashboards and visualizations for those metrics. These tools, along with AppDynamics or countless others, can serve as effective Kubernetes monitoring solutions.

When investigating an incident, you can use the Kubernetes audit logs to check who performed what action at a particular time.

Source: cisco.com

Monday, 25 April 2022

How Well-integrated Tech Can Boost Your Organization’s Security

What Did We Find?

There was one main question we sought to answer around this key practice: Why would an organization want to integrate its security technologies with the rest of its IT architecture? Unsurprisingly, the main reason was to improve the efficiency of monitoring and auditing.


So with the help of our research partner, Cyentia, we sought to understand more about what types of integrations were most common, how those integrations were achieved, and how those factors played into varying security outcomes.

Buy vs. Build

More than three quarters of respondents would rather buy security technology than build it themselves, especially when it comes to cloud-based solutions. When evaluating technology, the most successful companies prioritize integration with their current tech stack ahead of base product capabilities.

Figure 1: Common approaches to security tech integration among all organizations

Furthermore, if companies stick with a platform of products rather than point solutions, they are more than twice as likely to see successfully integrated security technologies. Yes, as we mentioned in the report, we’re fully aware that this bodes well for Cisco, who offers a well-integrated platform of security products. But, don’t forget, this was a double-blind study – the respondents didn’t know who was asking the questions, and Cisco didn’t know who was being surveyed.

Interestingly, we were surprised to learn there’s virtually no difference in security outcomes between those that buy products with out-of-the-box integrations and those that build integrations on their own. Just under half (~49%) of organizations using either of these approaches report strong integration levels.

You might expect that, for most organizations in most industries, buying products with out-of-the-box integrations would deliver more benefit than building integrations in-house. But, as it turns out, this is not the case. As noted above, the real differentiator was doubling down on a cloud- and platform-based solution, probably with a preferred vendor.

Improving Automation


We also wanted to know if having integrated solutions helped with desired outcomes, such as improved automation. Companies with well-integrated security technologies were seven times more likely to achieve high levels of automation for event monitoring, incident analysis, and incident response processes (4.1% vs. 28.5%).

Figure 2: Effect of tech integration on extent of security process automation

Of course, it’s not just about automation. If you have a well-integrated security stack, you can optimize the work your security and IT teams do, leading to other preferred outcomes including increased efficiency and employee engagement.

Narrowing Your Focus


If you’re looking to integrate your security stack, where should you initially focus?

We asked this question focusing on the five National Institute of Standards and Technology (NIST) functional areas (Identify, Protect, Detect, Respond, Recover). While integrating any of these five functions had positive outcomes, the Identify function had the biggest boost.  

Figure 3: Effect of integrating the NIST CSF Identify function on threat detection capabilities

So, What Do We Recommend?


With security teams stretched thin and ever-evolving threats looming, having a well-integrated security tech stack is a critical step for increasing efficiency and accuracy. But where does your journey towards integration begin?

Based on our survey results, I’d suggest that security teams should:

◉ Investigate automation opportunities: Increased automation is one of the key benefits of an integrated security tech stack. Look for opportunities to automate starting with functions that help identify assets and consider prioritizing those functions when determining where integration can be improved.

◉ Consider buying security technologies, rather than building them yourself: Companies are twice as likely to have a successful security program when they partner with preferred vendors to deliver integrated security solutions. Consider which vendors you regard as “preferred” and include them closely in your security strategy.

◉ Ensure purchasing requirements include security tech integration capabilities: Review your technology RFP requirements to ensure integration with your security stack is included as a core requirement in the new technology selection process. Ability to integrate should be weighted slightly higher than base product capabilities alone.

◉ Look for cloud-based security solutions: The data shows that it’s easier to achieve strong tech integration with cloud-based security products. Where possible, look for cloud security solutions to incorporate into your security stack.

Bottom line: integrated security technology is the best security technology. And I hope our continued research and corresponding recommendations put you on the path to better security outcomes.  

Source: cisco.com

Sunday, 24 April 2022

Security Resilience in EMEA

What makes a successful cybersecurity program and how can organizations improve their resilience in a world that seems increasingly unpredictable? How do we know what actually works and what doesn’t in order to maximize success?

These are the types of burning questions guiding Cisco’s Security Outcomes Study series. In the second edition of the study, Cisco conducted an independent, double-blind survey of over 5,100 IT professionals in 27 countries. This article highlights data from the latest volume to focus on security resilience in the region spanning Europe, Middle East and Africa (EMEA).

The study focuses on a dozen outcomes that contribute to overall security program success. Four of them in particular are crucial for building resilience:

◉ Keeping up with the business (Security should enable, not impede)

◉ Avoiding major incidents (…And their business impacts)

◉ Maintaining business continuity (…Even when disaster strikes)

◉ Retaining talented personnel (You can’t stay on top when top staff won’t stay)

Assessing Security Resilience in EMEA

We calculated an overall resilience score for each surveyed organization based on their ratings for the outcomes listed above. The chart below compares that score across the three global regions. Organizations in the Americas scored a scant 1.7% better than the global average, while EMEA organizations landed about 2% below that mark. And the width of the gray error bars further diminishes those differences. Overall, we simply don’t see huge discrepancies in security resilience at the regional level.

Regional comparison of mean security resilience score

When examining resilience at the country level, however, differences begin to emerge. The next chart shows the proportion of organizations in each country reportedly “excelling” in each of the four outcomes related to security resilience. In other words, about 48% of firms in Saudi Arabia say their security program is doing a great job keeping up with the business. About 37% excel at maintaining business continuity, and so on. So, pick your country of interest and trace its success level across each outcome.

Country-level comparison of reported success levels for security resilience outcomes

Interested in comparing countries in the EMEA region across all 12 security outcomes beyond those shown here for resilience? Download the EMEA spinoff of the Security Outcomes Study, Volume 1.

Perhaps the most interesting aspect of this chart is the comparison it provides among countries. The reported success rates by security professionals in the countries at the top are roughly twice that of those on the bottom. And for the most part, each country maintains its relative position across all outcomes.

The obvious question here is what lies behind these apparent differences in security resilience among countries? Is Saudi Arabia really that much more resilient than Germany? Might German organizations have a more realistic grasp of what it means to be resilient and know there’s a lot of work left to do? Perhaps it’s somewhere between those possibilities or something else altogether.

Whatever the reason, the key takeaway here is that success rates for all countries indicate that organizations aren’t as successful as they’d like to be in the area of security resilience.

Improving Security Resilience in EMEA


How can organizations in the EMEA region improve those outcomes, thereby making their firms more resilient? That’s an excellent question and one we were eager to explore in the Security Outcomes Study. The study revealed five security practices—affectionately referred to as the Fab Five—that boost security program success more than any others. If you’d like a lot more information about the Fab Five and how to maximize their effectiveness, the latest edition of the Security Outcomes Study is the place to go.

The Fab Five: Highly effective practices for achieving security program outcomes

Before we examine how these practices improve resilience, let’s first check how well each country has implemented each of the Fab Five. The chart below mimics the one above for outcomes and is interpreted similarly. Once again, we see Saudi Arabia reporting the strongest implementation of these practices and Germany reporting the lowest. Countries shift around quite a bit beyond that.

Country-level comparison of reported success levels for five leading security practices

As with the outcomes chart, reasons behind these country-level differences are difficult to pinpoint. We suspect there’s a mix of maturity, cultural, and organizational factors at play. But hey, if you have thoughts, we’d love to hear them. Use #SecurityOutcomes on LinkedIn or Twitter to get our attention.

Remember that security resilience score we shared above for the regions? Great, because it’s coming back into play in this next chart. We wanted to test whether practicing the Fab Five actually improved resilience among EMEA organizations participating in our study. As seen in the chart below, that’s a definitive “Yes!”

Organizations that don’t do any of these practices well ranked in the bottom 25% for resilience, whereas those strong in all five reversed that standing and rose into the top 25%!

Effect of implementing five leading security practices on overall resilience score

Resilience has always been critical for cybersecurity. However, the last several years have really driven home the point that organizational defenders must be ready for anything. We hope this analysis demonstrates two things: 1) Organizations in the EMEA region have room for improving security resilience, and 2) It is actually possible to do so.

Source: cisco.com

Saturday, 23 April 2022

Cisco Extends Service Discovery to WAN Using Bonjour and Adds App on Cisco DNA Center


Bring Your Own Device (BYOD) is now common in enterprises, especially in vertical industries like education and healthcare. So service discovery―the ability to automatically detect devices and services on networks and to set policies to safeguard networks―has become vital.  

There are many service-discovery protocols and techniques available that have been used for various use cases. Bonjour uses Multicast Domain Name System (mDNS) as its underlying mechanism to discover the services nearby. Apple developed Bonjour in 2002 to replace AppleTalk. Due to its open standards design and wide adoption, Bonjour/mDNS was integrated with Microsoft Windows 10, Google Android devices, and with Cisco Webex, making it a de facto standard. 

Bonjour was designed for use in a single network (with a single subnet or a single VLAN), such as a home network, where consumer devices like Apple TVs and printers could be discovered by Macbook, iPhones, and iPads. 

With many devices making their way into enterprises, Cisco has extended Bonjour functionality beyond single Layer 2 broadcast domains, to scale and avoid bottlenecks across services-rich enterprise networks and to optimize network bandwidth in the core and access layers. Additionally, Cisco Digital Network Architecture (DNA) Service for Bonjour on Cisco DNA Center also introduces a new dashboard application that shows service discovery gateways connected to the controller and the service instances. It allows network administrators to control which services can be shared across specific network segments. 

Local Area Bonjour

Casting an image or a video stream from an iPhone to a TV requires the iPhone to discover the TV using mDNS so that it can send the file or stream to be cast on the screen. This deployment is called Local Area Bonjour. As shown in Figure 1, a switch could have multiple virtual LANs (VLANs), and by design each of these VLANs maps to a different subnet. In such a scenario, if a service querier (e.g., an iPhone) is present in VLAN A, and a service provider (e.g., an Apple TV) is present in VLAN B―which is a typical enterprise scenario―the querier will be unable to discover the service, as its multicast traffic won’t reach the service provider.

Figure 1. Local Area Bonjour

Cisco introduced the Service Discovery Gateway feature, which enables mDNS to operate across Layer 2 boundaries or different subnets. An mDNS gateway can provide transport for service discovery across Layer 2 boundaries by filtering, caching, and extending services from one Layer 2 domain (subnet) to another. Prior to implementation of this feature, mDNS was limited in scope to within a subnet due to the use of link-local scoped multicast addresses.   

Wide Area Bonjour 


Wide Area Bonjour extends the concept to service providers and service queriers that sit behind different service discovery gateways (for example, in different wiring closets) and need to discover each other (Figure 2). The mDNS gateways are connected to and synchronize services with Cisco DNA Center. The service is shared when another gateway requests it.

Figure 2: Wide Area Bonjour

Cisco’s mDNS gateway solution helps cache services and respond to service queriers on request, enabling the network administrator to configure service policies to control the sharing of services across subnets.  

Using Wide Area Bonjour, network administrators don’t need to bridge these VLANs across network segments anymore, so no service flooding is necessary, thereby reducing the multicast traffic in the core network. This saves a lot of network bandwidth, both in the core and access layers, making the network bandwidth available for other types of traffic while still enabling it to handle service discovery.  

The Cisco Wide Area Bonjour solution eliminates the single Layer 2 domain constraint and expands the scope to enterprise-grade, traditional wired and wireless networks, including overlay networks such as Cisco Software-Defined Access (SD-Access) and industry-standard Border Gateway Protocol (BGP) Ethernet VPN (EVPN) with Virtual Extensible LAN (VXLAN). The Cisco Catalyst 9000 series LAN switches and wireless LAN controllers follow the industry standard, RFC 6762-based mDNS specification to support interoperability with various compatible wired and wireless consumer products in enterprise networks. 

The Cisco DNA Service for Bonjour  


Cisco has now integrated Bonjour service discovery features into Cisco DNA Center. The new Cisco DNA Service for Bonjour features a software-defined, controller-based solution that includes a dashboard that shows the service discovery gateways connected to the controller and the number of service instances in a Wide Area Bonjour topology (Figure 3). It allows network administrators to control which services can be shared across which network segment.

Figure 3. Wide Area Bonjour Application on Cisco DNA Center

The new Cisco DNA Service for Bonjour enables end-to-end service-oriented enterprise networks that augment all the key benefits to zero-configuration mDNS technology. With services and feature-rich user devices proliferating on enterprise networks, Cisco DNA Service for Bonjour can help improve the ability of IT and end-users to access, manage, share, print, and synchronize data regardless of their network boundaries. The seamless integration and security provided by the solution is compelling, providing IT organizations with complete control of access security, role and location-based discovery, and management of devices across the enterprise network.

Source: cisco.com