
Thursday, 27 June 2024

Cisco API Documentation Is Now Adapted for Gen AI Technologies

Developer experience changes rapidly. Many developers and the Cisco DevNet community utilize Generative AI tools and language models for code generation and troubleshooting.

Better data = better model completion

The main challenge for GenAI users is finding valid data for their prompts or vector databases. Developers and engineers need to pay careful attention to the data they plan to use in LLM/GenAI interactions.

OpenAPI documentation is now available to download


OpenAPI is a specification that defines a standard way to describe RESTful APIs, including endpoints, parameters, request/response formats, and authentication methods; an OpenAPI document describes a specific API in that format, promoting interoperability and ease of integration.

We at Cisco DevNet care about the developer experience and want to make working with Cisco APIs efficient, with minimal development and testing cost.

You can find links to the OpenAPI documentation in JSON/YAML format on the OpenAPI Documentation page, or search for the related product API and navigate to the API Reference -> Overview section in the left-side menu.

Note: Some API documentation can contain multiple OpenAPI Documents

Ways you can use the related OpenAPI documentation as part of a prompt or RAG pipeline (a minimal sketch follows this list):

  • Construct code or script that utilizes related Cisco API
  • Find related API operations or ask to fix existing code using the information in the API documentation
  • Create integrations with Cisco products through API
  • Create and test AI agents
  • Utilize related Cisco OpenAPI documentation locally or using approved AI tools in your organization.
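
For example, here is a minimal Python sketch of the first idea: loading a downloaded OpenAPI document and embedding it in a prompt. The file name and the ask_llm() call are placeholders for whichever spec and LLM client you use; they are not real Cisco or library names.

```python
# Minimal sketch: read a downloaded OpenAPI document and embed it in a prompt.
# The file name and ask_llm() are placeholders, not real Cisco artifacts.
import json

with open("umbrella-deployments-openapi.json") as spec_file:  # hypothetical file
    spec = json.load(spec_file)

prompt = f"""Based on the following API documentation, write a Python script
that lists roaming computers and tags them using the Umbrella API.

API documentation:
{json.dumps(spec, indent=2)}
"""

# Send `prompt` to the model of your choice, for example through an AI tool
# approved by your organization. ask_llm() is a stand-in, not a real library call.
# answer = ask_llm(prompt)
```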

Structured vs Unstructured data


I’ve compared two LLM model completions with a prompt that contains two parts. The first part of the prompt was the same and contained the following information:

Based on the following API documentation, please write step-by-step instructions that can help automatically tag roaming computers using Umbrella API.
High-level workflow description:

  1. Add API Key
  2. Generate OAuth 2.0 access token
  3. Create tag
  4. Get the list of roaming computers and identify related ‘originId’
  5. Add tag to devices.

API documentation:

Second part:

In one case, the second part contained data copied and pasted directly from the documentation pages; in the other, it contained LLM-friendly structured data: the OpenAPI documents pasted in one by one.

Part of CDO OpenAPI documentation

Claude 3 Sonnet model completion. Prompt with OpenAPI documents

Claude 3 Sonnet model completion. Prompt with copy-and-paste data

Benefits of using LLM-friendly documentation as a part of the prompt


I’ve found that the model output was more accurate when we used OpenAPI documents as part of the prompt. The API endpoints provided in each step were more accurate, and the recommendations in sections like “Get List of Roaming Computers” contained better, more optimal instructions and API operations.
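
To make the five-step workflow from the prompt concrete, here is a rough Python sketch of what the generated script tends to look like. It is only an illustration: the endpoint paths and payload field names reflect my reading of the Umbrella OpenAPI documents and should be verified against the documents you download, and the key, secret, and tag name are placeholders.

```python
# Rough sketch of the tagging workflow. Endpoint paths and payload fields are
# illustrative; confirm them against the Umbrella OpenAPI documents you download.
import requests

BASE = "https://api.umbrella.com"
API_KEY, API_SECRET = "YOUR_KEY", "YOUR_SECRET"   # placeholders

# Steps 1-2: exchange the API key/secret for an OAuth 2.0 access token.
token = requests.post(
    f"{BASE}/auth/v2/token",
    auth=(API_KEY, API_SECRET),
    data={"grant_type": "client_credentials"},
).json()["access_token"]
headers = {"Authorization": f"Bearer {token}"}

# Step 3: create a tag.
tag = requests.post(
    f"{BASE}/deployments/v2/tags", headers=headers, json={"name": "finance-laptops"}
).json()

# Step 4: list roaming computers and collect their originId values.
computers = requests.get(
    f"{BASE}/deployments/v2/roamingcomputers", headers=headers
).json()
origin_ids = [c["originId"] for c in computers]

# Step 5: add the tag to the devices (the request body field is illustrative).
requests.post(
    f"{BASE}/deployments/v2/tags/{tag['id']}/devices",
    headers=headers,
    json={"addOrigins": origin_ids},
)
```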

Source: cisco.com

Saturday, 31 December 2022

Get Hands-on in the Cisco Crosswork Automation Sandbox

Cisco Crosswork Network Automation is a microservices platform that brings together streaming telemetry, big data, and model-driven application programming interfaces (APIs) to redefine how service providers conduct network operations. Cisco Crosswork Network Automation offers a platform to collaborate, and build an application ecosystem around on-box innovation.

The Cisco Crosswork Network Automation product suite is a highly scalable and efficient operations automation framework. It enables service providers to quickly deploy intent-driven, closed-loop operations. You can plan, implement, run, monitor, and perfect your service provider network automation, and gain mass awareness, augmented intelligence, and proactive control for data-driven, outcome-based network automation.

Streamline Network Operation Processes


Automation plays a significant role in helping organizations move more quickly by streamlining operational processes such as:

◉ Executing workflows at machine speed with high operational efficiency and repeatable quality
◉ Bridging and synchronizing business and Information Technology (IT) processes to cut gaps and improve customer experience
◉ Supplying analytics to improve decision-making and shorten fault resolution times

Lab, Test, and Build in the New Sandbox


Now you can lab, test and build with the new Cisco Crosswork Automation Sandbox. This new sandbox lets you:

◉ Monitor key performance indicators (KPIs) in real time
◉ Prepare network changes triggered by changes in KPIs
◉ Roll out these changes automatically
◉ Run automated change-impact and security analysis


Production Crosswork Suite within the Sandbox


You will find a “production” Crosswork suite deployed to manage the multi-platform network within the sandbox lab. This network is made up of:

◉ Cisco Crosswork cluster
◉ Cisco Crosswork Data Gateway (CDG)
◉ Cisco Network Service Orchestrator (NSO)
◉ Cisco IOS XE/XR routers

Included in the sandbox is a new use case that will help you understand the Health Insights and Change Automation applications. In this scenario, we want to showcase how to attach and detach devices from the Crosswork Data Gateway (CDG). As part of the scenario, we will also showcase how to change the credentials at the device level.


◉ Scenario 1: Device Level Management: Showcase how to attach and detach the devices from Crosswork Data Gateway (CDG). As a part of the scenario, we will also highlight how to change the credentials at the device level

◉ Scenario 2: Health Insights Application Overview: See how Cisco Crosswork Health Insights offers real-time, telemetry-based Key Performance Indicator (KPI) monitoring and intelligent alerting.

◉ Scenario 2A: Create and enable KPI profiles: In this scenario, KPIs are provisioned on IOS-XR devices via a KPI Profile. The KPIs can be either GNMI, MDT, or SNMP protocol based. We can then enable the KPIs and verify that the respective data are being collected and visually presented on Health Insights

◉ Scenario 3: Network Automation Application Overview: Learn how to codify workflows using parameterized Plays and stitch them into Playbooks for execution in a step-by-step or single-step fashion.

◉ Scenario 3A: Playbook execution: Now that we have our code, let us define an automation task to achieve the intended network states in Change Automation using Playbooks.

Source: cisco.com

Sunday, 28 August 2022

New Learning Labs for NSO Service Development

Getting started with network automation can be tough. It is worth the effort, though, when a product like Cisco Network Services Orchestrator (NSO) can turn your network services into a powerful orchestration engine. Over the past year, we have released a series of learning labs that cover the foundational skills needed to develop with NSO:

◉ Learn NSO the Easy Way

◉ Yang for NSO

◉ XML for NSO

Now we are proud to announce the final piece of the puzzle. We’re bringing it all together with the new service development labs for NSO. If this is your first time hearing about Cisco NSO and service development, let’s review some of the context.

Why change is the only constant

Network programmability has been enhancing our networking builds, changes, and deployments for several years now. For the most part, this was inspired by Software Defined Networks – i.e., networks based on scripting methods, using standard programming languages to control and monitor your network device infrastructure.

Software-defined networking principles can deliver abstractions of existing network infrastructure. This enables faster service development and deployment. Standards such as NETCONF and YANG are currently the driving force behind these abstractions, and are enabling a significant improvement in network management. Scripting can take out a lot of laborious and repetitive tasks. However, it may still have shortfalls, as it can focus on single devices, one vendor, or one platform.

Service orchestration simplifies network operations

Service orchestration simplifies network operations and management of network services. Instead of focusing on a particular device and system configuration that builds a network service, only the important inputs are collected. The rest of the steps and processes for delivery are automated. The actual details, such as vendor-specific configurations on network devices and the correct ordering of steps, are abstracted from the user of the service. This results in consistent configurations, prevention of errors and outages, and overall cost reduction of managing a network.

Remove the complexity

With NSO services, a service application maps the input parameters used to create, modify, and delete a service instance into the resulting native commands sent to devices in the network. The input parameters are given by a northbound system, such as a self-service portal that calls NSO via an API (Application Programming Interface), or by a network engineer using any of the NSO user interfaces, such as the NSO CLI.
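
As a preview of what the labs build toward, here is a minimal sketch of a template-based NSO service written with the Python ncs API. The service point name, the template name, and the device/vlan_id leafs are assumed to come from a hypothetical YANG service model; they are illustrative, not part of a shipped package.

```python
# Minimal sketch of a template-based NSO service callback (Python ncs API).
# The service point, template, and leaf names belong to a hypothetical YANG model.
import ncs
from ncs.application import Service


class ServiceCallbacks(Service):
    @Service.create
    def cb_create(self, tctx, root, service, proplist):
        # Map the service input parameters to template variables.
        variables = ncs.template.Variables()
        variables.add("DEVICE", service.device)      # leafs from the service model
        variables.add("VLAN_ID", service.vlan_id)

        # Apply the XML config template; NSO renders the native device commands.
        template = ncs.template.Template(service)
        template.apply("my-vlan-service-template", variables)


class Main(ncs.application.Application):
    def setup(self):
        self.register_service("my-vlan-servicepoint", ServiceCallbacks)

    def teardown(self):
        pass
```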


NSO Service Development Module


In this new NSO learning lab you will learn how NSO services simplify network operations, how they work, and how to develop a template-based service. You will also use Python for scripting and service development, and to develop nano services. The module is broken into three sections that will guide you through use cases of NSO service development.

◉ Introduction to NSO Service Development – How NSO services simplify network operations, how they work, and how to develop a template-based service

◉ Python Scripts and NSO Service Development – How to use Python scripting in NSO service development

◉ NSO Nano Service Development – How to develop nano services in NSO


Try it yourself now


You can find the new NSO Service deployment module in the NSO Basics for Network Operations Learning Track. All these new learning labs can be run and tested in the NSO DevNet reservation sandbox.

One of the things I embrace as an engineer is that change will happen. It might happen overnight, or over an extended period of time. But, it will happen. The only constant in the networking and software industry is ‘change.’ Let’s embrace this!

Source: cisco.com

Saturday, 13 August 2022

First Code… Then Infrastructure as Code… Now Notes as Code!

First, let me say that how we take notes and what tools we use are admittedly a personal preference and decision. Hopefully, we are taking notes at all, however!

Most of us are creatures of habit and comfort – we want it simple and effective. When we put that developer hat on as part of our DevOps/SRE or AppDev roles it’s optimal when we can combine our code development environment, or IDE, with a tool that we take notes in. I’m sure most of us are using Microsoft’s Visual Studio Code app as we write Python or Go-based scripts and applications during our network programming and automation work. I probably knocked out 4,500 lines of Python in support of the CiscoLive Network Operations Center (NOC) automation earlier this summer and VS Code was integral to that.

Microsoft Visual Studio Code with a CiscoLive NOC Python Script

You’re probably familiar with VS Code’s strong integration with git from your local development environment and the ability to synchronize with remote GitHub repositories. It’s a great feature to ensure version control, provide code backup storage, and encourage collaboration with other developers.

GitHub with a CiscoLive NOC Software Repository

I was encouraged to find an extension to VS Code that follows the concept of ‘Docs as Code’. If you’re not familiar, I’d encourage you to follow my esteemed Developer Relations colleague, Anne Gentle, who is leading much innovation in this space. Anne describes this concept in her GitHub repo.

The extension I use is called Dendron. It is more officially known as an open-source document management system. It allows for hierarchical documentation and note-taking. It uses the same, familiar markdown concepts for text formatting, document linking, and image references as you would use with GitHub, the Webex messaging app, or the Webex API. You can journal and have your thoughts organized in daily buckets. Document templates are supported; I find the supplied meeting-notes template pretty useful and extensible. As proof of Dendron’s flexibility, I wrote this blog in Dendron before passing it over to the publication team!

VS Code with Dendron Extension: Note Taking Panel with Preview

I appreciate the hierarchical model of taking notes. I have sections for my team notes, my projects, the partners and customers I’m working with, and one-on-one meeting notes. The hierarchy works down from there. For instance, this note is stored in the VS Code workspace for Dendron, and its vault, as ‘MyProjects.blogs.Notes as Code.md’.  I also have a ‘MyProjects.PiK8s.md’ for a Kubernetes environment on a cluster of Raspberry Pis – more on that soon!

Dendron is capable of efficiently and quickly searching and managing tens of thousands of notes. When I finish a project, I can refactor it into a different hierarchy for archive. The links within the original note are re-referenced, so I don’t lose continuity!

I’m not ready to do this refactor just yet, but here’s a screensnap of it confirming the movement of the note across hierarchies. I tend to put completed projects in a ‘zARCHIVE’ branch.

Dendron Extension Using Document Refactor Feature

Dendron also supports advanced diagramming with the mermaid visualization syntax. This next image is a linked screen-capture of the Dendron writing panel adjacent to the preview panel where I imagined a workflow to get this blog posted.

Dendron Markdown with Preview Showing mermaid Flow Chart

Network protocol and software inter-process communication can be documented as sequence diagrams also! Here’s my tongue-in-cheek representation of a DHCP process.

```mermaid
sequenceDiagram
participant Client
participant Router
participant DHCP Server
Client->>Router: I need my IP Address (as broadcast)
Router->>DHCP Server: (forwarded) Get next lease
DHCP Server-->>Router: Here's 192.168.1.100
Router-->>Client: You good with 192.168.1.100?
Client->>Router: Yes, thank you
Router->>DHCP Server: We're all set!
```

The markdown and preview behind the scenes looked like this…

Dendron Markdown with Preview Showing mermaid Sequence Diagram

So, How Can I Use This?


An effective way of using VS Code with Dendron would be in concert with the notetaking and documentation you do for your git repos. Since Dendron notes are effectively text, you can sync them with your git repo and remote GitHub publication as your README.md files, LICENSE.md and CONTRIBUTING.md, which should make up the foundation of your documented project on GitHub.

Source: cisco.com

Sunday, 8 May 2022

Using CI/CD Pipelines for Infrastructure Configuration and Management

Continuous Integration/Continuous Delivery, or Continuous Deployment, pipelines have been used in the software development industry for years. For most teams, the days of manually taking source code and manifest files and compiling them to create binaries or executable files and then manually distributing and installing those applications are long gone. In an effort to automate the build process and distribution of software as well as perform automated testing, the industry has continuously evolved towards more comprehensive pipelines. Depending on how much of the software development process is automated, pipelines can be categorized into different groups and stages:

◉ Continuous Integration is the practice of integrating code that is being produced by developers. On medium to large software projects it is common to have several developers, or even several teams of developers, work on different features or components at the same time. Taking all this code and bringing it to a central location or repository is regularly done using a git-based version control system. When the code is merged into a branch, on an hourly, daily, weekly, or whatever cadence the development team follows, simple to complex tests can be set up to validate the changes and flush out potential bugs at a very early stage. When performed in an automated fashion, all these steps make up a continuous integration pipeline.

◉ Continuous Delivery takes the pipeline to the next level by adding software building and release creation and delivery. After the software has been integrated and tested in the continuous integration part of the pipeline, continuous delivery adds additional testing and has the option to deploy the newly built software packages in a sandbox or stage environment for close monitoring and additional user testing. Similar to continuous integration, all steps performed in the continuous delivery part of the pipeline are automated.

◉ Continuous Deployment takes the pipeline to its next and last level. By this stage, the application has been integrated, tested, built, tested some more, deployed in a stage environment, and tested even more. The continuous deployment stage takes care of deploying the application in the production environment. Several different deployment strategies are available, with different risk factors, cost considerations, and complexity. For example, in the basic deployment model, all application nodes are updated at the same time to the new version. While this deployment model is simple, it is also the riskiest: it is not outage-proof and does not provide easy rollbacks. The rolling deployment model, as the name suggests, takes an incremental approach to updating the application nodes: a certain number of nodes are updated in batches (see the toy sketch after this list). This model provides easier rollback and is less risky than the basic deployment, but it requires that the application run with both new and old code at the same time. In applications that use a micro-services architecture, this last requirement must be given extra attention. Several other deployment models are available, including canary, blue/green, A/B, etc.
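
As a toy illustration of the rolling model just described (not tied to any particular platform), the sketch below updates nodes in small batches and rolls back only the failing batch. The node objects and their deploy(), healthy(), and rollback() methods are hypothetical.

```python
# Toy illustration of a rolling deployment: update nodes in batches so old and
# new versions co-exist, and roll back only the batch that fails health checks.
# The node objects and their deploy()/healthy()/rollback() methods are hypothetical.
def rolling_deploy(nodes, new_version, batch_size=2):
    for start in range(0, len(nodes), batch_size):
        batch = nodes[start:start + batch_size]
        for node in batch:
            node.deploy(new_version)
        if not all(node.healthy() for node in batch):
            # Easy rollback: only the current batch needs to be reverted.
            for node in batch:
                node.rollback()
            raise RuntimeError("Roll-out halted: batch failed health checks")
```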

The CI/CD pipeline component of GitLab CE

Why use CI/CD pipelines for infrastructure management


Based on the requirements of the development team, software development pipelines can take different forms and use different components. Version control systems are usually git based these days (GitHub, GitLab, Bitbucket, etc.). Build and automation servers such as Jenkins, drone.io, and Travis CI, to name just a few, are also popular components of the pipeline. The variety of options and components makes pipelines very customizable and scalable.

CI/CD pipelines have been developed and used for years, and I think it is finally time to consider them for infrastructure configuration and management. The same advantages that made CI/CD pipelines indispensable to any software development enterprise also apply to infrastructure management. Those advantages include:

◉ automation at the forefront of all steps of the pipeline

◉ version control and historical insight into all the changes

◉ extensive testing of all configuration changes

◉ validation of changes in a sandbox or test environment prior to deployment to production

◉ easy roll-back to a known good state in case an issue or bug is introduced

◉ possibility of integration with change and ticketing systems for true infrastructure Continuous Deployment

I will demonstrate how to use Gitlab CE as a foundational component for a CI/CD pipeline that manages and configures a simple CML simulated network. Several other components are involved as part of the pipeline:

◉ pyATS for creating and taking snapshots of the state of the network both before and after the changes have been applied (a rough snapshot sketch follows this list)
◉ Ansible for performing the configuration changes
◉ Cisco CML to simulate a 4-node network that will act as the test infrastructure
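
As a rough idea of the snapshot step, the pyATS/Genie sketch below learns a feature before and after a change and diffs the two states. The testbed file and device name are placeholders for the CML topology; in the actual pipeline the pre- and post-change snapshots would run in separate stages.

```python
# Rough sketch of pre/post-change state snapshots with pyATS/Genie.
# testbed.yml and the device name are placeholders for the CML topology.
from genie.testbed import load
from genie.utils.diff import Diff

testbed = load("testbed.yml")
device = testbed.devices["dist-rtr01"]     # hypothetical device name
device.connect(log_stdout=False)

pre_change = device.learn("ospf")          # structured snapshot of OSPF state

# ... in the real pipeline, Ansible applies the configuration change here ...

post_change = device.learn("ospf")
diff = Diff(pre_change.info, post_change.info)
diff.findDiff()
print(diff)                                # shows what changed between snapshots
```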

Simple network simulation in Cisco CML

Stay tuned for a deeper dive


Next up in this blog series we’ll dive deeper into Gitlab CE, and the CI/CD pipeline component.

Source: cisco.com

Monday, 2 May 2022

Securing Your Cloud-Native Application with Cisco App-First Security

We have some exciting news: the popular Application-First Security lab with AWS has been updated, and it is better than ever! It has now been redesigned to follow the Cisco Validated Design “Securing Cloud-Native Applications – AWS Design Guide”. We also have an updated DevNet Sandbox, which you can use to go through this lab. This lab is “ByoAWS”, or bring your own AWS org (unless you are at a proctored Cisco event). That being said, we have a cleanup script that deletes all resources afterwards, so the costs should be minimal when you go through the lab (only a couple of $).


In this lab you’ll deploy the Sock Shop microservices demo application, maintained by Weaveworks and Container Solutions. Sock Shop simulates the user-facing part of an e-commerce website that sells socks. All of the Sock Shop source is on GitHub and you’ll be updating part of the application’s source code in a future portion of the lab.


Cisco Application-First Security


Before we go into the details, let’s take a step back. If you are familiar with Cisco Application-First Security, then you can skip ahead to the updates.

Cisco’s Application-First Security solution enables you to gain visibility into application behavior and increase the effectiveness of security controls by combining the capabilities of best-in-class products, including Cisco Secure Workload, Cisco Secure Cloud Analytics, Cisco Duo Beyond, and Cisco AppDynamics with Secure Application (not yet part of the lab, coming soon!). Key features include:

◉ Closer to the application: Security closer to your application gives you insight and context of your applications so you can easily make intelligent decisions to protect them.

◉ Continuous as the application changes: Application-First Security follows your applications as they change and move, to ensure continuous protection of your digital business.

◉ Adaptive to application dependencies: Security designed to adapt to your application so it can give you granular control and reduce risk by detecting and preventing threats based on overall understanding of your environment.

In the lab you will secure a cloud-native application (i.e., Sock Shop) and public cloud infrastructure using the earlier-mentioned Cisco solutions. You’ll stage the infrastructure, modify and deploy the application, and instrument the security products into the environment. In the process, you’ll get your hands dirty with products and technologies including git, Kubernetes, GitLab, Docker, AWS, and others.

What has been updated?


New: Cisco Validated Design

As mentioned, this lab has now been redesigned to follow the Cisco Validated Design “Securing Cloud-Native Applications – AWS Design Guide”. This lab uses AWS to host the workloads and applications and takes advantage of many of their native services. This diagram shows how the different components are logically connected:


Now this diagram obviously doesn’t really show what the end user might see. Below you see a screenshot of the Sock Shop front end page. When first deployed, no security tools are installed yet!


New: GitLab

The lab has been updated to include GitLab. The deployment of the Kubernetes cluster now works with a GitLab pipeline, to give an example of how this would look in a real-world scenario. Pipelines are the top-level component of continuous integration, delivery, and deployment.

Pipelines are made up of jobs and stages:

◉ Jobs, which define what to do. For example, jobs that compile or test code.

◉ Stages, which define when to run the jobs. For example, stages that run tests after stages that compile the code.

In a YAML file, you can define the scripts and the commands that you want to run. The scripts are grouped into jobs, and jobs run as part of a larger pipeline. You can group multiple independent jobs into stages that run in a defined order. You should organize your jobs in a sequence that suits your application and is in accordance with the tests you wish to perform. To visualize the process, imagine that the scripts you add to jobs are the same as the CLI commands you run on your computer to build, test, and deploy your application.

New: Deployment script

Something else that is new is a deployment bash script that automatically does all of the preparation steps for you. The nice thing about this is that if you only want to do the Secure Workload, Secure Cloud Analytics, or Duo lab section, you can do that now. Previously, the lab was not that modular and took at least 4 hours in total. To use it, all you need to do is run deployinfraforme from the AWS Cloud9 terminal window and make your choice. Obviously, we recommend going through the entire lab, since setting up the Kubernetes cluster is very educational.

Source: cisco.com

Tuesday, 26 April 2022

How To Do DevSecOps for Kubernetes

In this article, we’ll provide an overview of security concerns related to Kubernetes, looking at the built-in security capabilities that Kubernetes brings to the table.

Kubernetes at the center of cloud-native software

Since Docker popularized containers, most non-legacy large-scale systems use containers as their unit of deployment, in both the cloud and private data centers. When dealing with more than a few containers, you need an orchestration platform for them. For now, Kubernetes is winning the container orchestration wars. Kubernetes runs anywhere and on any device—cloud, bare metal, edge, locally on your laptop or Raspberry Pi. Kubernetes boasts a huge and thriving community and ecosystem. If you’re responsible for managing systems with lots of containers, you’re probably using Kubernetes.

The Kubernetes security model

When running an application on Kubernetes, you need to ensure your environment is secure. The Kubernetes security model embraces a defense in depth approach and is structured in four layers, known as the 4Cs of Cloud-Native Security:


1. Cloud (or co-located servers or the corporate datacenter)

2. Cluster

3. Container

4. Code


Security at outer layers establishes a base for protecting inner layers. The Kubernetes documentation reminds us that “You cannot safeguard against poor security standards in the base layers by addressing security at the Code level.”

At the Cloud layer, security best practices are expected of cloud providers and their infrastructure. Working inward to the Cluster layer, cluster components need to be properly secured, as do applications running in the cluster.

At the Container level, security involves vulnerability scanning and image signing, as well as establishing proper container user permissions.

Finally, at the innermost layer, application code needs to be designed and built with security in mind. This is true whether the application runs in Kubernetes or not.

In addition to the 4 C’s, there are the 3 A’s: authentication, authorization, and admission. These measures apply at the Cluster layer. Secure systems provide resource access to authenticated entities that are authorized to perform certain actions.

Authentication


Kubernetes supports two types of entities: users (human users) and service accounts (machine users, software agents). Entities can authenticate against the API server in various ways that fit different use cases:

◉ X509 client certificates
◉ Static tokens
◉ Bearer tokens
◉ Bootstrap tokens
◉ Service account tokens
◉ OpenID Connect tokens

You can even extend the authentication process with custom workflows via webhook authentication.

Authorization


Once a request is authenticated, it goes through an authorization workflow which decides if the request should be granted.

The main authorization mechanism is role-based access control (RBAC). Each authenticated request has an HTTP verb like GET, POST, or DELETE, and authenticated entities have a role that allows or denies the request. Other authorization mechanisms include attribute-based access control (ABAC), node authorization, and webhook mode.
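
As a small illustration, the sketch below creates a read-only Role for pods in a namespace using the kubernetes Python client; the role name and namespace are arbitrary examples, and the same object is more commonly written as a YAML manifest.

```python
# Sketch: create a namespaced read-only Role with the kubernetes Python client.
# The role name and namespace are arbitrary examples.
from kubernetes import client, config

config.load_kube_config()              # or load_incluster_config() inside a pod
rbac = client.RbacAuthorizationV1Api()

pod_reader = client.V1Role(
    metadata=client.V1ObjectMeta(name="pod-reader", namespace="default"),
    rules=[
        client.V1PolicyRule(
            api_groups=[""], resources=["pods"], verbs=["get", "list", "watch"]
        )
    ],
)
rbac.create_namespaced_role(namespace="default", body=pod_reader)
```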

Admission


Admission control is a security measure that sets Kubernetes apart from other systems. When a request is authorized, it still needs to go through another set of filters. For example, an authorized request may be rejected by an admission controller due to quotas or due to other requests at a higher priority. In addition to validation, admission webhooks can also mutate incoming requests as a way of processing request objects for use before reaching the Kubernetes API server.

In the context of security, pod security admission might add an audit notation or prevent the scheduling of a pod.


Secrets management


Secrets are an important part of secure systems. Kubernetes provides a full-fledged abstraction and robust implementation for secrets management. Secrets are stored in etcd—Kubernetes’ state store—which can store credentials, tokens, SSH keys, and any other sensitive data. It is recommended to store small, sensitive data only as Kubernetes Secrets.
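
For example, a small credential can be created programmatically with the kubernetes Python client; values in the Secret's data field are base64-encoded, and the names and password below are placeholders.

```python
# Sketch: store a small credential as a Kubernetes Secret (names are placeholders).
import base64
from kubernetes import client, config

config.load_kube_config()
core_v1 = client.CoreV1Api()

secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="db-credentials"),
    type="Opaque",
    data={"password": base64.b64encode(b"s3cr3t-value").decode()},  # data is base64-encoded
)
core_v1.create_namespaced_secret(namespace="default", body=secret)
```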

Data encryption


When you want to store a large amount of data, consider using dedicated data stores like relational databases, graph databases, persistent queues, and key-value stores. From the vantage point of security, it’s important to keep your data encrypted both at rest (when it is simply sitting in storage) and in transit (when it is sent across the wire). While data encryption is not unique to Kubernetes, the concept must be applied when configuring storage volumes for Kubernetes.

Encryption at rest


There are two approaches to encryption at rest. The first approach uses a data store that encrypts the data for you transparently. The other approach makes the application responsible for encryption, then storing the already-encrypted data in any data store.

Encryption in transit


Eventually, you’ll need to send your data for processing. Because the data is often (necessarily) decrypted at this point, it should be sent over a secure channel. Using HTTPS, SCP, or SFTP for secure transit of data is best practice.

Kubernetes services can be configured with specific ports like 443 for HTTPS.


Managing container images securely


Kubernetes orchestrates your containers. These containers are deployed as images. Many Kubernetes-based systems take advantage of third-party images from the rich Kubernetes ecosystem. If an image contains vulnerabilities, your system is at risk.

There are two primary measures to safeguard your system. First, use trusted image registries, such as Google Container Registry, AWS Elastic Container Registry, or Azure Container Registry. Alternatively, you may run your own image registry using an open-source project like Harbor and curate exactly which trusted images you allow.

The other measure is to frequently scan images for vulnerabilities as part of the CI/CD process.


Defining security policies


Kubernetes and its ecosystem provide several ways to define security policies to protect your systems. Note that the built-in Kubernetes PodSecurityPolicy resource is deprecated and will be removed in Kubernetes 1.25. At the time of this writing, the Kubernetes community is working on a lightweight replacement. However, the current recommendation is to use a robust third-party project—for example, Gatekeeper, Kyverno, or K-Rail—as a policy controller.

Policies can be used for auditing purposes, to reject pod creation, or to mutate the pod and limit what it can do. By default, pods can receive traffic from any source and send traffic to any destination. Network policies allow you to define the ingress and egress of your pods. The network policy typically translates to firewall rules.

Resource quotas are another type of policy, and they’re particularly useful when multiple teams share the same cluster using different namespaces. You can define a resource quota per namespace and ensure that teams don’t try to provision too many resources. This is also important for security purposes, such as if an attacker gains access to a namespace and tries to provision resources (to perform crypto mining, for example).

Monitoring, alerting, and auditing


We have mostly discussed preventative measures thus far. However, a crucial part of security operations is detecting and responding to security issues. Unusual activity could be a sign that an attack is in progress or that a service is experiencing degraded performance. Note that security issues often overlap with operational issues. For example, an attacker downloading large amounts of sensitive data can cause other legitimate queries to time out or be throttled.

You should monitor your system using standard observability mechanisms like logging, metrics, and tracing. Kubernetes provides built-in logging and metrics for its own components. Once a serious problem is discovered, alerts should be raised to the relevant stakeholders. Prometheus can provide metrics monitoring and alerting, while Grafana provides dashboards and visualizations for those metrics. These tools, along with AppDynamics or countless others, can serve as effective Kubernetes monitoring solutions.

When investigating an incident, you can use the Kubernetes audit logs to check who performed what action at a particular time.

Source: cisco.com

Tuesday, 8 February 2022

What DevSecOps Means for Your CI/CD Pipeline

The CI/CD (Continuous Integration/Continuous Deployment) pipeline is a major ingredient of the DevOps recipe. As a DevSecOps practitioner, you need to consider the security implications for this pipeline. In this article, we will examine key items to think about when it comes to DevSecOps and CI/CD.

The type of CI/CD pipeline you choose—whether it’s managed, open source, or a bespoke solution that you build in-house—will impact whether certain security features are available to you out of the box, or require focused attention to implement.

Let’s dive in

Secret management for your CI/CD pipeline

Your CI/CD pipeline has the keys to the kingdom: it can provision infrastructure and deploy workloads across your system. From a security perspective, the CI/CD pipeline should be the only way to perform these actions. To manage your infrastructure, the CI/CD pipeline needs the credentials to access cloud service APIs, databases, service accounts, and more—and these credentials need to be secure.


Managed or hosted CI/CD pipelines provide a secure way to store these secrets. If you build your CI/CD solution, then you’re in charge of ensuring secrets are stored securely. CI/CD secrets should be encrypted at rest and only decrypted in memory, when the CI/CD pipeline needs to use them.

You should tightly lock down access to the configuration of your CI/CD pipeline. If every engineer can access these secrets, then the potential for leaks is huge. Avoid the temptation to let engineers debug and troubleshoot issues by using CI/CD credentials.

Some secrets (for example, access tokens) need to be refreshed periodically. CI/CD pipelines often use static secrets—which have much longer lifetimes, and so don’t need regular refreshing—to avoid the complexities of refreshing tokens.

Injecting secrets into workloads


Cloud workloads themselves also use secrets and credentials to access other resources and services that their functionality depends on. These secrets can be provided in several ways. If you deploy your system as packages using VM images or containers, then you can bake the secrets directly into the image, making them available in a file when the workload runs.

Another approach is to encrypt the secrets and store them in source control. Then, inject the decryption key into the workload, which can subsequently fetch, decrypt, and use the secrets.

Kubernetes allows for secrets that are managed outside of the workload image but exposed as an environment variable or a file. One benefit of secrets as files is that secret rotation doesn’t require re-deploying the workload.
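
Here is a small sketch of the two injection styles from the workload's point of view: reading a secret from an environment variable versus from a mounted file. The variable name and mount path are hypothetical; the file-based variant is what allows rotation without redeployment, since re-reading the file picks up the new value.

```python
# Sketch: consuming an injected secret inside the workload.
# The environment variable name and mount path are hypothetical.
import os
from pathlib import Path

# Style 1: secret exposed as an environment variable via the pod spec.
db_password = os.environ.get("DB_PASSWORD")

# Style 2: secret mounted as a file (e.g. a Kubernetes Secret volume).
# Re-reading the file on use picks up a rotated value without redeploying.
secret_file = Path("/var/run/secrets/app/db-password")
if secret_file.exists():
    db_password = secret_file.read_text().strip()
```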

Infrastructure as code: a security perspective


Infrastructure as code is not only an operational best practice; it is also a security best practice. 

software systems = infrastructure + workloads

When ad hoc changes are made to infrastructure configurations, this drift can introduce security risks. When resources are provisioned without any auditing or governance, it becomes difficult to maintain proper security measures across all resources.

Manage your infrastructure just like you manage your code. Use declarative configurations (like those of  Terraform, AWS CloudFormation, or Kubernetes CRDs). Review and audit every change.

Bring your own security tools


CI/CD pipelines are flexible. Generally speaking, they let you execute a sequence of steps and manage artifacts. The steps themselves are up to you. As a security engineer, you should take advantage of the security tools that already exist in your environment (especially in the cloud). For example, GitHub and GitLab both scan your commits for the presence of secrets or credentials. Some managed CI/CD solutions build in API scanning or application security scans. However, you may also prefer to add tools and checks into the mix.

You could also add static code analysis (like SonarQube) to ensure that code adheres to conventions and best practices. As another example, you may incorporate vulnerability scanning (like Trivy or Grype) into your CI/CD pipeline, checking container images or third-party dependencies for security flaws.


Comprehensive detection and response


Application observability, monitoring, and alerting are fundamental DevOps Day 2 concerns. Although your CI/CD pipeline is not directly involved in these activities, you should use your CI/CD pipeline to deploy the security tools you use for these purposes. From the point of view of the CI/CD pipeline, these are just additional workloads to be deployed and configured.

Your CI/CD pipeline should include early detection of security issues that trigger on every change that affects workloads or infrastructure. Once changes are deployed, you need to run periodic checks and respond to events that happen post-deployment.

In case of faulty CI/CD, break glass


The CI/CD pipeline is a critical part of your system. If your CI/CD is broken or compromised, your application may continue to run, but you lose the ability to make safe changes. Large scale applications require constant updates and changes. If a security breach occurs, you need to be able to shut down and isolate parts of your application safely.

To do so, your CI/CD pipeline must be highly available and deployed securely. Whenever you need to update, rollback, or redeploy your application, you depend on your CI/CD pipeline.

What should you do if your CI/CD pipeline is broken? Prepare in advance for such a case, determining how your team and system will keep operating (at reduced capacity most likely) until you can fix your CI/CD pipeline. For complicated systems, you should have runbooks. Test how you will operate when the CI/CD is down or compromised.

Source: cisco.com

Saturday, 25 September 2021

Automating AWS with Cisco SecureX


The power of programmability, automation, and orchestration

Automating security operations within the public clouds takes advantage of the plethora of capabilities available today and can drive improvements throughout all facets of an organization. Public clouds are built on the power of programmability, automation, and orchestration. Pulling all of these together into a unified mechanism can help deliver robust, elastic, and on-demand services: services that support the largest of enterprises, the smallest of organizations or individuals, and everything in between.

Providing security AND great customer experience

The success of the major public cloud providers is itself a testament to the power of automation. Let’s face it: cybersecurity isn’t getting any easier, and attackers are only getting more sophisticated. When considering the makeup of today’s organizations, as well as those of the future, a few key points are worth consideration.


First, the shift to a significantly remote workforce is here to stay. Post-pandemic, there will certainly be a significant number of employees returning to the office. However, the flexibility so many have gotten used to will likely remain a reality and must be accounted for by SecOps teams.

Secondly, consider physical locations: from manufacturing facilities and office space to branch coffee shops, not everything can go virtual, and we, as security practitioners, are left with a significant challenge. How do we provide comprehensive security alongside a seamless customer experience and top-notch user experience?

Clearly the answer is automation

The SecureX AWS Relay Module consolidates monitoring your AWS environment.

Leveraging the flexibility of Cisco’s SecureX is a great place to begin your organization’s cloud automation journey. Do this by deploying the SecureX AWS Relay Module. This module immediately consolidates monitoring your AWS environment, right alongside the rest of the security tools within the robust SecureX platform. Within the module are three significant components:

◉ Dashboard tiles providing high level metrics around the infrastructure, IAM, and network traffic, as a means of monitoring trends and bubbling up potential issues.

◉ Threat Response, with features that facilitate deep threat hunting capabilities by evaluating connection events between compute instances and remote hosts, while also providing enrichment on known suspicious or malicious observables such as remote IP addresses or file hashes.

◉ Response capabilities allow for the immediate segmentation of instances as a means of blocking lateral spread or data exfiltration, all from within the Threat Response console.

The SecureX enterprise-grade workflow orchestration engine offers low- or no-code options for automating your AWS environment

Customizable automation and orchestration capabilities


The SecureX Relay Module provides some great capabilities; however, there are many operations that an organization needs to perform that fall outside the scope of its native capabilities. To help manage those, and to provide highly customizable automation and orchestration capabilities, there is SecureX Orchestration. This enterprise-grade workflow orchestration engine offers low- or no-code options for automating your AWS environment and many, many more.


SecureX Orchestration operates by leveraging workflows as automation mechanisms that simply go from start-to-end and perform tasks ranging from individual HTTP API calls, to pre-built, drag and drop, operations known as Atomic Actions. These “Atomics” allow for the consumption of certain capabilities without the need to manage the underlying operations. Simply provide the necessary inputs, and they will provide the desired output. These operations can be performed with all the same programmatic logic such as conditional statements, loops, and even parallel operations.

Libraries of built-in Atomics (including for AWS) let you conduct custom operations in your cloud environment through simple drag and drop workflows.

Included with every SecureX Orchestration deployment are libraries of built-in Atomics, including a robust one for AWS. From operations such as getting metrics to creating security groups or VPCs, a multitude of custom operations can be conducted in your cloud environment through simple drag-and-drop workflows. Do you have a defined process for data gathering, or routine operations that need to be performed? By creating workflows and assigning a schedule, all of these operations can be completed with consistency and precision, freeing up time to address additional business-critical operations.


A more effective SecOps team


By combining built-in SecureX Orchestration workflows with additional custom ones critical to your organization’s processes, end-to-end automation of time-sensitive, business-critical tasks can be achieved with minimal development. Used in conjunction with the SecureX AWS Relay module, your organization has at its disposal a fully featured, robust set of monitoring, deployment, management, and response capabilities that can drastically improve the velocity, consistency, and overall effectiveness of any SecOps team.

Thursday, 29 July 2021

How to Pass Cisco 200-901 DEVASC Exam Practice Test

Information technology has transformed our lives entirely. Both organizations and individuals are excited about cloud computing, and leading organizations are interested in engaging skilled IT professionals to implement the latest technologies to better their business operations. The 200-901 DEVASC is the exam you need to take to achieve the Cisco Certified DevNet Associate certification, which will confirm your skills in automation, cloud computing, and network infrastructure and will qualify you for job profiles such as software developer, DevOps engineer, and automation specialist. Moreover, you will stand out from other cloud computing professionals, as your skills will be confirmed by one of the top vendors of the most sought-after IT certifications in the world: Cisco.

All the Detailed Information of Cisco 200-901 DEVASC Exam

Cisco 200-901 is indeed essential for your career as it can help you acquire advanced skills in software development and Automation. If you register to take this exam, you will be examined on the following topics:

  • Software Development and Design (15%)
  • Understanding and Using APIs (20%)
  • Cisco Platforms and Development (15%)
  • Application Deployment and Security (15%)
  • Infrastructure and Automation (20%)
  • Network Fundamentals (15%)

When it comes to the prerequisites for this exam, they are simple. Cisco does state that applicants should have a profound knowledge of the topics assessed by the Cisco 200-901 exam. Also, your chances of passing the exam are higher if you have one year of work experience as a software developer and prior experience with Python programming.

When it comes to the specifics of this certification, applicants will have to answer 90-110 questions in two hours. Hence, you need a solid understanding of all the exam topics if you want to have sufficient time to answer all the questions. That is why it is essential to review the 200-901 syllabus topics before you start preparation. This will help you understand what preparation resources you need to use to acquire the right skills to pass your exam.

Cisco 200-901 DEVASC Exam Preparation Options

Understanding the exam objectives and their subtopics is the first step that you should take in your preparation journey. After learning these domains and all their topics, the next step involves determining which study materials will provide the understanding required for each topic.

Cisco itself provides a training course and other helpful resources for acquiring the relevant skills to tackle these Cisco exam questions.

Cisco training is important for those aspirants who want to prepare for and pass the test on the first attempt. A certified instructor will give you all the required knowledge to ace the 200-901 exam questions and get a passing score. Apart from the official course, there are other useful Cisco DevNet 200-901 study resources, including e-learning, hands-on labs, and online videos. You can also buy the Cisco Certified DevNet Associate DEVASC 200-901 Official Cert Guide from Amazon or the Cisco Press store.

You can also take advantage of some third-party sources and attempt Cisco 200-901 practice tests. Most applicants choose this option as an excellent addition to their preparation methods to get an even better chance of cracking Cisco 200-901 with an impressive score. With a DEVASC 200-901 practice exam, you can see what score you might get on the actual exam. If you answer some questions incorrectly, you can review the correct answers, go back to the topic, work on it, and improve in that area.

How Will Your Career Benefit from the Cisco 200-901 DEVASC Exam?

With the massive upswing in demand for information technology professionals in today's world, passing the Cisco 200-901 exam and becoming a Cisco Certified DevNet Associate has its advantages. Because of the prevalence of Cisco, it is easy to see why professionals with Cisco certifications are distinguished from those who don't have a certification. Other than standing out from the crowd of non-certified professionals, you also get an opportunity to evaluate and confirm your skill set.

On the other hand, after thorough preparation, you will not only understand software design and development techniques, APIs, Cisco platforms, application deployment, security, automation, and networking, but you will also be able to prove that you are a skilled DevNet professional and give organizations a solid reason to employ you. And if you're already working professionally in the network field, you may see a rise in your salary thanks to the 200-901 exam and the corresponding associate-level Cisco certification. For instance, a network engineer with certified Cisco networking skills can earn almost $75k a year, as reported by Payscale.com.

Conclusion

Professionals who hold a Cisco certification are generally regarded as more reliable and better performing in their careers. The same applies to applicants who have passed the Cisco 200-901 DEVASC exam and achieved the Cisco Certified DevNet Associate certification. So, if you are a software developer, DevOps engineer, system integration programmer, network automation engineer, or any related IT professional, do not delay sitting for the Cisco DevNet Associate exam and validating your expertise in working with Cisco networks and Cisco APIs to move even closer to your professional goal.