
Thursday 18 January 2024

How to Use Ansible with CML

How can Ansible help people building simulations with Cisco Modeling Labs (CML)?

Similar to Terraform, Ansible is a common, open-source automation tool often used in Continuous Integration/Continuous Deployment (CI/CD) DevOps methodologies. Both are forms of Infrastructure as Code (IaC), or Infrastructure as Data, that let you render your infrastructure as text files and control it with tools such as Git. The advantages are reproducibility, consistency, speed, and the knowledge that when you change the code, people approve it and it gets tested before it's pushed out to your production network. This paradigm allows enterprises to run their network infrastructure the same way they run their software and cloud practices. After all, the infrastructure is there to support the apps, so why manage the two differently?

Although overlaps exist in the capabilities of Terraform and Ansible, they are very complementary. While Terraform is better at the initial deployment and ensuring ongoing consistency of the underlying infrastructure, Ansible is better at the initial configuration and ongoing management of the things that live in that infrastructure, such as systems, network devices, and so on. 

In a common workflow in which an operator wants to make a change to the network, let's say adding a new network to be advertised via BGP, a network engineer would specify that change in code, or more likely as configuration data in YAML or JSON. In a typical CI workflow, that change would need to be approved by others for correctness and adherence to corporate and security policies. In addition to those eyeball tests, a series of automated tests validates the data and then deploys the proposed change in a test network. Those tests can be run in a physical test network, a virtual test network, or a combination of the two. That flow might look like the following:

[Diagram: CI workflow in which a proposed network change is reviewed, validated in a test network, and then deployed to production]

The advantage of leveraging virtual test networks is profound. The cost is dramatically lower, and the ability to automate testing increases significantly. For example, a network engineer can spin up and configure a new, complex topology multiple times without stale state from old tests compromising the accuracy of the current testing. Cisco Modeling Labs is a great tool for this type of test.

Here's where the Ansible CML Collection comes in. Similar to the CML Terraform integration covered in a previous blog, the Ansible CML Collection can automate the deployment of topologies in CML for testing. The collection has modules to create, start, and stop a topology and the hosts within it, but more importantly, it has a dynamic inventory plugin for getting information about the topology. This matters for automation because topologies can change, or multiple topologies could exist, depending on the tests being performed. If your topology uses Dynamic Host Configuration Protocol (DHCP) and/or CML's PATty functionality, the details of how Ansible reaches each node need to be passed to the playbook.

Let’s go over some of the features of the Ansible CML Collection’s dynamic inventory plugin. 

First, we need to install the collection: 

ansible-galaxy collection install cisco.cml 

Next, we create a cml.yml in the inventory with the following contents to tell Ansible to use the Ansible CML Collection’s dynamic inventory plugin: 

plugin: cisco.cml.cml_inventory
group_tags: network, ios, nxos, router

In addition to specifying the plugin name, we can also define tags that, when found on a device in the topology, add that device to an Ansible group to be used later in the playbook, as the sketch below shows.
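For instance, here is a minimal sketch of a play that targets the "router" group created by those tags (the task is illustrative, and it assumes the cml_facts shown in the output below are exposed as host variables):

- name: Report on every node tagged "router"
  hosts: router
  gather_facts: no
  tasks:
    - name: Print the node name and its CML node definition
      ansible.builtin.debug:
        msg: "{{ inventory_hostname }} is a {{ cml_facts.node_definition }}"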


The plugin also needs to know how to reach the CML server and which lab to target. That information comes from the following environment variables:

  • CML_USERNAME: Username for the CML user
  • CML_PASSWORD: Password for the CML user
  • CML_HOST: The CML host
  • CML_LAB: The name of the lab 

Once the plugin knows how to communicate with the CML server and which lab to use, it can return information about the nodes in the lab: 

ok: [hq-rtr1] => {
    "cml_facts": {
        "config": "hostname hq-rtr1\nvrf definition Mgmt-intf\n!\naddress-family ipv4\nexit-address-family\n!\naddress-family ipv6\nexit-address-family\n!\nusername admin privilege 15 secret 0 admin\ncdp run\nno aaa new-model\nip domain-name mdd.cisco.com\n!\ninterface GigabitEthernet1\nvrf forwarding Mgmt-intf\nip address dhcp\nnegotiation auto\nno cdp enable\nno shutdown\n!\ninterface GigabitEthernet2\ncdp enable\n!\ninterface GigabitEthernet3\ncdp enable\n!\ninterface GigabitEthernet4\ncdp enable\n!\nip http server\nip http secure-server\nip http max-connections 2\n!\nip ssh time-out 60\nip ssh version 2\nip ssh server algorithm encryption aes128-ctr aes192-ctr aes256-ctr\nip ssh client algorithm encryption aes128-ctr aes192-ctr aes256-ctr\n!\nline vty 0 4\nexec-timeout 30 0\nabsolute-timeout 60\nsession-limit 16\nlogin local\ntransport input ssh\n!\nend",
        "cpus": 1,
        "data_volume": null,
        "image_definition": null,
        "interfaces": [
            {
                "ipv4_addresses": null,
                "ipv6_addresses": null,
                "mac_address": null,
                "name": "Loopback0",
                "state": "STARTED"
            },
            {
                "ipv4_addresses": [
                    "192.168.255.199"
                ],
                "ipv6_addresses": [],
                "mac_address": "52:54:00:13:51:66",
                "name": "GigabitEthernet1",
                "state": "STARTED"
            }
        ],
        "node_definition": "csr1000v",
        "ram": 3072,
        "state": "BOOTED"
    }
}
The first IPv4 address found (in order of the interfaces) is used as `ansible_host` to enable the playbook to connect to the device. We can use the cisco.cml.inventory playbook included in the collection to show the inventory. In this case, we only specify that we want devices that are in the “router” group created by the inventory plugin as informed by the tags on the devices: 

mdd % ansible-playbook cisco.cml.inventory --limit=router 

ok: [hq-rtr1] => {
    "msg": "Node: hq-rtr1(csr1000v), State: BOOTED, Address: 192.168.255.199:22"
}
ok: [hq-rtr2] => {
    "msg": "Node: hq-rtr2(csr1000v), State: BOOTED, Address: 192.168.255.53:22"
}
ok: [site1-rtr1] => {
    "msg": "Node: site1-rtr1(csr1000v), State: BOOTED, Address: 192.168.255.63:22"
}
ok: [site2-rtr1] => {
    "msg": "Node: site2-rtr1(csr1000v), State: BOOTED, Address: 192.168.255.7:22"
}
In addition to group tags, the CML dynamic inventory plugin also parses tags to pass information from PATty and to create generic inventory facts.


If a CML tag is specified that matches `^pat:(?:tcp|udp)?:?(\d+):(\d+)`, the CML server address (as opposed to the first IPv4 address found) will be used for `ansible_host`. To change `ansible_port` to point to the translated SSH port, the tag `ansible:ansible_port=2020` can be set. These two tags tell the Ansible playbook to connect to port 2020 of the CML server to automate the specified host in the topology. The `ansible:` tag can also be used to specify other host facts. For example, the tag `ansible:nso_api_port=2021` can be used to tell the playbook the port to use to reach the Cisco NSO API. Any arbitrary fact can be set in this way. 
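To illustrate, a node carrying the tags pat:tcp:2020:22 and ansible:ansible_port=2020 would end up with host variables along these lines (the node name here is hypothetical):

rtr-under-test:
  ansible_host: my-cml-server.my-domain.com   # the CML server address, because of the pat: tag
  ansible_port: 2020                          # the translated SSH port, from the ansible: tag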

Getting started 

Trying out the CML Ansible Collection is easy. You can use the playbooks provided in the collection to load and start a topology on your CML server. To start, define the environment variables that tell the collection how to access your CML server:

% export CML_HOST=my-cml-server.my-domain.com
% export CML_USERNAME=my-cml-username
% export CML_PASSWORD=my-cml-password

The next step is to define your topology file. This is a standard topology file you can export from CML. There are two ways to define the topology file. First, you can use an environment variable: 

% export CML_LAB=my-cml-labfile 

Alternatively, you can specify the topology file when you run the playbook as an extra var. For example, to spin up a topology using the built-in cisco.cml.build playbook:

% ansible-playbook cisco.cml.build -e wait='yes' -e cml_lab_file=topology.yaml 

This command loads and starts the topology, then waits until all nodes are running before completing. If -e startup='host' is specified, the playbook starts each host individually rather than starting them all at once, which allows a configuration to be generated and fed into each host on startup. When cml_config_file is defined in a host's inventory, it is parsed as a Jinja template and fed into that host as its configuration at startup, enabling just-in-time configuration.
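As a rough sketch, such a template might look like the following (a minimal IOS-style configuration; any variable available in the host's inventory could be referenced):

! Hypothetical cml_config_file Jinja template
hostname {{ inventory_hostname }}
!
interface GigabitEthernet1
 ip address dhcp
 no shutdown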

Once the playbook completes, you can use another built-in playbook, cisco.cml.inventory, to get the inventory for the topology. To use it, first create a cml.yml in the inventory directory as shown above, then run the playbook as follows:

% ansible-playbook cisco.cml.inventory 

PLAY [cml_hosts] **********************************************************************

TASK [debug] **********************************************************************
ok: [WAN-rtr1] => {
    "msg": "Node: WAN-rtr1(csr1000v), State: BOOTED, Address: 192.168.255.53:22"
}
ok: [nso1] => {
    "msg": "Node: nso1(ubuntu), State: BOOTED, Address: my-cml-server.my-domain.com:2010"
}
ok: [site1-host1] => {
    "msg": "Node: site1-host1(ubuntu), State: BOOTED, Address: site1-host1:22"
}
This truncated output shows three different scenarios. First, WAN-rtr1 is assigned the DHCP address it received as its ansible_host value, and ansible_port is 22. If the host running the playbook has IP connectivity (either within the topology or from a network connected to the topology with an external connector), it will be able to reach that host.

The second scenario shows the PATty functionality with the host nso1: the dynamic inventory plugin reads the tags to determine that the host is reachable through the CML server's interface (i.e., ansible_host is set to my-cml-server.my-domain.com) and that ansible_port should be set to the port specified in the tags (i.e., 2010). With these values set, the playbook can reach the host in the topology through CML's PATty functionality.

The last example, site1-host1, shows the scenario in which the CML dynamic inventory plugin can find neither a DHCP-allocated address nor tags specifying what ansible_host should be, so it falls back to the node name. For the playbook to reach such hosts, it must have IP connectivity and be able to resolve the node name to an IP address.

These built-in playbooks show examples of how to use the functionality in the CML Ansible Collection to build your own playbooks, but you can also use them directly as part of your pipeline. In fact, we often use them directly in the pipelines we build for customers. 

Source: cisco.com

Thursday 30 November 2023

Making Your First Terraform File Doesn’t Have to Be Scary


For the past several years, I’ve tried to give at least one Terraform-centric session at Cisco Live. That’s because they’re fun and make for awesome demos. What’s a technical talk without a demo? But I also see huge crowds every time I talk about Terraform. While I wasn’t an economics major, I do know if demand is this large, we need a larger supply!

That's why I decided to step back and focus on the basics of Terraform and its operation. The configuration applied won't be anything complex, but it should explain some basic structures and requirements for Terraform to do its thing against a single piece of infrastructure: Cisco ACI. Don't worry if you're not an ACI expert; deep ACI knowledge isn't required for what we'll be configuring.

The HCL File: What Terraform will configure


A basic Terraform configuration file is written in HashiCorp Configuration Language (HCL). This domain-specific language (DSL) is similar in structure to JSON, but it adds components for things like control structures, large configuration blocks, and intuitive variable assignments (rather than simple key-value pairs).

At the top of every Terraform HCL file, we must declare the providers we’ll need to gather from the Terraform registry. A provider supplies the linkage between the Terraform binary and the endpoint to be configured by defining what can be configured and what the API endpoints and the data payloads should look like. In our example, we’ll only need to gather the ACI provider, which is defined like this:

terraform {
  required_providers {
    aci = {
      source = "CiscoDevNet/aci"
    }
  }
}

Once you declare the required providers, you have to tell Terraform how to connect to the ACI fabric, which we do through the provider-specific configuration block:

provider "aci" {

username = "admin"

password = "C1sco12345"

url      = "https://10.10.20.14"

insecure = true

}

Notice the name we gave the ACI provider (aci) in the terraform configuration block matches the declaration in the provider configuration block. We're telling Terraform that the provider we named aci should use this configuration to connect to the controller. Also, note the username, password, url, and insecure configuration options are nested within curly braces { }. This indicates to Terraform that the configuration should be grouped together, regardless of whitespace, indentation, or the use of tabs vs. spaces.

Now that we have a connection method to the ACI controller, we can define the configuration we want to apply to our datacenter fabric. We do this using a resource configuration block. Within Terraform, we call something a resource when we want to change its configuration; it’s a data source when we only want to read in the configuration that already exists. The configuration block contains two arguments, the name of the tenant we’ll be creating and a description for that tenant.

resource "aci_tenant" "demo_tenant" {

name        = "TheU_Tenant"

description = "Demo tenant for the U"

}
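For comparison, if we only wanted to read a tenant that already exists rather than manage it, we would use a data source block instead. A minimal sketch, with a hypothetical tenant name:

data "aci_tenant" "existing" {
  name = "ExistingTenant"
}

Attributes of that tenant could then be referenced elsewhere as data.aci_tenant.existing.<attribute> without Terraform ever trying to change it.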

Once we write that configuration to a file, we can save it and begin the process of applying this configuration to our fabric using Terraform.

The Terraform workflow: How Terraform applies configuration


Terraform's workflow for applying configuration is straightforward and stepwise. Once we've written the configuration, we can perform a terraform init, which will gather the providers declared in the HCL file from the Terraform registry, install them into the project folder, and ensure they are signed with the same PGP key that HashiCorp has on file (to ensure end-to-end security). The output will look similar to this:

[I] theu-terraform » terraform init

Initializing the backend...

Initializing provider plugins...
- Finding latest version of ciscodevnet/aci...
- Installing ciscodevnet/aci v2.9.0...
- Installed ciscodevnet/aci v2.9.0 (signed by a HashiCorp partner, key ID 433649E2C56309DE)

Partner and community providers are signed by their developers.
If you'd like to know more about provider signing, you can read about it here:
https://www.terraform.io/docs/cli/plugins/signing.html

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes required for your infrastructure. All Terraform commands should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Once the provider has been gathered, we can invoke terraform plan to see what changes will occur in the infrastructure before applying the config. I'm using the reservable ACI sandbox from Cisco DevNet for the backend infrastructure, but you can use the Always-On sandbox or any other ACI simulator or hardware instance. Just be sure to change the target username, password, and url in the HCL configuration file.
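If you'd rather not hard-code credentials in the file at all, one common pattern is to pass them in as input variables. A quick sketch (the variable names here are my own choices):

variable "aci_username" {
  type = string
}

variable "aci_password" {
  type      = string
  sensitive = true
}

provider "aci" {
  username = var.aci_username
  password = var.aci_password
  url      = "https://10.10.20.14"
  insecure = true
}

Values can then be supplied at run time, for example with -var="aci_username=admin" or via a .tfvars file, keeping secrets out of version control.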

Performing the plan action will output the changes that need to be made to the infrastructure, based on what Terraform currently knows about the infrastructure (which in this case is nothing, as Terraform has not applied any configuration yet). For our configuration, the following output will appear:

[I] theu-terraform » terraform plan

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aci_tenant.demo_tenant will be created
  + resource "aci_tenant" "demo_tenant" {
      + annotation                    = "orchestrator:terraform"
      + description                   = "Demo tenant for the U"
      + id                            = (known after apply)
      + name                          = "TheU_Tenant"
      + name_alias                    = (known after apply)
      + relation_fv_rs_tenant_mon_pol = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

─────────────────────────────────────────────────────────────────────────────

Note: You didn't use the -out option to save this plan, so Terraform can't
guarantee to take exactly these actions if you run "terraform apply" now.

We can see that the items with a plus symbol (+) next to them will be created, and they align with what we had in the configuration originally. Great! Now we can apply this configuration using the terraform apply command. After invoking the command, we'll be prompted to confirm the change, and we'll respond with "yes."

[I] theu-terraform » terraform apply

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # aci_tenant.demo_tenant will be created
  + resource "aci_tenant" "demo_tenant" {
      + annotation                    = "orchestrator:terraform"
      + description                   = "Demo tenant for the U"
      + id                            = (known after apply)
      + name                          = "TheU_Tenant"
      + name_alias                    = (known after apply)
      + relation_fv_rs_tenant_mon_pol = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

aci_tenant.demo_tenant: Creating...
aci_tenant.demo_tenant: Creation complete after 3s [id=uni/tn-TheU_Tenant]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

The configuration has now been applied to the fabric! If you'd like to verify, log in to the fabric and click on the Tenants tab. You should see the newly created tenant.

Finally, if you'd like to delete the tenant the same way you created it, you don't have to write any complex rollback configuration. Simply invoke terraform destroy from the command line. Terraform will verify that the state that exists locally within your project aligns with what exists on the fabric; then it will indicate what will be removed. After a quick confirmation, you'll see that the tenant is removed, and you can verify in the Tenants tab of the fabric.

[I] theu-terraform » terraform destroy
aci_tenant.demo_tenant: Refreshing state... [id=uni/tn-TheU_Tenant]

Terraform used the selected providers to generate the following execution plan.
Resource actions are indicated with the following symbols:
  - destroy

Terraform will perform the following actions:

  # aci_tenant.demo_tenant will be destroyed
  - resource "aci_tenant" "demo_tenant" {
      - annotation  = "orchestrator:terraform" -> null
      - description = "Demo tenant for the U" -> null
      - id          = "uni/tn-TheU_Tenant" -> null
      - name        = "TheU_Tenant" -> null
    }

Plan: 0 to add, 0 to change, 1 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

aci_tenant.demo_tenant: Destroying... [id=uni/tn-TheU_Tenant]
aci_tenant.demo_tenant: Destruction complete after 1s

Destroy complete! Resources: 1 destroyed.

Complete Infrastructure as Code lifecycle management with a single tool is pretty amazing, huh?

A bonus tip


Another tip regarding Terraform and HCL relates to the configuration discussion above. I described the use of curly braces to avoid the need to ensure whitespace is correct or tab width is uniform within the configuration file. This is generally a good thing, as we can focus on what we want to deploy rather than the minutiae of the config. However, it sometimes helps to format the configuration in a way that's aligned and easier to read, even if it doesn't affect the outcome of what is deployed.

In these instances, you can invoke terraform fmt within your project folder, and it will automatically format all Terraform HCL files into aligned, readable text. You can try this yourself by adding a tab or multiple spaces before an argument, or around the = sign, within some of the HCL. Save the file, run the formatter, and then reopen the file to see the changes. Pretty neat, huh?
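For example, take a deliberately sloppy version of our tenant resource:

resource "aci_tenant" "demo_tenant" {
name="TheU_Tenant"
      description =    "Demo tenant for the U"
}

After terraform fmt runs, the file is rewritten in place as:

resource "aci_tenant" "demo_tenant" {
  name        = "TheU_Tenant"
  description = "Demo tenant for the U"
}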

Source: cisco.com

Tuesday 6 June 2023

Understanding Application Aware Routing (AAR) in Cisco SD-WAN

One of the main features used in Cisco SD-WAN is Application Aware Routing (AAR). It is often advertised as an intelligent mechanism that automatically changes the routing path of applications, thanks to its active monitoring of WAN circuits to detect anomalies and brownout conditions.


Customers and engineers alike love to wield the power to steer the application traffic away from unhealthy circuits and broken paths. However, many may overlook the complex processes that work in the background to provide such a flexible instrument.

In this blog, we will discuss the nuts and bolts that make the promises of AAR a reality and the conditions that must be met for it to work effectively.

Setting the stage


To understand what AAR can and cannot do, it’s important to understand how it works and the underlying mechanisms running in unison to deliver its promises.

To begin, let’s first define what AAR entails and its accomplices:

Application Aware Routing (AAR) allows the solution to recognize applications and/or traffic flows and set preferred paths throughout the network to serve them appropriately according to their application requirements. AAR relies on Bidirectional Forwarding Detection (BFD) probes to track data path characteristics and liveliness so that data plane tunnels between Cisco SD-WAN edge devices can be established, monitored, and their statistics logged. It uses the collected information to determine the optimal paths through which data plane traffic is sent inside IPsec tunnels. These characteristics encompass packet loss, latency, and jitter.

The information above describes the relationship between AAR and BFD, but it's crucial to note that they are separate mechanisms. AAR relies on the BFD daemon, polling its results (the BFD probes sent through each data plane tunnel) to determine whether the configured preferred paths are meeting their targets.

It is a logical next step to explain how BFD works in SD-WAN as described in the Cisco SD-WAN Design Guide:

On Cisco WAN Edge routers, BFD is automatically started between peers and cannot be disabled. It runs between all WAN Edge routers in the topology encapsulated in the IPsec tunnels and across all transports. BFD operates in echo mode, which means when BFD packets are sent by a WAN Edge router, the receiving WAN Edge router returns them without processing them. Its purpose is to detect path liveliness and it can also perform quality measurements for application aware routing, like loss, latency, and jitter. BFD is used to detect both black-out and brown-out scenarios.

Searching for ‘the why’


Understanding the mechanism behind AAR is essential to comprehend its creation and purpose. Why are these measurements taken, and what do we hope to achieve from them? As Uncle Ben once said to Spider-Man, “With great power comes great responsibility.”

Abstraction power and transport independence require significant control and management. Every tunnel built requires a reliable underlay, making your overlay only as good as the underlay it uses.

Service Level Agreements (SLAs) are crucial for ensuring your underlay stays healthy and peachy, and your contracted services (circuits) are performing as expected. While SLAs are a legal agreement, they may not always be effective in ensuring providers fulfill their part of the bargain. In the end, it boils down to what you can demonstrate to ensure that providers keep their i’s dotted and their t’s crossed.

In SD-WAN, you can configure SLAs within the AAR policies to match your application’s requirements or your providers’ agreements.
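As a rough sketch of the idea, an SLA class in vEdge-style policy CLI looks something like this (the class name and thresholds here are illustrative, not recommendations):

policy
 sla-class Voice-SLA
  loss    1
  latency 150
  jitter  30
 !
!

Here loss is a percentage, while latency and jitter are in milliseconds. AAR policy sequences can then match applications and reference the SLA class to constrain which tunnels qualify.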

Remember the quality measurements mentioned earlier? Their averaged values are compared against the configured thresholds (SLAs) in the AAR policy. Anything not satisfying those SLAs is flagged, logged, and excluded from AAR path selection.

Measure, measure, measure!


Having covered the what, the who, and the often-overlooked why, it's time to turn our attention to the how!

As noted previously, BFD measures link liveliness and quality, collecting, registering, and logging the resulting data. Once logged, the next step is to normalize and compare the data by averaging the measurements.

Now, how does SD-WAN calculate these average values? By default, quality measurements are collected and represented in buckets, which are then averaged over time. The default values consist of 6 buckets, also called poll intervals, with each bucket being 10 minutes long and each hello sent at 1000 msec intervals.


Putting it all together (by default):

◉ 6 buckets
◉ Each bucket is 10 minutes long
◉ One hello per second, or 1000 msec intervals
◉ 600 hellos are sent per bucket
◉ The average calculation is based on all buckets
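Putting numbers on those defaults, the arithmetic is straightforward:

hellos per bucket = 600 seconds x 1 hello/second = 600
averaging window  = 6 buckets x 10 minutes = 60 minutes (3,600 hellos)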

Finding the sweet spot


It's important to remember that these calculations are meant to be compared against the configured SLAs. Because the result is a moving average, brownouts or brief outages may not be factored into AAR immediately (though they may already be flagged by BFD). With default values, it takes around 3 poll intervals (roughly 30 minutes) to motivate the removal of a given transport locator (TLOC) from the AAR calculation.


Can these values be tweaked for faster AAR decision-making? Yes, but it's a trade-off between stability and responsiveness. Modifying the buckets, multipliers (the numbers of BFD hello packets), and frequency may be too aggressive for some circuits to meet their SLAs.
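For orientation, these knobs live under the bfd configuration in vEdge-style CLI; a sketch using the default values described above (illustrative only, and worth tuning with care):

bfd app-route poll-interval 600000
bfd app-route multiplier 6
bfd color mpls
 hello-interval 1000
!

The poll-interval and hello-interval are in milliseconds, so 600000 is the 10-minute bucket and 1000 is one hello per second; the app-route multiplier is the number of buckets kept in the average.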

Let's recall once more that these calculations are meant to be compared against the configured SLAs.


Phew, who would have thought that magic could be so mathematically pleasing?

Source: cisco.com

Sunday 22 January 2023

Launch Your Cybersecurity Career with Cisco CyberOps Certifications | Part 1

Every day, organizations worldwide contend with increasing malicious activity by criminal organizations and nation-state sponsored threat actors. There is a tremendous demand for security professionals who are trained to defend against these malicious threats. These professionals are the backbone of effective security teams. 

When organizations build security teams to address sophisticated cyber threats, they typically begin by constructing a security operations center (SOC). Modern organizations rely on SOC teams to vigilantly monitor security systems, rapidly detect breaches, and quickly respond to and remediate security incidents. To succeed in these crucial tasks, SOCs are desperately seeking more qualified cybersecurity professionals.

Cisco CyberOps Certification Evolution


In 2016, Cisco introduced the Global Cybersecurity Scholarship program to help close this cybersecurity skills gap. Alongside a $10 million investment in the program to increase the pool of talent with critical cybersecurity proficiency, Cisco also introduced a new CCNA CyberOps certification to prepare candidates for careers as associate-level cybersecurity analysts within SOCs. At the time, candidates had to pass two exams (SECFND + SECOPS) to earn this valuable certification.

In 2020, Cisco redesigned the certification requirements and introduced the one-exam CCNA certification. To earn the CCNA CyberOps certification, for example, candidates now had to pass only the CBROPS exam. At the professional level, candidates still had to pass two exams: for CCNP CyberOps, those exams were, and still are, the CBRCOR core exam and the CBRFIR concentration exam.

In 2022, with the release of the new Cisco U. digital learning experience, the SOC Tier 1 Analyst learning path was introduced. The Cisco U. digital learning experience is built around the learner, and the SOC Tier 1 Analyst learning path is specifically designed to ready learners for the SOC environment. With targeted quick-start pre-skill assessments, modular learning that addresses various aspects of the SOC experience, advanced search to refresh skills and topics, and a focus on goal setting, Cisco U. is designed to work for everyone's unique journey.

Cisco SOC Tier 1 Analyst Learning Path 


The SOC Tier 1 analyst role is the entry-level position within the security operations center. The SOC Tier 1 analyst, or triage specialist, has sysadmin and scripting skills, as well as one or more relevant cybersecurity certifications, such as the Cisco Certified CyberOps Associate, Cisco Certified CyberOps Professional, or CCNA. To help grow the skills necessary to operate effectively as a SOC Tier 1 analyst, Cisco created the Security Operations Center (SOC) Tier 1 Analyst Learning Path. This learning path is a collection of courses designed to help learners master the concepts and tasks needed for the SOC Tier 1 analyst job role. It functions as a roadmap, guiding learners and providing visibility into their mastery of necessary SOC analyst skills and concepts.

The goal of Cisco’s SOC Tier 1 Analyst Learning Path training is to teach the fundamental skills required to begin a career working as an entry-level associate SOC analyst within a threat-centric security operations center.


The training explores common attack vectors, malicious activities, and patterns of suspicious behaviors typically encountered within a threat-centric security operations center. It includes videos, example scenarios, hands-on labs, and knowledge assessments (review questions). As students advance down the learning path, they are exposed to the foundational concepts and practices behind a security operations center and gain the tactical knowledge and skills that SOC teams require to effectively detect and respond to the growing number of cybersecurity threats.

Note: The SOC Tier 1 Analyst Learning Path consists of the CBROPS course with some additional cyber security content, plus some CCNA Implementing and Administering Cisco Solutions 1.0 content. 

SOC Analyst Job Outlook 


According to the U.S. Bureau of Labor Statistics, employment of information security analysts is projected to grow 33 percent from 2020 to 2030, much faster than the average for all occupations.   

Cisco CyberOps certifications are designed to satisfy the actual needs of SOC teams. CCNA and CCNP certifications prepare individuals to pursue a career as an analyst in the SOC, and the different levels of certification are intended to develop the skills necessary for advancement. Below is a recent Cisco job posting for a SOC Cyber Security Analyst opening, with the position overview and responsibilities. Successfully completing the Cisco CCNA/CCNP CyberOps certifications fulfills many of the job requirements.

[Screenshot: Cisco job posting for a SOC Cyber Security Analyst, including the position overview and responsibilities]

Source: cisco.com

Tuesday 2 August 2022

Exploring the Linux ‘ip’ Command


I’ve been talking for several years now about how network engineers need to become comfortable with Linux. I generally position it that we don’t all need to become “big bushy beard-bearing sysadmins.” Rather, network engineers must be able to navigate and work with a Linux-based system confidently. I’m not going to go into all the reasons I believe that in this post (if you’d like a deeper exploration of that topic, please let me know). Nope… I want to dive into a specific skill that every network engineer should have: exploring the network configuration of a Linux system with the “ip” command.

A winding introduction with some psychology and an embarrassing fact (or two)

If you are like me and started your computing world on a Windows machine, maybe you are familiar with “ipconfig” on Windows. The “ipconfig” command provides details about the network configuration from the command line.

A long time ago, before Hank focused on network engineering and earned his CCNA for the first time, he used the “ipconfig” command quite regularly while supporting Windows desktop systems.

What was the IP assigned to the system? Was DHCP working correctly? What DNS servers are configured? What is the default gateway? How many interfaces are configured on the system? So many questions he’d use this command to answer. (He also occasionally started talking in the third person.)

It was a great part of my toolkit. I’m actually smiling in nostalgia as I type this paragraph.

For old times’ sake, I asked John Capobianco, one of my newest co-workers here at Cisco Learning & Certifications, to send me the output from “ipconfig /all” for the blog. John is a diehard Windows user still, while I converted to Mac many years ago. And here is the output of one of my favorite Windows commands (edited for some privacy info).

Windows IP Configuration

   Host Name . . . . . . . . . . . . : WINROCKS
   Primary Dns Suffix  . . . . . . . :
   Node Type . . . . . . . . . . . . : Hybrid
   IP Routing Enabled. . . . . . . . : No
   WINS Proxy Enabled. . . . . . . . : No
   DNS Suffix Search List. . . . . . : example.com

Ethernet adapter Ethernet:

   Connection-specific DNS Suffix  . : home
   Description . . . . . . . . . . . : Intel(R) Ethernet Connection (12) I219-V
   Physical Address. . . . . . . . . : 24-4Q-FE-88-HH-XY
   DHCP Enabled. . . . . . . . . . . : Yes
   Autoconfiguration Enabled . . . . : Yes
   Link-local IPv6 Address . . . . . : fe80::31fa:60u2:bc09:qq45%13(Preferred)
   IPv4 Address. . . . . . . . . . . : 192.168.122.36(Preferred)
   Subnet Mask . . . . . . . . . . . : 255.255.255.0
   Lease Obtained. . . . . . . . . . : July 22, 2022 8:30:42 AM
   Lease Expires . . . . . . . . . . : July 25, 2022 8:30:41 AM
   Default Gateway . . . . . . . . . : 192.168.2.1
   DHCP Server . . . . . . . . . . . : 192.168.2.1
   DHCPv6 IAID . . . . . . . . . . . : 203705342
   DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-27-7B-B2-1D-24-4Q-FE-88-HH-XY
   DNS Servers . . . . . . . . . . . : 192.168.122.1
   NetBIOS over Tcpip. . . . . . . . : Enabled

Wireless LAN adapter Wi-Fi:

   Media State . . . . . . . . . . . : Media disconnected
   Connection-specific DNS Suffix  . : home
   Description . . . . . . . . . . . : Intel(R) Wi-Fi 6 AX200 160MHz
   Physical Address. . . . . . . . . : C8-E2-65-8U-ER-BZ
   DHCP Enabled. . . . . . . . . . . : Yes
   Autoconfiguration Enabled . . . . : Yes

Ethernet adapter Bluetooth Network Connection:

   Media State . . . . . . . . . . . : Media disconnected
   Connection-specific DNS Suffix  . :
   Description . . . . . . . . . . . : Bluetooth Device (Personal Area Network)
   Physical Address. . . . . . . . . : C8-E2-65-A7-ER-Z8
   DHCP Enabled. . . . . . . . . . . : Yes
   Autoconfiguration Enabled . . . . : Yes

It is still such a great and handy command. A few new things in there from when I was using it daily (IPv6, WiFi, Bluetooth), but it still looks like I remember.

The first time I had to touch and work on a Linux machine, I felt like I was on a new planet. Everything was different, and it was ALL command line. I’m not ashamed to admit that I was a little intimidated. But then I found the command “ifconfig,” and I began to breathe a little easier. The output didn’t look the same, but the command itself was close. The information it showed was easy enough to read. So, I gained a bit of confidence and knew, “I can do this.”

When I jumped onto the DevNet Expert CWS VM that I’m using for this blog to grab the output of the “ifconfig” command as an example, I was presented with this output.

(main) expert@expert-cws:~$ ifconfig

Command 'ifconfig' not found, but can be installed with:

apt install net-tools

Please ask your administrator.

This brings me to the point of this blog post. The “ifconfig” command is no longer the best command for viewing the network interface configuration in Linux. In fact, it hasn’t been the “best command” for a long time. Today the “ip” command is what we should be using.  I’ve known this for a while, but giving up something that made you feel comfortable and safe is hard. Just ask my 13-year-old son, who still sleeps with “Brown Dog,” the small stuffed puppy I gave him the day he was born. As for me, I resisted learning and moving to the “ip” command for far longer than I should have.

Eventually, I realized that I needed to get with the times. I started using the “ip” command on Linux. You know what, it is a really nice command. The “ip” command is far more powerful than “ifconfig.”

When I found myself thinking about a topic for a blog post, I figured there might be another engineer or two out there who might appreciate a personal introduction to the “ip” command from Hank.

But before we dive in, I can’t leave a cliffhanger like that on the “ifconfig” command.

root@expert-cws:~# apt-get install net-tools

(main) expert@expert-cws:~$ ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:9a:0c:8a:ee  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 172.16.211.128  netmask 255.255.255.0  broadcast 172.16.211.255
        inet6 fe80::20c:29ff:fe75:9927  prefixlen 64  scopeid 0x20
        ether 00:0c:29:75:99:27  txqueuelen 1000  (Ethernet)
        RX packets 85468  bytes 123667981 (123.6 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 27819  bytes 3082651 (3.0 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 4440  bytes 2104825 (2.1 MB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 4440  bytes 2104825 (2.1 MB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

There it is, the command that made me feel a little better when I started working with Linux.

Exploring the IP configuration of your Linux host with the “ip” command!

So there you are, a network engineer sitting at the console of a Linux workstation, and you need to explore or change the network configuration. Let’s walk through a bit of “networking 101” with the “ip” command.

First up, let’s see what happens when we just run “ip.”

(main) expert@expert-cws:~$ ip
Usage: ip [ OPTIONS ] OBJECT { COMMAND | help }
       ip [ -force ] -batch filename
where  OBJECT := { link | address | addrlabel | route | rule | neigh | ntable |
                   tunnel | tuntap | maddress | mroute | mrule | monitor | xfrm |
                   netns | l2tp | fou | macsec | tcp_metrics | token | netconf | ila |
                   vrf | sr | nexthop }
       OPTIONS := { -V[ersion] | -s[tatistics] | -d[etails] | -r[esolve] |
                    -h[uman-readable] | -iec | -j[son] | -p[retty] |
                    -f[amily] { inet | inet6 | mpls | bridge | link } |
                    -4 | -6 | -I | -D | -M | -B | -0 |
                    -l[oops] { maximum-addr-flush-attempts } | -br[ief] |
                    -o[neline] | -t[imestamp] | -ts[hort] | -b[atch] [filename] |
                    -rc[vbuf] [size] | -n[etns] name | -N[umeric] | -a[ll] |
                    -c[olor]}

There’s some interesting info just in this help/usage message. It looks like “ip” requires an OBJECT on which a COMMAND is executed. And the possible objects include several that jump out at the network engineer inside of me.

◉ link – I'm curious what "link" means in this context, but it catches my eye for sure
◉ address – This is really promising. The IP "addresses" assigned to a host are high on the list of things I know I'll want to understand.
◉ route – I wasn't fully expecting "route" to be listed here if I'm thinking in terms of the "ipconfig" or "ifconfig" commands. But the routes configured on a host are something I'll be interested in.
◉ neigh – Neighbors? What kind of neighbors?
◉ tunnel – Oooo… tunnel interfaces are definitely interesting to see here.
◉ maddress, mroute, mrule – My initial thought when I saw "maddress" was "MAC address," but then I looked at the next two objects and thought maybe it's "multicast address." We'll leave "multicast" for another blog post.

The other objects in the list are interesting to see. Having “netconf” in the list was a happy surprise for me. But for this blog post, we’ll stick with the basic objects of link, address, route, and neigh.

Where in the network are we? Exploring “ip address”

First up in our exploration is the "ip address" object. Rather than just going through the full command help or man page line by line (ensuring no one ever reads another post of mine), I'm going to look at some common things I might want to know about the network configuration of a host. As you explore on your own, I highly recommend "ip address help" as well as "man ip address" for more details. These commands are very powerful and flexible.

What is my IP address?

(main) expert@expert-cws:~$ ip address show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:75:99:27 brd ff:ff:ff:ff:ff:ff
    inet 172.16.211.128/24 brd 172.16.211.255 scope global dynamic ens160
       valid_lft 1344sec preferred_lft 1344sec
    inet6 fe80::20c:29ff:fe75:9927/64 scope link
       valid_lft forever preferred_lft forever
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:9a:0c:8a:ee brd ff:ff:ff:ff:ff:ff
    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
       valid_lft forever preferred_lft forever

Running “ip address show” will display the address configuration for all interfaces on the Linux workstation. My workstation has 3 interfaces configured, a loopback address, the ethernet interface, and docker interface. Some of the Linux hosts I work on have dozens of interfaces, particularly if the host happens to be running lots of Docker containers as each container generates network interfaces. I plan to dive into Docker networking in future blog posts, so we’ll leave the “docker0” interface alone for now.

We can focus our exploration by providing a specific network device name as part of our command.

(main) expert@expert-cws:~$ ip add show dev ens160
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0c:29:75:99:27 brd ff:ff:ff:ff:ff:ff
    inet 172.16.211.128/24 brd 172.16.211.255 scope global dynamic ens160
       valid_lft 1740sec preferred_lft 1740sec
    inet6 fe80::20c:29ff:fe75:9927/64 scope link
       valid_lft forever preferred_lft forever

Okay, that’s really what I was interested in looking at when I wanted to know what my IP address was. But there is a lot more info in that output than just the IP address. For a long time, I just skimmed over the output. I would ignore most output and simply look at the address and for state info like “UP” or “DOWN.” Eventually, I wanted to know what all that output meant, so in case you’re interested in how to decode the output above…

  • Physical interface details
    • “ens160” – The name of the interface from the operating system’s perspective.  This depends a lot on the specific distribution of Linux you are running, whether it is a virtual or physical machine, and the type of interface.  If you’re more used to seeing “eth0” interface names (like I was) it is time to become comfortable with the new interface naming scheme.
    • “<BROADCAST,MULTICAST,UP,LOWER_UP>” – Between the angle brackets are a series of flags that provide details about the interface state.  This shows that my interface is both broadcast and multicast capable and that the interface is enabled (UP) and that the physical layer is connected (LOWER_UP)
    • “mtu 1500” – The maximum transmission unit (MTU) for the interface.  This interface is configured for the default 1500 bytes
    • “qdisc mq” – This indicates the queueing approach being used by the interface.  Things to look for here are values of “noqueue” (send immediately) or “noop” (drop all). There are several other options for queuing a system might be running.
    • “state UP”- Another indication of the operational state of an interface.  “UP” and “DOWN” are pretty clear, but you might also see “UNKNOWN” like in the loopback interface above.  “UNKNOWN” indicates that the interface is up and operational, but nothing is connected.  Which is pretty valid for a loopback address.
    • “group default” – Interfaces can be grouped together on Linux to allow common attributes or commands.  Having all interfaces connected to “group default” is the most common setup, but there are some handy things you can do if you group interfaces together.  For example, imagine a VM host system with 2 interfaces for management and 8 for data traffic.  You could group them into “mgmt” and “data” groups and then control all interfaces of a type together.
    • “qlen 1000” – The interface has a 1000 packet queue.  The 1001st packet would be dropped.
  • “link/ether” – The layer 2 address (MAC address) of the interface
  • “inet” – The IPv4 interface configuration
    • “scope global” – This address is globally reachable. Other options include link and host
    • “dynamic” – This IP address was assigned by DHCP.  The lease length is listed in the next line under “valid_lft”
    • “ens160” – A reference back to the interface this IP address is associated with
  • “inet6” – The IPv6 interface configuration.  Only the link local address is configured on the host.  This shows that while IPv6 is enabled, the network doesn’t look to have it configured more widely

Network engineers link the world together one device at a time. Exploring the “ip link” command.

Now that we’ve gotten our feet wet, let’s circle back to the “link” object. The output of “ip address show” command gave a bit of a hint at what “link” is referring to. “Links” are the network devices configured on a host, and the “ip link” command provides engineers options for exploring and managing these devices.

What networking interfaces are configured on my host?

(main) expert@expert-cws:~$ ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:75:99:27 brd ff:ff:ff:ff:ff:ff
3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
    link/ether 02:42:9a:0c:8a:ee brd ff:ff:ff:ff:ff:ff

After exploring the output of “ip address show,” it shouldn’t come as a surprise that there are 3 network interfaces/devices configured on my host.  And a quick look will show the output from this command is all included in the output for “ip address show.”  For this reason, I almost always just use “ip address show” when looking to explore the network state of a host.

However, the “ip link” object is quite useful when you are looking to configure new interfaces on a host or change the configuration on an existing interface. For example, “ip link set” can change the MTU on an interface.

root@expert-cws:~# ip link set ens160 mtu 9000
root@expert-cws:~# ip link show dev ens160
2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 00:0c:29:75:99:27 brd ff:ff:ff:ff:ff:ff

Note 1: Changing network configuration settings requires administrative or “root” privileges.

Note 2: The changes made using the "set" command on an object are typically NOT maintained across system or service restarts. This is the equivalent of changing the "running-configuration" of a network device. To change the "startup-configuration," you need to edit the network configuration files for the Linux host. Check the network configuration details for your distribution of Linux (e.g., Ubuntu, Red Hat, Debian, Raspbian).

Is anyone else out there? Exploring the “ip neigh” command

Networks are most useful when other devices are connected and reachable through the network. The “ip neigh” command gives engineers a view at the other hosts connected to the same network. Specifically, it offers a look at, and control of, the ARP table for the host.

Do I have an ARP entry for the host that I’m having trouble connecting to?

A common problem network engineers are called on to support is when one host can’t talk to another host.  If I had a nickel for every help desk ticket I’ve worked on like this one, I’d have an awful lot of nickels. Suppose my attempts to ping a host on my same local network with IP address 172.16.211.30 are failing. The first step I might take would be to see if I’ve been able to learn an ARP entry for this host.

(main) expert@expert-cws:~$ ping 172.16.211.30
PING 172.16.211.30 (172.16.211.30) 56(84) bytes of data.
^C
--- 172.16.211.30 ping statistics ---
3 packets transmitted, 0 received, 100% packet loss, time 2039ms

(main) expert@expert-cws:~$ ip neigh show
172.16.211.30 dev ens160  FAILED
172.16.211.254 dev ens160 lladdr 00:50:56:f0:11:04 STALE
172.16.211.2 dev ens160 lladdr 00:50:56:e1:f7:8a STALE
172.16.211.1 dev ens160 lladdr 8a:66:5a:b5:3f:65 REACHABLE

And the answer is no. The attempt to ARP for 172.16.211.30 “FAILED.”  However, I can see that ARP in general is working on my network, as I have other “REACHABLE” addresses in the table.

Another common use of the “ip neigh” command involves clearing out an ARP entry after changing the IP address configuration of another host (or hosts). For example, if you replace the router on a network, a host won’t be able to communicate with it until the old ARP entry ages out and the system tries ARPing again for a new address. Depending on the operating system, this can take minutes — which can feel like years when waiting for a system to start responding again. The “ip neigh flush” command can clear an entry from the table immediately.
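A sketch of the syntax, reusing addresses from the output above (run as root; exact selectors can vary by iproute2 version):

root@expert-cws:~# ip neigh flush to 172.16.211.254
root@expert-cws:~# ip neigh flush dev ens160

The first form clears a single entry; the second clears everything learned on one interface.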

How do I get from here to there? Exploring the “ip route” command

Most of the traffic from a host is destined somewhere on another layer 3 network, and the host needs to know how to “route” that traffic correctly. After looking at the IP address(es) configured on a host, I will often take a look at the routing table to see if it looks like I’d expect. For that, the “ip route” command is the first place I look.

What routes does this host have configured?

(main) expert@expert-cws:~$ ip route show
default via 172.16.211.2 dev ens160 proto dhcp src 172.16.211.128 metric 100
10.233.44.0/23 via 172.16.211.130 dev ens160
172.16.211.0/24 dev ens160 proto kernel scope link src 172.16.211.128
172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown

It may not look exactly like the output of “show ip route” on a router, but this command provides very usable output.

◉ My default gateway is 172.16.211.2 through the “ens160” device.  This route was learned from DHCP and will use the IP address configured on my “ens160” interface.

◉ There is a static route configured to network 10.233.44.0/23 through address 172.16.211.130

◉ And there are 2 routes that were added by the kernel for the local network of the two configured IP addresses on the interfaces.  But the “docker0” route shows “linkdown” — matching the state of the “docker0” interface we saw earlier.

The “ip route” command can also be used to add or delete routes from the table, but with the same notes as when we used “ip link” to change the MTU of an interface. You’ll need admin rights to run the command, and any changes made will not be maintained after a restart. But this can still be very handy when troubleshooting or working in the lab.
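As a sketch, borrowing the static route from the output above, adding and removing a route looks like this (root required):

root@expert-cws:~# ip route add 10.233.44.0/23 via 172.16.211.130
root@expert-cws:~# ip route del 10.233.44.0/23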

And done… or am I?

So that is my "brief" look at the "ip" command for Linux. Oh wait, that bad pun attempt reminded me of one more tip I meant to include. There is a "--brief" option you can add to any of the commands that reformats the data into a nice table, which is often quite handy. Here are a few examples.

(main) expert@expert-cws:~$ ip --brief address show
lo               UNKNOWN        127.0.0.1/8 ::1/128
ens160           UP             172.16.211.128/24 fe80::20c:29ff:fe75:9927/64
docker0          DOWN           172.17.0.1/16

(main) expert@expert-cws:~$ ip --brief link show
lo               UNKNOWN        00:00:00:00:00:00 <LOOPBACK,UP,LOWER_UP>
ens160           UP             00:0c:29:75:99:27 <BROADCAST,MULTICAST,UP,LOWER_UP>
docker0          DOWN           02:42:9a:0c:8a:ee <NO-CARRIER,BROADCAST,MULTICAST,UP>

Not all commands have a “brief” output version, but several do, and they are worth checking out.

There is quite a bit more I could go into on how to use the "ip" command as part of your Linux network administration skillset. (Check out the "--json" flag for another great option.) But at 3,000+ words, I'm going to call this post done for today. If you're interested in a deeper look at Linux networking skills like this, let me know, and I'll come back for some follow-ups.

Source: cisco.com

Sunday 1 May 2022

ChatOps: How to Secure Your Webex Bot

This is the second blog in our series about writing software for ChatOps. In the first post of this ChatOps series, we built a Webex bot that received and logged messages to its running console. In this post, we’ll walk through how to secure your Webex bot with authentication and authorization. Securing a Webex bot in this way will allow us to feel more confident in our deployment as we move on to adding more complex features.

[Access the complete code for this post on GitHub here.]

Very important: This post picks up right where the first blog in this ChatOps series left off. Be sure to read that post to learn how to make your local development environment publicly accessible so that Webex webhook events can reach your API, and make sure your tunnel is up and running with webhook events flowing through to your API before proceeding. From here on out, this post assumes that you’ve taken those steps and have a successful end-to-end data flow. [You can find the code from the first post on how to build a Webex bot here.]

How to secure your Webex bot with an authentication check

Webex signs each webhook notification using HMAC-SHA1, a keyed one-way hash computed with a secret key that you can provide, to add security to your service endpoint. For the purposes of this blog post, I’ll verify that signature in the web service code as an Express middleware function, which will be applied to all routes. This way, it will be checked before any other route handler is called. In your environment, you might add this check to your API gateway (or whatever is powering your environment’s ingress, e.g. Nginx, or an OPA policy).

How to add a secret to the Webhook

Use your preferred tool to generate a random, unique, and complex string. Make sure that it is long and complex enough to be difficult to guess. There are plenty of tools available to create a key. Since I’m on a Mac, I used the following command:

$ cat /dev/urandom | base64 | tr -dc '0-9a-zA-Z' | head -c30

The resulting string was printed into my shell window. Be sure to hold onto it; you’ll use it in a few places over the next few steps.

Now you can use that string to update your Webhook with a PUT request. You can also add it to a new Webhook if you’d like to DELETE your old one.

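Here's a sketch of that PUT request using cURL — the webhook ID, name, target URL, and token are placeholders you'd fill in from your own environment (you can list your webhook IDs with a GET to https://webexapis.com/v1/webhooks):

$ curl --location --request PUT 'https://webexapis.com/v1/webhooks/<YOUR_WEBHOOK_ID>' \
--header 'Authorization: Bearer <YOUR_BOT_ACCESS_TOKEN>' \
--header 'Content-Type: application/json' \
--data-raw '{
    "name": "chatops-bot-webhook",
    "targetUrl": "https://<your-public-tunnel-url>/",
    "secret": "<the-string-you-just-generated>"
}'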

Webex will now send an additional header with each notification request under the header key x-spark-signature. The header value will be a one-way HMAC hash of the POST body, computed with the secret value that you provided. On the server side, we can compute the same one-way hash. If the API client sending the POST request (ideally Webex) used the same secret that we used, then our resulting string should match the x-spark-signature header value.

How to add an application configuration

Now that things are starting to get more complex, let’s build out an application along the lines of what we can expect to see in the real world. First, we create a simple (but extensible) AppConfig class in config/AppConfig.js. We’ll use this to pull in environment variables and then reference those values in other parts of our code. For now, it’ll just include the three variables needed to power authentication:

◉ the secret that we added to our Webhook
◉ the header key where we’ll examine the encrypted value of the POST body
◉ the encryption algorithm used, which in this case is “sha1”

Here’s the code for the AppConfig class, which we’ll add as our code gets more complex:

// in config/AppConfig.js
import process from 'process';

export class AppConfig {
    constructor() {
        this.encryptionSecret = process.env['WEBEX_ENCRYPTION_SECRET'];
        this.encryptionAlgorithm = process.env['WEBEX_ENCRYPTION_ALGO'];
        this.encryptionHeader = process.env['WEBEX_ENCRYPTION_HEADER'];
    }
}

Super important: Be sure to populate these environment variables in your development environment — an example follows below. Skipping this step can lead to a few minutes of frustration before you remember to set these values.
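For example, in a local shell session that might look like the following (the secret is the string you generated earlier; the header key is the lowercase form of the signature header, since Express lowercases incoming header names; the algorithm is the “sha1” that Webex uses):

$ export WEBEX_ENCRYPTION_SECRET='<the-string-you-generated>'
$ export WEBEX_ENCRYPTION_ALGO='sha1'
$ export WEBEX_ENCRYPTION_HEADER='x-spark-signature'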

Now we can create an Auth service class that will expose a method to run our encrypted string comparison:

// in services/Auth.js
import crypto from "crypto";

export class Auth {
    constructor(appConfig) {
        this.encryptionSecret = appConfig.encryptionSecret;
        this.encryptionAlgorithm = appConfig.encryptionAlgorithm;
    }

    isProperlyEncrypted(signedValue, messageBody) {
        // create an encryption stream
        const hmac = crypto.createHmac(this.encryptionAlgorithm, this.encryptionSecret);
        // write the POST body into the encryption stream
        hmac.write(JSON.stringify(messageBody));
        // close the stream to make its resulting string readable
        hmac.end();
        // read the encrypted value
        const hash = hmac.read().toString('hex');
        // compare the freshly computed value to the POST header value,
        // and return the result
        return hash === signedValue;
    }
}
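One optional hardening note, beyond what this post strictly needs: comparing hash strings with === can, in principle, leak timing information. Node’s built-in crypto.timingSafeEqual offers a constant-time alternative. Here’s a sketch of how the same method could be written that way, assuming the same class fields as above:

    isProperlyEncrypted(signedValue, messageBody) {
        const hmac = crypto.createHmac(this.encryptionAlgorithm, this.encryptionSecret);
        // hash the POST body, exactly as before
        hmac.update(JSON.stringify(messageBody));
        // digest() returns a Buffer we can compare in constant time
        const expected = hmac.digest();
        const provided = Buffer.from(signedValue, 'hex');
        // timingSafeEqual throws if the lengths differ, so guard first
        return provided.length === expected.length &&
            crypto.timingSafeEqual(provided, expected);
    }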

Pretty straightforward, right? Now we need to leverage this method in a router middleware that will check all incoming requests for authentication. If the authentication check doesn’t pass, the service will return a 401 and respond immediately. I do this in a new file called routes/auth.js:

// in routes/auth.js
import express from 'express'
import {AppConfig} from '../config/AppConfig.js';
import {Auth} from "../services/Auth.js";

const router = express.Router();
const config = new AppConfig();
const auth = new Auth(config);

router.all('/*', async (req, res, next) => {
    // a convenience reference to the POST body
    const messageBody = req.body;
    // a convenience reference to the encrypted string, with a fallback if the value isn’t set
    const signedValue = req.headers[config.encryptionHeader] || "";
    // call the authentication check
    const isProperlyEncrypted = auth.isProperlyEncrypted(signedValue, messageBody);
    if(!isProperlyEncrypted) {
        res.statusCode = 401;
        res.send("Access denied");
        // return so next() is never reached for unauthenticated requests
        return;
    }

    next();
});

export default router;

All that’s left to do is to add this router into the Express application, just before the handler that we defined earlier. Failing the authentication check will end the request’s flow through the service logic before it ever gets to any other route handlers. If the check does pass, then the request can continue on to the next route handler:

// in app.js

import express from 'express';
import logger from 'morgan';
// ***ADD THE AUTH ROUTER IMPORT***
import authRouter from './routes/auth.js';
import indexRouter from './routes/index.js';

// skipping some of the boilerplate…

// ***ADD THE AUTH ROUTER TO THE APP***
app.use(authRouter);

app.use('/', indexRouter);

// the rest of the file stays the same…

Now if you run your server again, you can test out your authentication check. You can try with just a simple POST from a local cURL or Postman request. Here’s a cURL command that I used to test it against my local service:

$ curl --location --request POST 'localhost:3000' \
--header 'x-spark-signature: incorrect-value' \
--header 'Content-Type: application/json' \
--data-raw '{
    "key": "value"
}'

Running that same request in Postman produces the following output:

[Screenshot: Postman showing the 401 “Access denied” response]
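If you’d like to exercise the success path without waiting on Webex, you can compute a valid signature yourself. Here’s a sketch using a small Node one-liner — note that the middleware hashes JSON.stringify(req.body), so sign the compact JSON form of the same body you send:

$ SIG=$(node -e "const crypto = require('crypto'); \
const body = JSON.stringify({ key: 'value' }); \
console.log(crypto.createHmac('sha1', process.env.WEBEX_ENCRYPTION_SECRET).update(body).digest('hex'));")

$ curl --location --request POST 'localhost:3000' \
--header "x-spark-signature: $SIG" \
--header 'Content-Type: application/json' \
--data-raw '{"key":"value"}'

With a matching signature, the request passes the authentication middleware and reaches the route handler from the first post.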

Now, if you send a message to your bot through Webex, you should see the Webhook event flow through your authentication check and into the route handler that we created in the first post.

How to add optional authorization

At this point, we can rest assured that any request that comes through came from Webex. But that doesn’t mean we’re done with security! We might want to restrict which users in Webex can call our bot by mentioning it in a Webex Room. If that’s the case, we need to add an authorization check as well.

How to check against a list of authorized users

Webex sends user information with each event notification, indicating the Webex user ID and the corresponding email address of the person who triggered the event (an example is displayed in the first post in this series). In the case of a message creation event, this is the person who wrote the message about which our web service is notified. There are dozens of ways to check for authorization – AD groups, AWS Cognito integrations, etc.

For simplicity’s sake, in this demo service, I’m just using a hard-coded list of approved email addresses that I’ve added to the Auth service constructor, and a simple public method to check the email address that Webex provided in the POST body against that hard-coded list. Other, more complicated modes of authz checks are beyond the scope of this post.

// in services/Auth.js
export class Auth {
    constructor(appConfig) {
        this.encryptionSecret = appConfig.encryptionSecret;
        this.encryptionAlgorithm = appConfig.encryptionAlgorithm;
        // ADDING AUTHORIZED USERS
        this.authorizedUsers = [
           "colacy@cisco.com" // hey, that’s me!
        ];
    }

    // ADDING AUTHZ CHECK METHOD
    isUserAuthorized(messageBody) {
        return this.authorizedUsers.indexOf(messageBody.data.personEmail) !== -1;
    }
// the rest of the class is unchanged

Just like with the authentication check, we need to add this to our routes/auth.js handler. We’ll add this between the authentication check and the next() call that completes the route handler.

// in routes/auth.js
// …

    const isProperlyEncrypted = auth.isProperlyEncrypted(signedValue, messageBody);
    if(!isProperlyEncrypted) {
        res.statusCode = 401;
        res.send("Access denied");
        return;
    }

    // ADD THE AUTHORIZATION CHECK
    const isAuthorized = auth.isUserAuthorized(messageBody);
    if(!isAuthorized) {
        res.statusCode = 403;
        res.send("Unauthorized");
        return;
    }

    next();
// …

If the sender’s email address isn’t in that list, the bot will send a 403 back to the API client with a message that the user was unauthorized. But that doesn’t really let the user know what went wrong, does it?

User Feedback

If the user is unauthorized, we should let them know so that they aren’t under the incorrect assumption that their request was successful — or worse, wondering why nothing happened. In this situation, the only way to provide the user with that feedback is to respond in the Webex Room where they posted their message to the bot.

Creating messages on Webex is done with POST requests to the Webex API. [The documentation and the data schema can be found here.] Remember, the bot authenticates with the access token that was provided back when we created it in the first post. We’ll need to pass that in as a new environment variable into our AppConfig class:

// in config/AppConfig.js
export class AppConfig {
    constructor() {
        // ADD THE BOT'S TOKEN
        this.botToken = process.env['WEBEX_BOT_TOKEN'];
        this.encryptionSecret = process.env['WEBEX_ENCRYPTION_SECRET'];
        this.encryptionAlgorithm = process.env['WEBEX_ENCRYPTION_ALGO'];
        this.encryptionHeader = process.env['WEBEX_ENCRYPTION_HEADER'];
    }
}

Now we can start a new service class, WebexNotifications, in a new file called services/WebexNotifications.js, which will notify our users of what’s happening in the backend.

// in services/WebexNotifications.js

export class WebexNotifications {
    constructor(appConfig) {
        this.botToken = appConfig.botToken;
    }

    // new methods to go here

}

This class is pretty sparse. For the purposes of this demo, we’ll keep it that way. We just need to give our users feedback based on whether or not their request was successful. That can be done with a single method, called from our two routers: one call to indicate authorization failures and the other to indicate successful end-to-end messaging.

A note on the code below: To stay future-proof, I’m using Node.js version 17.7, which has fetch enabled via the execution flag --experimental-fetch. If you have an older version of Node.js, you can use a third-party HTTP request library, like axios, in place of any lines where you see fetch used.

We’ll start by implementing the sendNotification method, which will take the same messageBody object that we’re using for our auth checks:

// in services/WebexNotifications.js…
// inside the WebexNotifications.js class

   async sendNotification(messageBody, success=false) {
       // we'll start a response by tagging the person who created the message
       let responseToUser = `<@personEmail:${messageBody.data.personEmail}>`;
       // determine if the notification is being sent due to a successful or failed authz check
       if (success === false) {
           responseToUser += ` Uh oh! You're not authorized to make requests.`;
       } else {
           responseToUser += ` Thanks for your message!`;
       }
       // send a message creation request on behalf of the bot
       const res = await fetch("https://webexapis.com/v1/messages", {
           headers: {
               "Content-Type": "application/json",
               "Authorization": `Bearer ${this.botToken}`
           },
           method: "POST",
           body: JSON.stringify({
               roomId: messageBody.data.roomId,
               markdown: responseToUser
           })
       });
       return res.json();
   }
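If you want to sanity-check the underlying API call outside of the service class, an equivalent request can be made directly with cURL (the room ID below is a placeholder — in the app it comes from the webhook payload):

$ curl --location --request POST 'https://webexapis.com/v1/messages' \
--header "Authorization: Bearer $WEBEX_BOT_TOKEN" \
--header 'Content-Type: application/json' \
--data-raw '{
    "roomId": "<A_WEBEX_ROOM_ID>",
    "markdown": "Hello from the bot!"
}'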

Now it’s just a matter of calling this method from within our route handlers. In routes/auth.js we’ll call it in the event of an authorization failure:

// in routes/auth.js

import express from 'express'
import {AppConfig} from '../config/AppConfig.js';
import {Auth} from "../services/Auth.js";
// ADD THE WEBEXNOTIFICATIONS IMPORT
import {WebexNotifications} from '../services/WebexNotifications.js';

// …

const auth = new Auth(config);
// ADD CLASS INSTANTIATION
const webex = new WebexNotifications(config);

// ...

    if(!isAuthorized) {
        res.statusCode = 403;
        res.send("Unauthorized");
        // ADD THE FAILURE NOTIFICATION
        await webex.sendNotification(messageBody, false);
        return;
    }
// ...

Similarly, we’ll add the success version of this method call to routes/index.js. Here’s the final version of routes/index.js once we’ve added a few more lines like we did in the auth route:

// in routes/index.js

import express from 'express'
// Add the AppConfig import
import {AppConfig} from '../config/AppConfig.js';
// Add the WebexNotifications import
import {WebexNotifications} from '../services/WebexNotifications.js';

const router = express.Router();
// instantiate the AppConfig
const config = new AppConfig();
// instantiate the WebexNotification class, passing in our app config
const webex = new WebexNotifications(config);

router.post('/', async function(req, res) {
  console.log(`Received a POST`, req.body);
  res.statusCode = 201;
  res.end();
  await webex.sendNotification(req.body, true);
});

export default router;

To test this out, I’ll simply comment out my own email address from the approved list and then send a message to my bot. As you can see from the screenshot below, Webex will display the notification from the code above, indicating that I’m not allowed to make the request.

[Screenshot: the bot replies in the Webex Room with “Uh oh! You're not authorized to make requests.”]

If I un-comment my email address, the request goes through successfully.

[Screenshot: the bot replies with “Thanks for your message!”]

Source: cisco.com