Saturday, 6 August 2022

Latest Innovations in Cisco DNA Software for Switching

Cisco continues to deliver on its promise of innovation in our Cisco DNA Software for Switching subscription. By deploying the latest innovations in Cisco DNA Software for Switching along with Cisco DNA Center, you can unlock the full power of your Catalyst switches in a user-friendly way. Cisco DNA Center is a more powerful management platform for your Catalyst devices than any third-party network management system.

What’s new?

ThousandEyes integration (Application assurance): Thanks to an out-of-the-box integration with ThousandEyes (TE), Cisco DNA Center now provides improved visibility into how your applications are performing. TE agents are included in Cisco DNA Software subscriptions at the Advantage level on specific models; they just need to be deployed to your switches. The dashboard shows the applications TE agents are monitoring along with a performance summary (loss, latency, jitter), with the ability to drill down further. TE provides insight not only into your internal network but also into your service providers.

Figure 1: ThousandEyes integration in Cisco DNA Center

Client Health: This feature lets you quickly and efficiently understand how well the network is supporting end users. It helps minimize the impact of issues on end users and speeds up resolution for IT staff. You can drill down, search for specific users, and get a 360-degree view of the health of their devices to pinpoint any downtime.

Figure 2: Client 360 in Cisco DNA Center

PoE analytics: As people return to the office, it is important to understand power usage in remote offices. PoE analytics allows IT to troubleshoot issues by looking at key PoE attributes. For example, if a device is drawing more power than usual, that is often an early sign of failure. Action can be taken to disable specific ports or even power-cycle them.

Figure 3: PoE Analytics

Group Policy with ISE: The integration of Cisco DNA Center and ISE to control policy on a Cisco network provides a level of security that is unmatched in the industry. You can visualize what’s going on in your network and what devices and servers are communicating with each other. This allows you to make corrections as needed and ultimately prevent any security breaches.

Figure 4: Cisco DNA Center integration with ISE

Cisco DNA Spaces for Smart Buildings: Cisco DNA Spaces, a cloud-based data platform for IoT devices, gives smart building managers an all-encompassing view of operations and power consumption: smart lighting and shades, conference room availability, cleaning frequency, and asset location, to name a few. Cisco DNA Spaces entitlement for Smart Buildings (See and Extend) is included in Cisco DNA Advantage licenses for Cisco Catalyst 9300 and 9400 Series Switches.

Figure 5: Cisco DNA Spaces

How can I get these features and more?


If you already have a Cisco DNA Advantage subscription in Switching along with Cisco DNA Center, you will get to utilize these features at no additional cost to you.

If you do not have a Cisco DNA Advantage subscription, or if you have a Cisco DNA Essentials subscription, the time to upgrade is now. We will continue to innovate and add more features to our Advantage tier.

Cisco is expanding the deployment options of Cisco DNA Center to provide greater operational flexibility and choice.


Cisco DNA Center is currently installed on a dedicated appliance. However, at Cisco Live we recently announced a new option for Cisco DNA Center customers: the Cisco DNA Center Virtual Appliance. The virtual appliance, targeted for general availability next year, will give customers new deployment options for the network controller: in a public cloud on AWS, or on VMware ESXi in a company data center or private cloud.

Source: cisco.com

Thursday, 4 August 2022

Stop DDoS at the 5G Network Edge

The increase in bandwidth demand and access to engaging online content has led to a rapid expansion of 5G technology deployments. This combination of increased demand from a multitude of user equipment devices (laptops, mobile phones, tablets) and rapid technology deployment has created a diverse threat surface potentially affecting the availability and sustainability of desired low latency outcomes (virtual reality, IoT, online gaming, etc.). One of the newer threats is an attack from rogue or bot-controlled IoT and user equipment devices designed to flood the network with diverse flows at the access layer, potentially exposing the entire network to a much larger DDoS attack.

With the new Cisco Secure DDoS Edge Protection solution, communication service providers (CSPs) now have an efficient DDoS detection and mitigation solution that can thwart attacks right at the access layer. The solution focuses on 5G deployments, providing an efficient attack detection and mitigation solution for GPRS Tunneling Protocol (GTP) traffic. This will help prevent malicious traffic from penetrating deeper into a CSP network. To achieve the quality of experience (QoE) targets that customers demand in 5G networks, architectures should include the following features:

◉ Remove access level anomalies at the cell site router (CSR) to preserve QoE for users accessing 5G applications

◉ Remediate user equipment anomalies on the ingress port of the CSR to remove overages in backhaul resources like microwave backhaul

◉ Automate both east-west and north-south attack life cycles to remove collateral damage on the network and to preserve application service level agreements for customers

Figure 1. DDoS attack protection at the 5G network edge

The Cisco Secure DDoS Edge Protection solution offers the ability to detect and mitigate the threats as close to the source as possible – the edge. It features a Docker container (the detector) integrated into IOS XR and a centralized controller. The system is also air-gapped and requires no connectivity outside of the CSP network to operate. The controller performs lifecycle management of the detector, orchestration of detectors across multiple CSRs, and aggregation of telemetry and policy across the network. Having the container integrated into IOS XR allows services to be pushed to the edge to meet availability and QoE requirements for 5G services, while the controller provides a central nervous system for delivering secure outcomes for 5G. Important threats addressed by the Cisco Secure DDoS Edge Protection solution include IoT botnets, DNS attacks, burst attacks, layer 7 application attacks, attacks inside of GTP tunnels, and reflection and amplification attacks.

Figure 2. Edge protection solution on the Cisco Network Convergence System (NCS) 540

Moving the DDoS attack detection and mitigation agent to the CSR helps speed up the attack response and can lower overall latency. Additionally, efficiency enhancements have been made to the solution in the following ways:

◉ GTP flows are first extracted at the ASIC layer using user-defined filters (UDFs) in IOS XR before they are sampled for NetFlow. This allows more attack bandwidth protection with the same sampling rate.
◉ Tunnel endpoint Identifiers (TEIDs) of GTP flows are extracted and included in the NetFlow data.
◉ Extracted NetFlow data is exported to the detector on the router and formatted using Google Protocol Buffers.

Given that the NetFlow data doesn’t need to be exported to a centralized entity and is consumed locally on the router, faster attack detection and mitigation is possible.

Source: cisco.com

Tuesday, 2 August 2022

Exploring the Linux ‘ip’ Command


I’ve been talking for several years now about how network engineers need to become comfortable with Linux. I generally position it that we don’t all need to become “big bushy beard-bearing sysadmins.” Rather, network engineers must be able to navigate and work with a Linux-based system confidently. I’m not going to go into all the reasons I believe that in this post (if you’d like a deeper exploration of that topic, please let me know). Nope… I want to dive into a specific skill that every network engineer should have: exploring the network configuration of a Linux system with the “ip” command.

A winding introduction with some psychology and an embarrassing fact (or two)

If you are like me and started your computing world on a Windows machine, maybe you are familiar with “ipconfig” on Windows. The “ipconfig” command provides details about the network configuration from the command line.

A long time ago, before Hank focused on network engineering and earned his CCNA for the first time, he used the “ipconfig” command quite regularly while supporting Windows desktop systems.

What was the IP assigned to the system? Was DHCP working correctly? What DNS servers are configured? What is the default gateway? How many interfaces are configured on the system? So many questions he’d use this command to answer. (He also occasionally started talking in the third person.)

It was a great part of my toolkit. I’m actually smiling in nostalgia as I type this paragraph.

For old times’ sake, I asked John Capobianco, one of my newest co-workers here at Cisco Learning & Certifications, to send me the output from “ipconfig /all” for the blog. John is a diehard Windows user still, while I converted to Mac many years ago. And here is the output of one of my favorite Windows commands (edited for some privacy info).

Windows IP Configuration

   Host Name . . . . . . . . . . . . : WINROCKS

   Primary Dns Suffix  . . . . . . . :

   Node Type . . . . . . . . . . . . : Hybrid

   IP Routing Enabled. . . . . . . . : No

   WINS Proxy Enabled. . . . . . . . : No

   DNS Suffix Search List. . . . . . : example.com

Ethernet adapter Ethernet:

   Connection-specific DNS Suffix  . : home

   Description . . . . . . . . . . . : Intel(R) Ethernet Connection (12) I219-V

   Physical Address. . . . . . . . . : 24-4Q-FE-88-HH-XY

   DHCP Enabled. . . . . . . . . . . : Yes

   Autoconfiguration Enabled . . . . : Yes

   Link-local IPv6 Address . . . . . : fe80::31fa:60u2:bc09:qq45%13(Preferred)

   IPv4 Address. . . . . . . . . . . : 192.168.122.36(Preferred)

   Subnet Mask . . . . . . . . . . . : 255.255.255.0

   Lease Obtained. . . . . . . . . . : July 22, 2022 8:30:42 AM

   Lease Expires . . . . . . . . . . : July 25, 2022 8:30:41 AM

   Default Gateway . . . . . . . . . : 192.168.2.1

   DHCP Server . . . . . . . . . . . : 192.168.2.1

   DHCPv6 IAID . . . . . . . . . . . : 203705342

   DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-27-7B-B2-1D-24-4Q-FE-88-HH-XY

   DNS Servers . . . . . . . . . . . : 192.168.122.1

   NetBIOS over Tcpip. . . . . . . . : Enabled

Wireless LAN adapter Wi-Fi:

   Media State . . . . . . . . . . . : Media disconnected

   Connection-specific DNS Suffix  . : home

   Description . . . . . . . . . . . : Intel(R) Wi-Fi 6 AX200 160MHz

   Physical Address. . . . . . . . . : C8-E2-65-8U-ER-BZ

   DHCP Enabled. . . . . . . . . . . : Yes

   Autoconfiguration Enabled . . . . : Yes

Ethernet adapter Bluetooth Network Connection:

   Media State . . . . . . . . . . . : Media disconnected

   Connection-specific DNS Suffix  . :

   Description . . . . . . . . . . . : Bluetooth Device (Personal Area Network)

   Physical Address. . . . . . . . . : C8-E2-65-A7-ER-Z8

   DHCP Enabled. . . . . . . . . . . : Yes

   Autoconfiguration Enabled . . . . : Yes

It is still such a great and handy command. A few new things in there from when I was using it daily (IPv6, WiFi, Bluetooth), but it still looks like I remember.

The first time I had to touch and work on a Linux machine, I felt like I was on a new planet. Everything was different, and it was ALL command line. I’m not ashamed to admit that I was a little intimidated. But then I found the command “ifconfig,” and I began to breathe a little easier. The output didn’t look the same, but the command itself was close. The information it showed was easy enough to read. So, I gained a bit of confidence and knew, “I can do this.”

When I jumped onto the DevNet Expert CWS VM that I’m using for this blog to grab the output of the “ifconfig” command as an example, I was presented with this output.

(main) expert@expert-cws:~$ ifconfig

Command 'ifconfig' not found, but can be installed with:

apt install net-tools

Please ask your administrator.

This brings me to the point of this blog post. The “ifconfig” command is no longer the best command for viewing the network interface configuration in Linux. In fact, it hasn’t been the “best command” for a long time. Today the “ip” command is what we should be using.  I’ve known this for a while, but giving up something that made you feel comfortable and safe is hard. Just ask my 13-year-old son, who still sleeps with “Brown Dog,” the small stuffed puppy I gave him the day he was born. As for me, I resisted learning and moving to the “ip” command for far longer than I should have.

Eventually, I realized that I needed to get with the times. I started using the “ip” command on Linux. You know what, it is a really nice command. The “ip” command is far more powerful than “ifconfig.”
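If you are making the same jump from "ifconfig" muscle memory, a quick cheat sheet of rough equivalents may help. This is a sketch, not an exhaustive mapping; all of these "ip" commands are read-only and safe to run, and the net-tools commands in the comments are the ones they roughly replace.

```shell
# Rough net-tools-to-iproute2 equivalents (all read-only)
ip address show      # roughly replaces: ifconfig -a
ip link show         # interface/link state, part of: ifconfig
ip route show        # roughly replaces: route -n / netstat -rn
ip neigh show        # roughly replaces: arp -a
ip maddress show     # multicast group memberships, roughly: netstat -g
```

Once these five stick, most day-to-day "ifconfig" habits translate directly.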

When I found myself thinking about a topic for a blog post, I figured there might be another engineer or two out there who might appreciate a personal introduction to the “ip” command from Hank.

But before we dive in, I can’t leave a cliffhanger like that on the “ifconfig” command.

root@expert-cws:~# apt-get install net-tools

(main) expert@expert-cws:~$ ifconfig

docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500

        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255

        ether 02:42:9a:0c:8a:ee  txqueuelen 0  (Ethernet)

        RX packets 0  bytes 0 (0.0 B)

        RX errors 0  dropped 0  overruns 0  frame 0

        TX packets 0  bytes 0 (0.0 B)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

ens160: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

        inet 172.16.211.128  netmask 255.255.255.0  broadcast 172.16.211.255

        inet6 fe80::20c:29ff:fe75:9927  prefixlen 64  scopeid 0x20

        ether 00:0c:29:75:99:27  txqueuelen 1000  (Ethernet)

        RX packets 85468  bytes 123667981 (123.6 MB)

        RX errors 0  dropped 0  overruns 0  frame 0

        TX packets 27819  bytes 3082651 (3.0 MB)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536

        inet 127.0.0.1  netmask 255.0.0.0

        inet6 ::1  prefixlen 128  scopeid 0x10

        loop  txqueuelen 1000  (Local Loopback)

        RX packets 4440  bytes 2104825 (2.1 MB)

        RX errors 0  dropped 0  overruns 0  frame 0

        TX packets 4440  bytes 2104825 (2.1 MB)

        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

There it is, the command that made me feel a little better when I started working with Linux.

Exploring the IP configuration of your Linux host with the “ip” command!

So there you are, a network engineer sitting at the console of a Linux workstation, and you need to explore or change the network configuration. Let’s walk through a bit of “networking 101” with the “ip” command.

First up, let’s see what happens when we just run “ip.”

(main) expert@expert-cws:~$ ip

Usage: ip [ OPTIONS ] OBJECT { COMMAND | help }

       ip [ -force ] -batch filename

where  OBJECT := { link | address | addrlabel | route | rule | neigh | ntable |

                   tunnel | tuntap | maddress | mroute | mrule | monitor | xfrm |

                   netns | l2tp | fou | macsec | tcp_metrics | token | netconf | ila |

                   vrf | sr | nexthop }

       OPTIONS := { -V[ersion] | -s[tatistics] | -d[etails] | -r[esolve] |

                    -h[uman-readable] | -iec | -j[son] | -p[retty] |

                    -f[amily] { inet | inet6 | mpls | bridge | link } |

                    -4 | -6 | -I | -D | -M | -B | -0 |

                    -l[oops] { maximum-addr-flush-attempts } | -br[ief] |

                    -o[neline] | -t[imestamp] | -ts[hort] | -b[atch] [filename] |

                    -rc[vbuf] [size] | -n[etns] name | -N[umeric] | -a[ll] |

                    -c[olor]}

There’s some interesting info just in this help/usage message. It looks like “ip” requires an OBJECT on which a COMMAND is executed. And the possible objects include several that jump out at the network engineer inside of me.

◉ link – I’m curious what “link” means in this context, but it catches my eye for sure

◉ address – This is really promising. The IP addresses assigned to a host are high on the list of things I know I’ll want to understand.

◉ route – I wasn’t fully expecting “route” to be listed here if I’m thinking in terms of the “ipconfig” or “ifconfig” command. But the routes configured on a host are something I’ll be interested in.

◉ neigh – Neighbors? What kind of neighbors?

◉ tunnel – Oooo… tunnel interfaces are definitely interesting to see here.

◉ maddress, mroute, mrule – My initial thought when I saw “maddress” was “MAC address,” but then I looked at the next two objects and thought maybe it’s “multicast address.” We’ll leave “multicast” for another blog post.

The other objects in the list are interesting to see. Having “netconf” in the list was a happy surprise for me. But for this blog post, we’ll stick with the basic objects of link, address, route, and neigh.

Where in the network are we? Exploring “ip address”

First up in our exploration will be the “ip address” object. Rather than just going through the full command help or man page line by line (ensuring no one ever reads another post of mine), I’m going to look at some common things I might want to know about the network configuration on a host. As you are exploring on your own, I would highly recommend exploring “ip address help” as well as “man ip address” for more details.  These commands are very powerful and flexible.

What is my IP address?

(main) expert@expert-cws:~$ ip address show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

       valid_lft forever preferred_lft forever

    inet6 ::1/128 scope host 

       valid_lft forever preferred_lft forever

2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

    link/ether 00:0c:29:75:99:27 brd ff:ff:ff:ff:ff:ff

    inet 172.16.211.128/24 brd 172.16.211.255 scope global dynamic ens160

       valid_lft 1344sec preferred_lft 1344sec

    inet6 fe80::20c:29ff:fe75:9927/64 scope link 

       valid_lft forever preferred_lft forever

3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default 

    link/ether 02:42:9a:0c:8a:ee brd ff:ff:ff:ff:ff:ff

    inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0

       valid_lft forever preferred_lft forever

Running “ip address show” will display the address configuration for all interfaces on the Linux workstation. My workstation has three interfaces configured: a loopback interface, the Ethernet interface, and a Docker interface. Some of the Linux hosts I work on have dozens of interfaces, particularly if the host happens to be running lots of Docker containers, as each container generates network interfaces. I plan to dive into Docker networking in future blog posts, so we’ll leave the “docker0” interface alone for now.

We can focus our exploration by providing a specific network device name as part of our command.

(main) expert@expert-cws:~$ ip add show dev ens160

2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000

    link/ether 00:0c:29:75:99:27 brd ff:ff:ff:ff:ff:ff

    inet 172.16.211.128/24 brd 172.16.211.255 scope global dynamic ens160

       valid_lft 1740sec preferred_lft 1740sec

    inet6 fe80::20c:29ff:fe75:9927/64 scope link 

       valid_lft forever preferred_lft forever

Okay, that’s really what I was interested in looking at when I wanted to know what my IP address was. But there is a lot more info in that output than just the IP address. For a long time, I just skimmed over the output. I would ignore most output and simply look at the address and for state info like “UP” or “DOWN.” Eventually, I wanted to know what all that output meant, so in case you’re interested in how to decode the output above…

  • Physical interface details
    • “ens160” – The name of the interface from the operating system’s perspective.  This depends a lot on the specific distribution of Linux you are running, whether it is a virtual or physical machine, and the type of interface.  If you’re more used to seeing “eth0” interface names (like I was), it is time to become comfortable with the new interface naming scheme.
    • “<BROADCAST,MULTICAST,UP,LOWER_UP>” – Between the angle brackets are a series of flags that provide details about the interface state.  This shows that my interface is both broadcast and multicast capable and that the interface is enabled (UP) and that the physical layer is connected (LOWER_UP)
    • “mtu 1500” – The maximum transmission unit (MTU) for the interface.  This interface is configured for the default 1500 bytes
    • “qdisc mq” – This indicates the queueing approach being used by the interface.  Things to look for here are values of “noqueue” (send immediately) or “noop” (drop all). There are several other options for queuing a system might be running.
    • “state UP” – Another indication of the operational state of an interface.  “UP” and “DOWN” are pretty clear, but you might also see “UNKNOWN” like in the loopback interface above.  “UNKNOWN” indicates that the interface is up and operational but nothing is connected, which is perfectly valid for a loopback interface.
    • “group default” – Interfaces can be grouped together on Linux to allow common attributes or commands.  Having all interfaces connected to “group default” is the most common setup, but there are some handy things you can do if you group interfaces together.  For example, imagine a VM host system with 2 interfaces for management and 8 for data traffic.  You could group them into “mgmt” and “data” groups and then control all interfaces of a type together.
    • “qlen 1000” – The transmit queue can hold 1,000 packets; once the queue is full, additional packets are dropped.
  • “link/ether” – The layer 2 address (MAC address) of the interface
  • “inet” – The IPv4 interface configuration
    • “scope global” – This address is globally reachable. Other options include link and host
    • “dynamic” – This IP address was assigned by DHCP.  The lease length is listed in the next line under “valid_lft”
    • “ens160” – A reference back to the interface this IP address is associated with
  • “inet6” – The IPv6 interface configuration.  Only the link local address is configured on the host.  This shows that while IPv6 is enabled, the network doesn’t look to have it configured more widely
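The “ip address” object can change configuration as well as display it. Here is a quick sketch: the secondary address below is made up for illustration, the commands require root privileges, and, as with all “ip” changes, they will not survive a reboot.

```shell
# Add a second IPv4 address to the interface (example address, requires root)
sudo ip address add 172.16.211.200/24 dev ens160

# Confirm it appears alongside the DHCP-assigned address
ip address show dev ens160

# Remove it again when done
sudo ip address del 172.16.211.200/24 dev ens160
```

This add/del pattern is handy in the lab when you need a host to temporarily sit on a second subnet.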

Network engineers link the world together one device at a time. Exploring the “ip link” command.

Now that we’ve gotten our feet wet, let’s circle back to the “link” object. The output of “ip address show” command gave a bit of a hint at what “link” is referring to. “Links” are the network devices configured on a host, and the “ip link” command provides engineers options for exploring and managing these devices.

What networking interfaces are configured on my host?

(main) expert@expert-cws:~$ ip link show

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000

    link/ether 00:0c:29:75:99:27 brd ff:ff:ff:ff:ff:ff

3: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default 

    link/ether 02:42:9a:0c:8a:ee brd ff:ff:ff:ff:ff:ff

After exploring the output of “ip address show,” it shouldn’t come as a surprise that there are 3 network interfaces/devices configured on my host.  And a quick look will show the output from this command is all included in the output for “ip address show.”  For this reason, I almost always just use “ip address show” when looking to explore the network state of a host.

However, the “ip link” object is quite useful when you are looking to configure new interfaces on a host or change the configuration on an existing interface. For example, “ip link set” can change the MTU on an interface.

root@expert-cws:~# ip link set ens160 mtu 9000

root@expert-cws:~# ip link show dev ens160

2: ens160: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode DEFAULT group default qlen 1000

    link/ether 00:0c:29:75:99:27 brd ff:ff:ff:ff:ff:ff

Note 1: Changing network configuration settings requires administrative or “root” privileges.

Note 2: The changes made using the “set” command on an object are typically NOT maintained across system or service restarts. This is the equivalent of changing the “running-configuration” of a network device. In order to change the “startup-configuration,” you need to edit the network configuration files for the Linux host.  Check the network configuration details for your distribution of Linux (e.g., Ubuntu, Red Hat, Debian, Raspbian).
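For example, on an Ubuntu system using netplan (an assumption; your distribution may use NetworkManager, systemd-networkd, or /etc/network/interfaces instead), persisting an MTU change like the one above might look something like this in a hypothetical /etc/netplan/01-netcfg.yaml, applied with “sudo netplan apply”:

```yaml
# Hypothetical netplan sketch (Ubuntu); the file name and exact keys may
# differ on your system - check your distribution's documentation.
network:
  version: 2
  ethernets:
    ens160:
      dhcp4: true
      mtu: 9000
```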

Is anyone else out there? Exploring the “ip neigh” command

Networks are most useful when other devices are connected and reachable through the network. The “ip neigh” command gives engineers a view at the other hosts connected to the same network. Specifically, it offers a look at, and control of, the ARP table for the host.

Do I have an ARP entry for the host that I’m having trouble connecting to?

A common problem network engineers are called on to support is when one host can’t talk to another host.  If I had a nickel for every help desk ticket I’ve worked on like this one, I’d have an awful lot of nickels. Suppose my attempts to ping a host on my same local network with IP address 172.16.211.30 are failing. The first step I might take would be to see if I’ve been able to learn an ARP entry for this host.

(main) expert@expert-cws:~$ ping 172.16.211.30

PING 172.16.211.30 (172.16.211.30) 56(84) bytes of data.

^C

--- 172.16.211.30 ping statistics ---

3 packets transmitted, 0 received, 100% packet loss, time 2039ms

(main) expert@expert-cws:~$ ip neigh show

172.16.211.30 dev ens160  FAILED

172.16.211.254 dev ens160 lladdr 00:50:56:f0:11:04 STALE

172.16.211.2 dev ens160 lladdr 00:50:56:e1:f7:8a STALE

172.16.211.1 dev ens160 lladdr 8a:66:5a:b5:3f:65 REACHABLE

And the answer is no. The attempt to ARP for 172.16.211.30 “FAILED.”  However, I can see that ARP in general is working on my network, as I have other “REACHABLE” addresses in the table.

Another common use of the “ip neigh” command involves clearing out an ARP entry after changing the IP address configuration of another host (or hosts). For example, if you replace the router on a network, a host won’t be able to communicate with it until the old ARP entry ages out and the system tries ARPing again for a new address. Depending on the operating system, this can take minutes — which can feel like years when waiting for a system to start responding again. The “ip neigh flush” command can clear an entry from the table immediately.
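As a sketch of that workflow, using the addresses from the example output above (the flush commands require root):

```shell
# Look at the current neighbor entry for the old router address (example IP)
ip neigh show to 172.16.211.254

# Flush just that entry so the host re-ARPs immediately (requires root)
sudo ip neigh flush to 172.16.211.254

# Or flush everything learned on a single device
sudo ip neigh flush dev ens160
```

Per the man page, “ip neigh flush” takes the same selectors as “ip neigh show,” so you can be as surgical or as broad as the situation calls for.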

How do I get from here to there? Exploring the “ip route” command

Most of the traffic from a host is destined somewhere on another layer 3 network, and the host needs to know how to “route” that traffic correctly. After looking at the IP address(es) configured on a host, I will often take a look at the routing table to see if it looks like I’d expect. For that, the “ip route” command is the first place I look.

What routes does this host have configured?

(main) expert@expert-cws:~$ ip route show

default via 172.16.211.2 dev ens160 proto dhcp src 172.16.211.128 metric 100 

10.233.44.0/23 via 172.16.211.130 dev ens160 

172.16.211.0/24 dev ens160 proto kernel scope link src 172.16.211.128 

172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown 

It may not look exactly like the output of “show ip route” on a router, but this command provides very usable output.

◉ My default gateway is 172.16.211.2 through the “ens160” device.  This route was learned from DHCP and will use the IP address configured on my “ens160” interface.

◉ There is a static route configured to network 10.233.44.0/23 through address 172.16.211.130

◉ And there are 2 routes that were added by the kernel for the local network of the two configured IP addresses on the interfaces.  But the “docker0” route shows “linkdown” — matching the state of the “docker0” interface we saw earlier.

The “ip route” command can also be used to add or delete routes from the table, but with the same notes as when we used “ip link” to change the MTU of an interface. You’ll need admin rights to run the command, and any changes made will not be maintained after a restart. But this can still be very handy when troubleshooting or working in the lab.
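As a sketch, adding and then removing a static route like the one in the example above would look like this (example prefixes; requires root, and the route is gone after a reboot unless you persist it in your distribution's network configuration):

```shell
# Add a static route via a next hop on the local subnet (example values)
sudo ip route add 10.233.44.0/23 via 172.16.211.130

# Confirm the route made it into the table
ip route show

# Remove it when you are done troubleshooting
sudo ip route del 10.233.44.0/23
```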

And done… or am I?

So that is my “brief” look at the “ip” command for Linux. Oh wait, that bad pun attempt reminded me of one more tip I meant to include. There is a “--brief” option you can add to any of the commands that reformats the data into a nice table that is often quite handy. Here are a few examples.

(main) expert@expert-cws:~$ ip --brief address show

lo               UNKNOWN        127.0.0.1/8 ::1/128 

ens160           UP             172.16.211.128/24 fe80::20c:29ff:fe75:9927/64 

docker0          DOWN           172.17.0.1/16 

(main) expert@expert-cws:~$ ip --brief link show

lo               UNKNOWN        00:00:00:00:00:00 <LOOPBACK,UP,LOWER_UP> 

ens160           UP             00:0c:29:75:99:27 <BROADCAST,MULTICAST,UP,LOWER_UP> 

docker0          DOWN           02:42:9a:0c:8a:ee <NO-CARRIER,BROADCAST,MULTICAST,UP> 

Not all commands have a “brief” output version, but several do, and they are worth checking out.

There is quite a bit more I could go into on how you can use the “ip” command as part of your Linux network administration skillset. (Check out the “--json” flag for another great option.) But at 3,000+ words, I’m going to call this post done for today. If you’re interested in a deeper look at Linux networking skills like this, let me know, and I’ll come back for some follow-ups.
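Since the “--json” flag came up, here is a quick sketch of pairing it with Python’s standard library to work with the output programmatically (this assumes python3 is available on the host; “jq” works just as well if you have it installed):

```shell
# Pretty-print the brief address table as JSON
ip --json --brief address show | python3 -m json.tool

# Pull out just the interface names from the link objects
ip --json link show | python3 -c '
import json, sys
for link in json.load(sys.stdin):
    print(link["ifname"])
'
```

Structured output like this is what makes the “ip” command automation-friendly in a way “ifconfig” never was.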

Source: cisco.com

Sunday, 31 July 2022

500-560 OCSE Exam Questions Bank | Study Guide | On-Premise and Cloud Solutions

Cisco 500-560 OCSE Exam Description:

This exam tests a candidate's knowledge of the skills an engineer needs to support the express specialization networking business customer. It covers switching, routing, wireless, cloud, and security solutions for engagements with smaller business customers.

Cisco 500-560 Exam Overview:

Cisco 500-560 Exam Topics:

  1. Switching Overview and Features- 15%
  2. Routing Overview and Features- 15%
  3. Wireless Overview and Features- 25%
  4. Meraki Overview and Products- 35%
  5. Security Overview and Features- 10%
Must Read:


Cisco 500-560 OCSE Exam Preparation – Step By Step Guide

More than a VPN: Announcing Cisco Secure Client (formerly AnyConnect)

We’re excited to announce Cisco Secure Client, formerly AnyConnect, as the new version of one of the most widely deployed security agents. As the unified security agent for Cisco Secure, it addresses common operational use cases applicable to Cisco Secure endpoint agents. Those who install Secure Client’s next-generation software will benefit from a shared user interface for tighter and simplified management of Cisco agents for endpoint security.

Go Beyond Traditional Secure Access


Swift Endpoint Detection & Response and Improved Remote Access

Now, with Secure Client, you gain improved secure remote access, a suite of modular security services, and a path for enabling Zero Trust Network Access (ZTNA) across the distributed network. The newest capability is in Secure Endpoint as a new module within the unified endpoint agent framework. Now you can harness Endpoint Detection & Response (EDR) from within Secure Client. You no longer need to deploy and manage Secure Client and Secure Endpoint as separate agents, making management more effortless on the backend.

Increased Visibility and Simplified Endpoint Security Agents

Within Device Insights, Secure Client lets you deploy, update, and manage your agents from a new cloud management system inside SecureX. If you choose to use cloud management, Secure Client policy and deployment configuration are done in the Insights section of Cisco SecureX. Powerful visibility capabilities in SecureX Device Insights show which endpoints have Secure Client installed in addition to what module versions and profiles they are using.

The emphasis on interoperability of endpoint security agents helps provide the much-needed visibility and simplification across multiple Cisco security solutions while simultaneously reducing the complexity of managing multiple endpoints and agents. Application and data visibility is one of the top ways Secure Client can be an important part of an effective security resilience strategy.

Source: cisco.com

Thursday, 28 July 2022

Your Network, Your Way: A Journey to Full Cloud Management of Cisco Catalyst Products

At Cisco Live 2022 in Las Vegas, Nevada (June 12-16), there were many announcements about our newest innovations to power the new era of hybrid workspaces, distributed network environments, and our customers’ journey to the cloud. Among the revelations was our strategy to accelerate our customers’ transition to a cloud-managed networking experience.

Our customers asked, and we answered: Cisco announced that Catalyst customers can choose the operational model that best fits their needs: Cloud Management/Monitoring through the Meraki Dashboard or On-Prem/Public/Private Cloud with Cisco DNA Center.

Figure 1: Bringing together the best of both worlds

Note: This article heavily references the following terms:

◉ DNA Mode and Meraki Mode for Catalyst: DNA Mode is a Catalyst device using a DNA license with DNA features, and Meraki Mode is a Catalyst device using a Meraki license with Meraki features.

◉ Monitor and Manage: Cloud Monitoring allows Catalyst devices to have visibility and troubleshooting tools via the Meraki dashboard, while Cloud Management for Catalyst means complete feature parity with Meraki solutions.

So WHY THIS and WHY NOW?


Our Catalyst technology remains the most powerful campus and branch networking platform and the fastest-growing product on the market. Meanwhile, the Meraki dashboard continues to be the simplest cloud management platform, with the highest adoption and deployment on the market. How can we bring these together and give our customers the best of both worlds? Enter Cloud Management and Monitoring for Catalyst. Simplicity without compromise.

And HOW to get started?


Today we have an on-premises management offering through Cisco DNA Center, which is a do-it-yourself, high-touch approach. There are now two ways to implement it: in addition to the existing Cisco DNA Center physical appliances that come in multiple sizes and flavors, we announced at Cisco Live the Cisco DNA Center Virtual Appliance, which runs as a VMware ESXi instance in private data centers or as a virtual machine in public cloud platforms, starting with AWS.

We also have Cisco Meraki Cloud Management, which provides a low-touch experience and simplicity, as Meraki’s slogan suggests: Simplicity at Meraki stands for everything from how we approach product development to user experience.

Executing a Cloud Ready Strategy


Cloud Management: Common Hardware Platforms

Figure 2: Delivering the Next Generation of Networking

On the wired network side, Cisco is focusing on our fixed switching portfolio in the Cisco Catalyst 9000 series. We announced that, starting with the Cisco Catalyst 9300 series, switches will be common hardware that can operate in either DNA or Meraki mode. A Cisco Catalyst 9300 switch can be migrated from DNA Mode to Meraki Mode and fully managed by the Meraki Dashboard. While a Catalyst 9300 in Meraki Mode can be migrated back to DNA Mode, the Meraki MS390 cannot be migrated to a DNA mode of operation.

On the wireless network side, we also announced the first common hardware Access Points, the new Cisco Catalyst 916x Series Wi-Fi 6E Access Points. Those Access Points are built with dual modes: they are capable of booting in either Meraki or DNA modes. That means a Catalyst 916x Access Point can appear on the network as either a Meraki device or a Cisco DNA device, with all the associated monitoring and management capabilities inherent in each platform. The demo goes into detail.

Cloud Migration Details

◉ Cisco IOS-XE 17.8.1 version (or later) is required for the Cisco Catalyst 9300 switch to be migrated to Meraki Mode and managed by the Meraki Dashboard.

◉ When a Catalyst switch or access point is put into Meraki mode of operation, its features align with what is available in the Meraki Dashboard. For example, the Cisco Catalyst 9300 switch in Meraki Mode is aligned with the switching features available for the Cisco Meraki MS390.

◉ You can migrate a standalone or a stack of Cisco Catalyst 9300 switches to Meraki Mode.

◉ Currently, you cannot stack the migrated Cisco Catalyst 9300 with Cisco Meraki MS390.

◉ Like native Meraki devices, once a Catalyst switch or AP is in Meraki Mode, CLI access is unavailable.

◉ Managed devices display their software version as Meraki MS, just like native Meraki devices.

◉ Current supported switching platforms are Cisco Catalyst C9300-24T, C9300-48T, C9300-24P, C9300-48P, C9300-24U, C9300-48U, C9300-24UX, C9300-48UXM, C9300-48UN.

◉ Currently supported modules are C9300-NM-8X, C9300-NM-2Q, C3850-NM-4X.

◉ Current supported Cisco Catalyst Access Points are the Wi-Fi 6E CW APs (9162, 9164 and 9166).

Figure 3: The Migration Process from Cisco Catalyst 9300 DNA Mode to Meraki Mode

Cloud Monitoring: Existing Cisco Catalyst 9000 fixed switches 

Starting with IOS-XE 17.3.4, Cisco Catalyst 9200, 9300, and 9500 series switches in DNA mode with a valid DNA license (Essentials or Advantage) can be added to the Meraki dashboard for monitoring and troubleshooting, providing a single pane of glass with centralized network monitoring, network device visibility, usage, and topology. The Meraki dashboard also lets you see alerts and port information and use diagnostic tools, all in one place.

Figure 4: Cloud Monitoring for Catalyst

Cloud Monitoring Details

◉ Catalyst Switches in DNA mode and with a valid DNA license (single or in a stack) can be monitored via the Meraki dashboard.

◉ Once claimed in the Meraki Dashboard, the switches are automatically tagged “Monitor Only” in the dashboard to distinguish them from fully managed Meraki switches. Aside from this difference, “Monitor Only” Catalyst switches have visibility similar to Meraki MS switches in the dashboard, including a visual representation of connected ports and traffic information.

◉ The Meraki Dashboard displays two serial numbers in the inventory of each Catalyst device. Like migrated Catalyst switches, all switches in monitor mode keep their Catalyst serial number and are assigned a Meraki serial number; both appear in the dashboard to help identify switches.

◉ Monitor-only devices display their software version as IOS-XE. The device is still in DNA Mode which means that the CLI is still enabled, and other DNA features are available.

◉ For monitor-only devices, other management tools can still be used to make changes to devices such as Ansible, CLI, GUI, etc.

◉ Current supported switching platforms are Cisco Catalyst 9200, 9300 and 9500 series. Other platforms are under consideration.

◉ The process to onboard Cisco Catalyst switches for monitoring is done through a guided process using the Meraki onboarding app for Mac, Windows or Linux.

Figure 5: Cloud Monitoring Capabilities

License Flexibility


Our Licensing Team has been working hard to ensure a smooth transition between Modes (DNA and Meraki) from the licensing perspective.

From the common hardware perspective, migrating a Cisco Catalyst 9300 switch to Meraki Mode requires a valid DNA license. At license renewal, you can choose between a Meraki Enterprise or Advanced license depending on the features you need enabled.

The Cisco Catalyst 916x series APs can be purchased with the appropriate licenses based on the management platform: DNA license for Cisco DNA Center or Meraki license for Meraki mode.

On the visibility/monitoring front: A valid DNA Essentials (switch visibility) or DNA Advantage (client visibility) license is required for a device to be onboarded into the Meraki dashboard. The device can still be managed by other tools such as Cisco Prime, the CLI, or third-party tools.

Customer Use Cases


Cloud Monitoring

◉ Catalyst customers not using Cisco DNA Center as the operational platform: You can gain immediate value with cloud monitoring, which provides a view of your network from anywhere, anytime, and a low-effort way to experience the Meraki dashboard.
◉ Customers running a hybrid network of Meraki and Catalyst: You can benefit by bringing your Catalyst hardware into view on the Meraki dashboard with monitoring.

Cloud Management

◉ Customers undergoing a network refresh: Customers who already have Meraki platforms can, upon refresh, choose to adopt Catalyst (APs and switches) into their existing infrastructure.

◉ Current Cisco Catalyst 9300 customers looking to move to cloud operations, for whom the features available in the Meraki Dashboard satisfy their use cases.

Cisco DNA Center Physical/Virtual Appliance

◉ Customers using DNA features with air-gapped or compliance requirements

◉ Customers using DNA features who require a public or private cloud deployment

◉ Customers with requirements for an on-premises management platform

Why is this important?


The benefits are endless

Customers now have the operational flexibility to choose either Meraki dashboard or Cisco DNA Center for the Cisco Catalyst family, providing extensive monitoring and management capabilities while enabling the choice as to where the services are running—on-premises or in the cloud—depending on operational needs, geography, and regional data regulations.

For example, financial organizations that require air-gap protection from internet traffic can utilize an on-premises Cisco DNA Center appliance while a distributed organization that needs to support high-speed Wi-Fi access at retail outlets, branch offices, or emergency popup sites, can deploy the new Cisco Catalyst Wi-Fi 6E Access Points and manage them from the cloud-first Meraki dashboard to simplify remote operations.

Source: cisco.com

Tuesday, 26 July 2022

Perspectives on the Future of Service Provider Networking: Distributed Data Centers and Edge Services

The ongoing global pandemic, now approaching its third year, has profoundly illustrated the critical role of the internet in society, changing the way we work, live, play, and learn. This role will continue to expand as digital transformation becomes even more pervasive. However, connecting more users, devices, applications, content, and data with one another is only one dimension to this expansion.

Another is the new and emerging types of digital experiences such as cloud gaming, augmented reality/virtual reality (AR/VR), telesurgery using robotic assistance, autonomous vehicles, intelligent kiosks, and Internet of Things (IoT)-based smart cities/communities/homes. These emerging digital experiences are more interactive, bandwidth-hungry, latency-sensitive, and they generate massive amounts of data useful for valuable analytics. Hence, the performance of public and private networks will be progressively important for delivering superior digital experiences.

Network performance, however, is increasingly dependent on the complex internet topology that’s evolving from a network of networks to a network of data centers. Data centers are generally where applications, content, and data are hosted as workloads using compute, storage, and networking infrastructure. Data centers may be deployed on private premises, at colocation facilities, in the public cloud, or in a virtual private cloud and each may connect to the public internet, a private network, or both. Regardless, service providers, including but not limited to communication service providers (CSPs) that provide network connectivity services, carrier neutral providers that offer colocation/data center services, cloud providers that deliver cloud services, content providers that supply content distribution services, and software-as-a-service (SaaS) application providers all play a vital role in both digital experiences and network performance. However, each service provider can only control the performance of its own network and associated on-net infrastructure and not anything outside of its network infrastructure (i.e., off-net). For this reason, cloud providers offer dedicated network interconnects so their customers can bypass the internet and receive superior network performance for cloud services.

New and emerging digital experiences depend on proximity

In the past, service providers commonly deployed a relatively small number of large data centers and network interconnects at centralized locations. In other words, that’s one large-scale data center (with optional redundant infrastructure) per geographic region where all applicable traffic within the region would backhaul to. New and emerging digital experiences, however, as referenced above, are stressing these centralized data center and interconnect architectures given their much tighter performance requirements. At the most fundamental level, the speed of light determines how quickly traffic can traverse a network while computational power defines how fast applications and associated data can be processed. Therefore, proximity of data center workloads to users and devices where the data is generated and/or consumed is a gating factor for high quality service delivery of these emerging digital experiences.
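
To make the speed-of-light point concrete, here is a rough back-of-the-envelope sketch. The distances and the fiber refractive index are illustrative assumptions, not figures from this article, but they show why a centralized data center hundreds or thousands of kilometers away can never match an edge site on propagation delay alone:

```python
# Back-of-the-envelope propagation delay: light in silica fiber travels
# at roughly c / 1.47 (the approximate refractive index of silica).
C_KM_PER_MS = 299_792.458 / 1000   # speed of light in vacuum, km per millisecond
FIBER_FACTOR = 1 / 1.47            # approximate slowdown in silica fiber

def rtt_ms(distance_km: float) -> float:
    """Best-case round-trip propagation delay over fiber, in milliseconds."""
    one_way = distance_km / (C_KM_PER_MS * FIBER_FACTOR)
    return 2 * one_way

# A centralized data center 2,000 km away costs roughly 19.6 ms of RTT
# before any queuing or processing; an edge site 50 km away costs ~0.5 ms.
print(round(rtt_ms(2000), 1))   # ~19.6
print(round(rtt_ms(50), 2))     # ~0.49
```

Real-world latency is higher still once serialization, queuing, and processing are added, which only strengthens the case for proximity.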

Consider the following:

◉ High bandwidth video content such as high-definition video on demand, streaming video, and cloud-based gaming. Caching such content closer to the user not only improves network efficiency (i.e., less backhaul), but it also provides a superior digital experience given lower network latency and higher bandwidth transfer rates.

◉ Emerging AR/VR applications represent new revenue opportunities for service providers and the industry. However, they depend on ultra-low network latency and must be hosted close to the users and devices.

◉ Private 5G services, including massive IoT, also represent a significant new revenue opportunity for CSPs. Given the massive logical network scale and massive volume of sensor data anticipated, data center workload proximity will be required to deliver ultra-reliable low-latency communications (URLLC) and massive machine-type communications (mMTC) services, as well as to host 5G user plane functions so that local devices can communicate directly with one another at low latency and high bandwidth transfer rates. Proximity also improves network efficiency by reducing backhaul traffic. That is, proximity enables the bulk of sensor data to be processed locally, while only the sensor data that may be needed later is backhauled.

◉ 5G coordinated multipoint technologies can also provide advanced radio service performance in 5G and LTE-A deployments. This requires radio control functions to be deployed in proximity to the remote radio heads.

◉ Developing data localization and data residency laws are another potential driver for data center proximity to ensure user data remains in the applicable home country.

These are just a few examples that illustrate the increasing importance of proximity between applications, content, and data hosted in data centers with users/devices. They also illustrate how the delivery of new and emerging digital experiences will be dependent on the highest levels of network performance. Therefore, to satisfy these emerging network requirements and deliver superior digital experiences to customers, service providers should transform their data center and interconnect architectures from a centralized model to a highly distributed model (i.e., edge compute/edge cloud) where data center infrastructure and interconnects are deployed at all layers of the service provider network (e.g., local access, regional, national, global) and with close proximity to users/devices where the data is generated and/or consumed.

This transformation should also include the ubiquitous use of a programmable network that allows the service provider to intelligently place workloads across its distributed data center infrastructure as well as intelligently route traffic based upon service/application needs (e.g., to/from the optimal data center), a technique we refer to as intent-based networking. Further, in addition to being highly distributed, edge data centers should be heterogeneous and not one specific form factor. Rather, different categories of edge data centers should exist and be optimized for different types of services and use cases.

Four categories of edge data centers

Cisco, for example, identifies four main categories of edge data centers for edge compute services:

1. Secure access service edge (SASE) for hosting distributed workloads related to connecting and securing users and devices. For example, secure gateways, DNS, cloud firewalls, VPN, data loss prevention, Zero Trust, cloud access security broker, cloud onramp, SD-WAN, etc.

2. Application edge for hosting distributed workloads related to protecting and accelerating applications and data. For example, runtime application self-protection, web application firewalls, bot detection, caching, content optimization, load balancing, etc.

3. Enterprise edge for hosting distributed workloads related to infrastructure platforms optimized for distributed applications and data. For example, voice/video, data center as a service (DCaaS), industrial IoT, consumer IoT, AI/ML, AR/VR, etc.

4. Carrier edge for hosting distributed workloads related to CSP edge assets (e.g., O-RAN) and services including connected cars, private LTE, 5G, localization, content and media delivery, enterprise services, etc.

Of course, the applicability of these different categories of edge compute services will vary per service provider based on the specific types of services and use cases each intends to offer. Carriers/CSPs, for example, are in a unique position because they own the physical edge of the network and are on the path between the clouds, colocation/data centers, and users/devices. At the same time, cloud providers and content providers are in a unique position to bring high-performance edge compute and storage closer to users/devices, whether by expanding their locations and/or hosting directly on the customer’s premises. Similarly, carrier neutral providers (e.g., colocation/data centers) are in a unique position given their dense interconnection of CSPs, cloud providers, content providers, and SaaS application providers.

Figure 1: Distributed data centers and edge services

Benefits of distributed data centers and edge services


Service providers that deploy a highly distributed data center and interconnect architecture will benefit from:

◉ Lower network latency and higher bandwidth transfer rates resulting from edge compute proximity.

◉ Flexible and intelligent placement of edge compute workloads based on service/traffic demands.

◉ Increased network efficiencies including reduced traffic backhaul.

◉ Distributed applications/workloads which tend to be more efficient, scalable, secure, and available.

◉ Digital differentiation including superior delivery of new and emerging digital experiences.

◉ New revenue/monetization opportunities associated with the new and emerging digital experiences.

Some CSPs are already actively moving in this direction on their own or in partnership with cloud and content providers. Service providers that haven’t started their transformation toward a highly distributed edge data center and interconnect architecture need to be aware that competitors intend to fill the void. To deliver superior network performance for the emerging digital experiences, service providers should start this transformation now.

Source: cisco.com