Saturday, 17 April 2021

100-490 RSTECH Free Exam Questions & Answers | CCT Routing and Switching Exam Syllabus


Cisco RSTECH Exam Description:

The Supporting Cisco Routing and Switching Network Devices v3.0 (RSTECH 100-490) is a 90-minute, 60-70 question exam associated with the Cisco Certified Technician Routing and Switching certification. The course, Supporting Cisco Routing and Switching Network Devices v3.0, helps candidates prepare for this exam.

Cisco 100-490 Exam Overview:

Your workforce is ready – but is your workplace?


We’re heading back to the office!!

It won’t happen overnight – but the signs are increasingly positive that we are on our way back. Some companies, like Cisco and Google, have begun encouraging a phased return to the office, once the situation permits. Personally, I can’t wait to be in the same physical space as my colleagues, as well as meeting our customers face-to-face.

Like most of you, I desire the flexibility to choose where I work. Based on our recent global workforce survey, only 9% expect to be in the office 100% of the time. That means IT will need to deliver a consistently secure and engaging experience across both in-office and remote work environments.

Figure 1. The future of work is hybrid

Networking teams need to prepare for the hybrid workplace.


Some people tend to be more prepared than others. Personally, I like to err on the side of being over-prepared. Most network professionals I know are of a similar mindset. So, what does that mean when it comes to the return to the office?

◉ Employee concerns: From our global workplace survey we learned that 95% of workers are uncomfortable about returning to the office due to fears of contracting COVID-19. Leading the concerns at 64% is not wanting to touch shared office devices, closely followed by concerns over riding in a crowded elevator (62%) and sharing a desk (61%).

◉ Business concerns: While businesses need to provide safe and secure work environments, requiring new efforts and solutions, they must also try to mitigate costs and capture savings. One primary approach is using office space more efficiently. Our survey results show that 53% are already looking at options to reduce office space, while 96% indicate they need intelligent workplace technology to improve work environments.

Where to start your return to the office


So, what’s on your mind as we head back to the office? According to IDC, the biggest permanent networking changes you are making as a result of COVID are the integration of networking and security management (32%); improved support for remote workers (30%); and improved network automation, visibility, and analytics (28%). So how can you address these priorities as we head back to the office?


If your car had been sitting unused in the driveway for a year, the first thing you’d do is get it serviced. Likewise, your campus and branch networks need to be put through their paces. Utilization is minimal right now, so it is a great time to see what improvements can be made.

◉ Reimagine Connections: Digital business begins with connectivity, so you can’t take it for granted. You can start by making sure your wired and wireless network can support an imminent return to work. With hybrid work, everyone will have video to the desktop. Will your network performance deliver the experience users love? And make sure it is set up to enhance your employees’ safety and work experience with social density and proximity monitoring, workspace efficiency, and smart building IoT requirements.

◉ Reinforce Security: This is the time to automate security policy management, micro-segmentation, and zero-trust access so that any device or user is automatically authenticated and authorized access only to those resources it’s approved for.

◉ Redefine IT Experience: Make it easy on your team and your business. With automation and AI-enabled analytics technologies, AIOps is now a reality for network operations too. All the tools are available to achieve pre-emptive troubleshooting and remediation from “user/device-to-application” – wherever either is located.

Figure 2. Choose your access networking journey

Here are some valuable tips from our Cisco networking team, which is making the necessary preparations for our own Cisco campuses.


According to our survey, 58% of workers will continue to work from home at least 8 days a month. That means that you need to continue investing efforts in optimizing the experience for those workers. Many of them are still complaining that their work experience is not optimal. According to IDC, 55% of work from home users complain of multiple problems a week, while 50% complain of problems with audio on video conferences.

◉ Work from Home: Deliver plug-and-play provisioning and policy automation that allows your remote employees to easily and securely connect to the corporate network without setting up a VPN.

◉ Home Office: For those that want to turn their home network into a “branch of one” you can create a zero-trust fabric with end-to-end segmentation and always-on network connectivity that provides an enhanced multicloud application experience.


The evolution to a hybrid workforce, together with the accelerated move to the cloud and edge applications, has led to the perfect storm that demands a new approach for IT to deliver a secure user experience regardless of where users and applications are located. This new approach is being offered by a combination of SD-WAN and cloud security technologies that have been termed Secure Access Service Edge, or SASE. It’s estimated that 40% of enterprises will have explicit strategies to adopt SASE by 2024.

◉ SD-WAN: Out of the multiple ways to get started with adopting a full SASE architecture, I would propose that SD-WAN is a wise choice. It offers a secure, mature, and efficient way to access both SaaS and IaaS environments, with multiple deployment and security options.

◉ SASE: Evolving to a full SASE architecture combines networking and security functions in the cloud to deliver secure access to applications, anywhere users work. Combined with SD-WAN, this includes security services such as firewall as a service (FWaaS), secure web gateway (SWG), cloud access security broker (CASB), and zero trust network access (ZTNA).

Source: cisco.com    

Friday, 16 April 2021

Comparing Lower Layer Splits for Open Fronthaul Deployments

Introduction

The transition to open RAN (Radio Access Network) based on interoperable lower layer splits is gaining significant momentum across the mobile industry. However, deciding where best to split the open RAN is a complex compromise between radio unit (RU) simplification, support for advanced coordinated multipoint RF capabilities, and the consequential requirements on the fronthaul transport, including limits on transport delay budgets as well as bandwidth expansion. To help in comparing alternative options, different splits have been assigned numbers, with higher numbers representing splits “lower down” in the protocol stack, meaning less functionality being deployed “below” the split in the RU. Lower layer splits occur below the medium access control (MAC) layer in the protocol stack, with options including Split 6, between the MAC and physical (PHY) layers; Split 7, within the physical layer; and Split 8, between the physical layer and the RF functionality.

Figure 1: Different Lower Layer Splits in the RAN Protocol Stack

This paper compares the two alternatives for realizing the lower layer split, the network functional application platform interface (nFAPI) Split 6 as defined by the Small Cell Forum (SCF) and the Split 7-2x as defined by the O-RAN Alliance.

Small Cell Splits


The Small Cell Forum took the initial lead in defining a multivendor lower layer split, taking its FAPI application programming interface (API), which had been used as an informative split of functionality between small cell silicon providers and small cell RAN protocol stack providers, and enabling it to be “networked” over an IP transport. This “networked” FAPI, or nFAPI, enables the Physical Network Function (PNF) implementing the small cell RF and physical layer to be remotely located from the Virtual Network Function (VNF) implementing the small cell MAC layer and upper layer RAN protocols. First published by the SCF in 2016, the specification of the MAC/PHY split has since been labelled “Split 6” by 3GPP TR38.801, which studied 5G’s New Radio access technology and architectures.

The initial SCF nFAPI program delivered important capabilities that enabled small cells to be virtualized, compared with the conventional macro-approach that at the time advocated using the Common Public Radio Interface (CPRI) defined split. CPRI had earlier specified an interface between a Radio Equipment Control (REC) element implementing the RAN baseband functions and a Radio Equipment (RE) element implementing the RF functions, to enable the RE to be located at the top of a cell tower and the REC to be located at the base of the cell tower. This interface was subsequently repurposed to support relocation of the REC to a centralized location that could serve multiple cell towers via a fronthaul transport network.

Importantly, when comparing the transport bandwidth requirements for the fronthaul interface, nFAPI/Split 6 does not significantly expand the bandwidth required compared to more conventional small cell backhaul deployments. Moreover, just like the backhaul traffic, the nFAPI transport bandwidth is able to vary according to served traffic, enabling statistical multiplexing to be used over the fronthaul IP network. This can be contrasted with the alternative CPRI split, also referred to as “Split 8” in TR38.801, that requires bandwidth expansion up to 30-fold and a constant bit rate connection, even if there is no traffic being served in a cell.
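To put the CPRI comparison in perspective, the rough sketch below estimates the constant bit rate a single 20 MHz LTE carrier consumes on a CPRI link. The 15-bit I/Q sample width, the 16/15 control-word overhead, and the 10/8 line coding factor are assumptions typical of CPRI deployments; unlike nFAPI traffic, this rate is consumed even when the cell is idle.

# Illustrative CPRI (Split 8) bit-rate estimate for one 20 MHz LTE carrier,
# showing why the fronthaul load is constant and scales per antenna.
# Assumed: 30.72 Msps sampling, 15-bit I and Q samples, 16/15 control-word
# overhead, and 8b/10b line coding.
SAMPLE_RATE = 30.72e6     # samples/s for a 20 MHz LTE carrier
SAMPLE_BITS = 15          # bits per I and per Q sample (assumed)
CONTROL_OVERHEAD = 16 / 15
LINE_CODING = 10 / 8      # 8b/10b

def cpri_rate_bps(antennas: int) -> float:
    return SAMPLE_RATE * 2 * SAMPLE_BITS * antennas * CONTROL_OVERHEAD * LINE_CODING

for ant in (1, 2, 4):
    print(f"{ant} antenna(s): {cpri_rate_bps(ant) / 1e9:.4f} Gbps, regardless of cell load")
# 2 antennas -> ~2.4576 Gbps, versus a peak served rate of roughly 150 Mbps
# for the same 20 MHz 2x2 cell, illustrating the order-of-magnitude expansion.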

HARQ Latency Constraints


Whereas nFAPI/Split 6 offers significant benefits over CPRI/Split 8 in terms of bandwidth expansion, both splits are below the hybrid automatic repeat request (HARQ) functionality in the MAC layer that is responsible for constraining the transport delay budget for LTE fronthaul solutions. Both LTE-based Split 6 and Split 8 have a common delay constraint equivalent to 3 milliseconds between when up-link data is received at the radio to the time when the corresponding down-link ACK/NAK needs to be ready to be transmitted at the radio. These 3 milliseconds need to be allocated to HARQ processing and transport, with a common assumption being that 2.5 milliseconds are allocated to processing, leaving 0.5 milliseconds allocated to round trip transport. This results in the oft-quoted delay requirement of 0.25 milliseconds for one way transport delay budget between the radio and the element implementing the MAC layer’s up-link HARQ functionality.
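For clarity, the budget arithmetic above can be made explicit with a minimal sketch that simply encodes the common allocation assumption:

# HARQ timing budget for LTE lower layer splits (Split 6 and Split 8).
HARQ_LOOP_MS = 3.0      # UL data received at radio -> DL ACK/NAK ready at radio
PROCESSING_MS = 2.5     # commonly assumed allocation for HARQ processing

round_trip_transport_ms = HARQ_LOOP_MS - PROCESSING_MS   # 0.5 ms
one_way_transport_ms = round_trip_transport_ms / 2       # 0.25 ms

print(f"Round-trip fronthaul budget: {round_trip_transport_ms} ms")
print(f"One-way fronthaul budget:    {one_way_transport_ms} ms")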

The Small Cell Forum acknowledges such limitations when using its nFAPI/Split 6. Because the 0.25 millisecond one-way transport delay budget may severely constrain nFAPI deployments, SCF defines the use of HARQ interleaving, which uses standardized signaling to defer HARQ buffer emptying, enabling higher latency fronthaul links to be accommodated. Although HARQ interleaving buys additional transport delay budget, the operation has a severe impact on single UE throughput; as soon as the delay budget exceeds the constraint described above, the per UE maximum throughput is immediately decreased by 50%, with further decreases as delays in the transport network increase.

Importantly, 5G New Radio does not implement the same synchronous up-link HARQ procedures and therefore does not suffer the same transport delay constraints. Instead, the limiting factor constraining the transport budget in 5G fronthaul systems is the operation of the windowing during the random access procedure. Depending on the operation of other vendor specific control loops, e.g., associated with channel estimation, this may enable increased fronthaul delay budgets to be used in 5G deployments.

O-RAN Alliance


The O-RAN Alliance published its “7-2x” Split 7 specification in February 2019. All Split 7 alternatives offer significant benefits over the legacy CPRI/Split 8, avoiding Split 8’s requirement to scale fronthaul bandwidth on a per antenna basis, resulting in significantly lower fronthaul transport bandwidth requirements, as well as introducing transport bandwidth requirements that vary with served traffic in the cell. Moreover, when compared to Split 6, the O-RAN lower layer Split 7-2x supports all advanced RF combining techniques, including the higher order multiple-input, multiple-output (MIMO) capability that is viewed as a key enabling technology for 5G deployments, as shown in Table 1, which can be used to contrast Split 6 “MAC/PHY” with Split 7 “Split PHY” based architectures.

Table 1: Comparing Advanced RF Combining Capabilities of Lower Layer Splits

However, instead of supporting individual transport channels over the nFAPI interface,  Split 7-2x defines the transport of frequency domain IQ defined spatial streams or MIMO layers across the lower layer fronthaul interface. The use of frequency domain IQ symbols can lead to a significant increase in fronthaul bandwidth when compared to the original transport channels. Figure 2 illustrates the bandwidth expansion due to Split 7-2 occurring “below” the modulation function, where the original 4 bits to be transmitted are expanded to over 18 bits after 16-QAM modulation, even when using a block floating point compression scheme.


Figure 2: Bandwidth Expansion with Block Floating Point Compressed Split 7-2x

The bandwidth expansion is a function of the modulation scheme, with higher expansion required for lower order modulation, as shown in Table 2.

Table 2: Bandwidth Expansion for Split 7-2x with Block Floating Point Compression compared to Split 7-3
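As a rough illustration of why lower-order modulation suffers a larger expansion, the sketch below assumes 9-bit block floating point mantissas per I and Q component, consistent with the 4-bit to 18-bit 16-QAM example in Figure 2; the exact figures in Table 2 also account for the shared exponent and other overheads.

# Indicative Split 7-2x expansion over Split 7-3 for block floating point (BFP)
# compressed frequency-domain IQ samples.
# Assumption: 9 bits per I and per Q component (18 bits per resource element).
BFP_BITS_PER_RE = 18

modulations = {"QPSK": 2, "16-QAM": 4, "64-QAM": 6, "256-QAM": 8}

for name, bits_per_symbol in modulations.items():
    expansion = BFP_BITS_PER_RE / bits_per_symbol
    print(f"{name:8s}: {bits_per_symbol} bits -> {BFP_BITS_PER_RE} bits (~{expansion:.2f}x)")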

Such a bandwidth expansion was one of the reasons that proponents of the so-called Split 7-3 advocated a split that occurred “above” the modulation/demodulation function. In order to address such issues, and the possible fragmentation of different Split 7 solutions, the O-RAN Alliance lower layer split includes the definition of a technique termed modulation compression. The operation of modulation compression of a 16-QAM modulated waveform is illustrated in Figure 3. The conventional Split 7-2 modulated constellation diagram is shifted to enable the modulation points to lie on a grid that then allows the I and Q components to be represented in binary instead of floating point numbers. Additional scaling information is required to be signalled across the fronthaul interface to be able to recover the original modulated constellation points in the RU, but this only needs to be sent once per data section.

Figure 3: User Plane Bandwidth Reduction Using Modulation Compression with Split 7-2x
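To make the idea in Figure 3 concrete, here is a deliberately simplified sketch, not the exact O-RAN encoding: shifting and scaling the 16-QAM constellation lets each I and Q component be carried as a small unsigned integer, and the shift and scale values are what must be signalled once per data section so the RU can undo them.

# Simplified illustration of modulation compression for 16-QAM.
# Un-normalised 16-QAM I/Q amplitudes take values {-3, -1, +1, +3}; shifting by
# +3 and halving maps them onto the grid {0, 1, 2, 3}, so each component fits
# in 2 bits and a resource element in 4 bits, matching the modulation bits.
import itertools

LEVELS = [-3, -1, 1, 3]     # 16-QAM I/Q amplitudes (un-normalised)
SHIFT, SCALE = 3, 0.5       # signalled across the fronthaul (assumed values)

for i_amp, q_amp in itertools.product(LEVELS, repeat=2):
    i_int = int((i_amp + SHIFT) * SCALE)   # 0..3 -> 2 bits
    q_int = int((q_amp + SHIFT) * SCALE)   # 0..3 -> 2 bits
    assert 0 <= i_int <= 3 and 0 <= q_int <= 3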

Because modulation compression requires the in-phase and quadrature points to be perfectly aligned with the constellation grid, it can only be used in the downlink. However, when used, it decreases the bandwidth expansion ratio of Split 7-2x, where the expansion compared to Split 7-3 is now only due to the additional scaling and constellation shift information. This information is encoded as 4 octets and sent every data section, meaning the bandwidth expansion ratio will vary according to how many Physical Resource Blocks (PRBs) are included in each data section. This value can range from a single PRB up to 255 PRBs, with Table 3 showing that the corresponding Split 7-2x bandwidth expansion ratio over Split 7-3 is effectively unity when operating with large data sections.

Table 3:  Bandwidth Expansion for Split 7-2x with Modulation Compression compared to Split 7-3

Note that even though modulation compression is only applicable to the downlink (DL), the shift of new frequency allocations to Time Division Duplex (TDD) enables a balancing of effective fronthaul throughput between uplink (UL) and downlink. For example, in LTE, 4 of the 7 possible TDD configurations have more slots allocated to downlink traffic, compared to 2 possible configurations that have more slots allocated to the uplink. Using a typical 12-to-6 DL/UL configuration, with 256-QAM and 10 PRBs per data section, the overall balance of bitrates for modulation compression in the downlink and block floating point compression in the uplink will be (1.03 x 12) to (2.33 x 6), or 12.40:13.98, i.e., a relatively balanced link as it relates to overall bandwidth.
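The sketch below reproduces that arithmetic. It assumes the 4-octet scaling and constellation-shift overhead is sent once per data section and that a data section otherwise carries just the modulated bits for its PRBs (12 subcarriers each); the 2.33x uplink figure is the block floating point ratio quoted above for 256-QAM.

# Downlink Split 7-2x overhead with modulation compression, and the DL/UL
# fronthaul balance for a 12-to-6 TDD split (assumptions as stated above).
OVERHEAD_BITS = 4 * 8        # scaling + constellation shift per data section
SUBCARRIERS_PER_PRB = 12

def dl_expansion_vs_split73(n_prbs: int, bits_per_symbol: int) -> float:
    payload_bits = n_prbs * SUBCARRIERS_PER_PRB * bits_per_symbol
    return 1 + OVERHEAD_BITS / payload_bits

dl_ratio = dl_expansion_vs_split73(n_prbs=10, bits_per_symbol=8)  # 256-QAM
ul_ratio = 2.33                                                   # from the text
dl_slots, ul_slots = 12, 6

print(f"DL expansion ratio: {dl_ratio:.2f}")                      # ~1.03
print(f"DL:UL load = {dl_ratio * dl_slots:.2f} : {ul_ratio * ul_slots:.2f}")
# -> roughly 12.4 : 13.98, i.e. a fairly balanced link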

A more comprehensive analysis by the O-RAN Alliance has examined control and user-plane scaling requirements for Split 7-2x with modulation compression and compared the figures with those for Split 7-3. When taking into account other overheads, this analysis indicated that the difference in downlink bandwidth between Split 7-3 and Split 7-2x with Modulation Compression was estimated to be around 7%. Using such analysis, it is evident why the O-RAN Alliance chose not to define a Split 7-3, instead advocating a converged approach based on Split 7-2x that can be used to address a variety of lower layer split deployment scenarios.

Comparing Split 7-2x and nFAPI


Material from the SCF clearly demonstrates that, in contrast to Split 7, their nFAPI/Split 6 approach is challenged in supporting massive MIMO functionality that is viewed as a key enabling technology for 5G deployments. However, massive MIMO is more applicable to outdoor macro-cellular coverage, where it can be used to handle high mobility and suppress cell-edge interference use cases. Hence, there may be a subset of 5G deployments where massive MIMO support is not required, so let’s compare the other attributes.

With both O-RAN’s Split 7-2x and SCF’s nFAPI lower layer split occurring below the HARQ processing in the MAC layer, both are constrained by exactly the same delay requirements as it relates to LTE HARQ processing and fronthaul transport budgets. Both O-RAN’s Split 7-2x and SCF’s nFAPI lower layer split permit the fronthaul traffic load to match the served cell traffic, enabling statistical multiplexing of traffic to be used within the fronthaul network. Both O-RAN’s Split 7-2x and SCF’s nFAPI/Split 6 support transport using a packet transport network between the Radio Unit and the Distributed Unit.

The managed object for the SCF’s Physical Network Function includes the ability for a single Physical Network Function to support multiple PNF Services. A PNF service can correspond to a cell, meaning that a PNF can be shared between multiple operators, whereby the PNF operator is responsible for provisioning the individual cells. This provides a foundation for implementing Neutral Host. More recently, the O-RAN Alliance’s Fronthaul Working Group has approved a work item to enhance the O-RAN lower layer split to support a “shared O-RAN Radio Unit” that can be parented to DUs from different operators, thus facilitating multi-operator deployment.

Both SCF and O-RAN Split 7-2x solutions have been influenced by the Distributed Antenna System (DAS) architectures that are the primary solution for bringing the RAN to indoor locations. The SCF leveraged the approach to DAS management when defining its approach to shared PNF operation. In contrast, O-RAN’s Split 7-2x has standardized enhanced “shared cell” functionality where multiple RUs are used in creating a single cell. This effectively uses the eCPRI based fronthaul to replicate functionality normally associated with digital DAS deployments.

Comparing fronthaul bandwidth requirements, it’s evident that  the 30-fold bandwidth expansion of CPRI was one of the main reasons for SCF to embark on its nFAPI specification program. However, the above analysis highlights how O-RAN has delivered important capabilities in its Split 7-2x to limit the necessary bandwidth expansion and avoid fragmentation of the lower layer split market between alternative split PHY approaches. Hence, the final aspect when comparing these alternatives is how much the bandwidth is expanded when going from Split 6 to Split 7-2x. Figure 1 illustrates that the bandwidth expansion between Split 6 and Split 7-3 is due to the operation of channel coding. With O-RAN having already estimated that Split 7-3 offers a 7% bandwidth savings compared to Split 7-2x with Modulation Compression, we can use the channel coding rate to estimate the bandwidth expansion between Split 6 and Split 7-2x. Table 4 uses typical LTE coding rates for 64QAM modulation to calculate the bandwidth expansion due to channel coding. This is combined with the additional 7% expansion due to Modulation Compression to estimate the differences in required bandwidth. This table shows that the difference in bandwidth between nFAPI/Split 6 and Split 7-2x is a function of channel coding rate and can be as high as 93% for 64QAM with 1/2 rate code, and as low as 16% for 64 QAM with 11/12 rate code.

Table 4: Example LTE 64QAM Channel Coding Bandwidth Expansion
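Those percentages can be approximated from first principles: moving channel coding above the RU scales the bandwidth by the inverse of the coding rate, and modulation compression adds roughly 7% relative to Split 7-3. The effective code rates used below are illustrative assumptions; the exact values are those behind Table 4.

# Rough estimate of the extra fronthaul bandwidth of Split 7-2x (with
# modulation compression) relative to nFAPI/Split 6.
MOD_COMP_OVERHEAD = 1.07   # Split 7-2x vs Split 7-3 (approx., from the text)

code_rates = {
    "64QAM, ~1/2 rate (0.554 effective, assumed)": 0.554,
    "64QAM, 11/12 rate": 11 / 12,
}

for label, rate in code_rates.items():
    expansion = MOD_COMP_OVERHEAD / rate - 1
    print(f"{label}: ~{expansion * 100:.0f}% more bandwidth than Split 6")
# -> roughly 93% and 17%, in line with the figures quoted above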

Whereas the above analysis indicates that the cost of implementing the channel coding above the RU in Split 7-2x is a nominal increase in bandwidth, the benefit of such an approach is the significant simplification of the RU by removing the need to perform channel decoding. Critically, the channel decoder requires highly complex arithmetic and can become the bottleneck in physical layer processing. Often, this results in the use of dedicated hardware accelerators that can add significant complexity and cost to the nFAPI/Split 6 Radio Unit. In contrast, O-RAN’s Split 7-2x allows the decoding functionality to be centralized, where it is expected that it can benefit from increased utilization and associated efficiencies, while simplifying the design of the O-RAN Radio Unit.

Source: cisco.com

Thursday, 15 April 2021

Get Hands-on Experience with Programmability & Edge Computing on a Cisco IoT Gateway

Are you still configuring your industrial router with CLI? Are you still getting network telemetry data with SNMP? Do you still use many industrial components when you can just have one single ruggedized IoT gateway that features an open edge-compute framework, cellular interfaces, and high-end industrial features?

Also Read: 200-201: Threat Hunting and Defending using Cisco Technologies for CyberOps (CBROPS)

Get ready to try out these features in an all-new learning lab and DevNet Sandbox featuring real IR1101 ruggedized hardware.

◉ Take me to the new learning lab

◉ Take me directly to the Industrial Networking and Edge Compute IR1101 Sandbox

Architecture and feature overview of industrial networking and edge compute in the IR1101 Sandbox

The Industrial Router 1101


The Cisco IoT Gateway IR1101 delivers secure IoT connectivity for today and the future. Its 5G-ready modular design allows you to upgrade to new communications protocols when they become available, avoiding costly rip-and-replace. Add or upgrade WAN, edge compute, and storage components as technologies and your needs evolve. With its rugged hardware and compact form factor, you can install it almost anywhere.

Here are a few examples of use cases for the IR1101:

Utilities: Remotely manage thousands of miles of unmanned power grids between distribution substations and control centers. Improve power flow, Volt-VAR optimization, and fault detection and isolation, resulting in reduced outage durations and costs.

Public safety and transportation: The IR1101 provides redundant WAN connectivity for increased reliability. And with intelligence at the edge, you can accelerate decision making for mission-critical applications such as public safety, so you can better regulate traffic flow and detect traffic violations.

Oil and gas: Make decisions at the edge for faster response. Utilize cellular redundancy to manage thousands of miles of remote oil and gas pipelines to quickly identify and fix problems, limit downtime, and reduce costs.

WebUI & high-end industrial feature-set


Get familiar with the user-friendly on-box Device Manager (WebUI) as seen below. Users can easily navigate in their browser through the monitoring data, configuration, and settings of their industrial device.

Graphical User interface on the IR1101

Of course, you can also access many other specific ruggedized industrial features via SSH, like QoS, VPN, and seamless integration to SCADA with Raw Socket, DNP3 Serial/IP, and IEC 60870 T101/T104 protocol translation.

IOx Edge Compute


Furthermore, it is possible to install containerized applications directly on the gateway. Try deploying your Docker containers / IOx applications on the ARM-powered CPU of the IR1101. We have prepared a sample server application on the DevNet Code Exchange which you can download or build.

On-boxed IOx Local Manager: Managing your IOx applications on the IR1101 – here NGINX server is installed and reachable on Port 8000

Device APIs NETCONF/RESTCONF & Model-Driven Telemetry


Since this device runs Cisco’s open and programmable operating system IOS XE, you can even configure it via device-level APIs such as NETCONF/RESTCONF. This means, for example, that you can change any device configuration by simply running a Python script from your local machine and apply the changes to as many devices as you want.
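As a minimal sketch, the example below reads and then changes the hostname of an IOS XE device such as the IR1101 over RESTCONF; the address and credentials are placeholders, and RESTCONF (over HTTPS) must be enabled on the device first.

# Minimal RESTCONF example against an IOS XE device such as the IR1101.
# Host and credentials are placeholders; enable RESTCONF on the device first.
import json
import requests

HOST = "10.10.20.30"                    # placeholder management address
AUTH = ("developer", "C1sco12345")      # placeholder credentials
HEADERS = {
    "Accept": "application/yang-data+json",
    "Content-Type": "application/yang-data+json",
}
BASE = f"https://{HOST}/restconf/data"

# Read the current hostname from the Cisco-IOS-XE-native YANG model
resp = requests.get(f"{BASE}/Cisco-IOS-XE-native:native/hostname",
                    auth=AUTH, headers=HEADERS, verify=False)
print(resp.json())

# Change the hostname with a PATCH against the same resource
payload = {"Cisco-IOS-XE-native:hostname": "ir1101-lab"}
resp = requests.patch(f"{BASE}/Cisco-IOS-XE-native:native/hostname",
                      data=json.dumps(payload), auth=AUTH,
                      headers=HEADERS, verify=False)
print(resp.status_code)   # 204 means the change was applied

For counters you want to watch continuously, however, polling an API like this does not scale well, which is exactly where model-driven telemetry comes in.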

Model-driven Telemetry (MDT) provides a mechanism to stream data from an MDT-capable device (here, the IR1101) to a destination (e.g., a database and dashboard).

It uses a new approach to network monitoring in which data is streamed continuously from network devices using a publish/subscribe model, providing near real-time access to operational statistics. Applications can subscribe to the specific data items they need by using standards-based YANG data models over open protocols. Structured data is published at a defined cadence or on change, based upon the subscription criteria and data type.
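As an illustration of the data models involved, the sketch below uses ncclient to pull, on demand over NETCONF, the same CPU-utilization leaf that a streaming subscription would typically publish (see the Grafana dashboard below); the address and credentials are placeholders, and netconf-yang must be enabled on the device.

# On-demand NETCONF read of the CPU utilization leaf that model-driven
# telemetry would typically stream. Host and credentials are placeholders.
from ncclient import manager

with manager.connect(
    host="10.10.20.30",          # placeholder management address
    port=830,
    username="developer",        # placeholder credentials
    password="C1sco12345",
    hostkey_verify=False,
) as m:
    # XPath into the Cisco-IOS-XE-process-cpu-oper model
    xpath = "/process-cpu-ios-xe-oper:cpu-usage/cpu-utilization/five-seconds"
    reply = m.get(filter=("xpath", xpath))
    print(reply.xml)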

The operational data of the IR1101 is transmitted via gRPC (a high performance open-source universal RPC framework) to a 3rd party collector or receiver, in our example to a Telegraf/InfluxDB/Grafana stack.

Sample Grafana Dashboard in the sandbox: Near real-time monitoring of the CPU utilization on the IR1101 with model-driven telemetry

Source: cisco.com

Tuesday, 13 April 2021

Year 2020 and EWC – Embedded Wireless Controller on AP


What a year 2020 was, and still what success for Cisco Embedded Wireless Controller!

Despite COVID-19 transforming our lives, despite the challenges of working in a virtual environment for many of us, the C9100 EWC had an excellent year.

We had many thousands of EWC software downloads, and C9100 EWC product bookings increased quarter after quarter. We had more than 200 customers controlling 13K+ Access Points!

Let’s summarize some learnings from customers’ experience with EWC in 2020:

Why are customers so interested in EWC, how does EWC address their needs?

The short story: The EWC gives them the full Catalyst 9800 experience while running in a container on the Access Point itself.

The long story:  For small and medium businesses, EWC is the sweet spot to manage the wireless networks. It is simple to use, secure by design, and above all ready to grow once the business grows, due to its flexible architecture. Once your network grows beyond 100 APs, it can be easily migrated to an appliance Controller or a cloud-based Controller. Therefore it offers investment protection.

The EWC is supported on all 11ax APs, and the scale varies from 50 APs/1000 clients (C9105AXI, C9115AX, C9117AX) to 100 APs/2000 clients (C9120AX, C9130AX). With such a scale, a medium site or a branch deployment is given the advantage of an integrated Wireless Controller, so no other physical hardware is needed.

What EWC features/capabilities are most sought by the customers?

The short story: The EWC is an all-in-one Controller, combining the best-in-class Cisco RF innovations of an 11ax Access Point with the advanced enterprise features of a Cisco Controller.

The long story: Firstly, the most appealing 11ax AP Cisco RF innovations for the customers:

◉ RF Signature Capture provides superior security for mission-critical deployments.

◉ 11ax APs offer Zero-Wait, Dual Filter DFS (Dynamic Frequency Selection). 9120/9130 APs will use both client-radio and Cisco RF ASIC to detect radar and to virtually eliminate DFS false positives.

◉ Cisco APs implement aWIPS feature (adaptive Wireless Intrusion Prevention System). This is a threat detection and mitigation mechanism using signature-based techniques, traffic analysis, and device/topology information. It is a full infrastructure-integrated solution.

In addition, a list of EWC enterprise-ready features that customers are looking for:

◉ AAA Override on WLANs (SSIDs) – the administrator can configure the wireless network for RADIUS authentication and apply VLAN/QOS/ACLs to individual clients based on AAA attributes from the server.

◉ Full support for the latest WPA3 Security Standard and for Advanced Wireless Intrusion Prevention (aWIPS).

◉ AVC (Application Visibility and Control) – the administrator can rate limit/drop/mark traffic based on client application.

◉ Controller Redundancy – any 11ax AP could play the Active/Standby role. EWC has the flexibility to designate the preferred Standby Controller AP.

◉ Identify Apple iOS devices and apply prioritization of business applications for such clients.

◉ mDNS Gateway – forwarding Bonjour traffic by re-transmitting the traffic between reflection-enabled VLANs.

◉ Integration with Cisco Umbrella for blocking malicious URLs, malware, and phishing exploits.

◉ Programmable interfaces with NETCONF/Yang for automation, configuration, and monitoring.

◉ Software Maintenance Upgrades (SMUs) can be applied to either Controller software or AP software.

Ok, we see a lot of interesting features, but with so many features, a certain degree of complexity is expected. The next question coming to mind is:

How about the ease of use of the EWC?

As per reports from the field, the device can be configured in eight minutes in the Day-0 configuration using the WebUI (Smart Dashboard) and the mywifi.cisco.com URL.

The WebUI has been reported as being ‘very straightforward’.

There is no need to reboot the AP after Day-0 configuration is applied.

A quote from a third-party assessment (Miercom) says everything: “The Cisco EWC solution is one of the easiest wireless products to deploy that we’ve encountered to date.”

The user configures a shortlist of items in Day-0 (either in WebUI or in CLI): username/password, AP Profile, WLAN, wireless profile policy, and the default policy tag.

An alternative to WebUI is the mobile app from either Google Play or Apple App Store. The app allows the user to bring up the device in Day-0, or to view the fleet of APs, the top list of clients, or any other wireless statistics.

The EWC WebUI is very similar to the 9800 WebUI, so a potential transition to an appliance-based Controller is seamless. Please see the snapshot below:

Trying the EWC WebUI yourself is the most convincing demonstration of its ease of use.


What else did customers like in 2020 regarding EWC?

A couple of EWC deliverables in release 17.4 were welcomed by customers:

◉ DNA license-free availability for EWC reduces the total cost of ownership while still giving customers the advantage of having the Network Essentials stack by default.

◉ New Access Point 9105 models (9105AXI, 9105AXW) give customers value options for their network deployment through EWC (on the 9105AXI).

Regarding the new 9105 Access Points, the 11ax feature-set is rich: 2×2 MU-MIMO with two spatial streams, uplink/downlink OFDMA, TWT, BSS coloring, 802.11ax beamforming, 20/40/80 MHz channels.

9105AXI has a 1×1.0 mGig uplink interface, while the wall-mountable version (9105AXW) has 3×1.0 mGig interfaces, a USB port, and a Passthru port.

Upcoming IOS-XE releases in 2021 already have new and interesting EWC features planned, so please stay tuned!

Bottom line


EWC proved last year to be a simple, flexible, and secure platform of choice for small and medium business customers, and EWC customer adoption grew continuously throughout 2020.

Source: cisco.com

Monday, 12 April 2021

What are you missing when you don’t enable global threat alerts?


Network telemetry is a reservoir of data that, if tapped, can shed light on users’ behavioral patterns, weak spots in security, potentially malicious tools installed in enterprise environments, and even malware itself.

Global threat alerts (formerly Cognitive Threat Analytics, or CTA) is great at taking an enterprise’s network telemetry and running it through a pipeline of state-of-the-art machine learning and graph algorithms. After processing the traffic data in batches within a matter of hours, global threat alerts correlates all the user behaviors, assigns priorities, and groups detections intelligently, to give security analysts clarity into what the most important threats are in their network.

Smart alerts

All detections are presented in a context-rich manner, which gives users the ability to drill into the specific security events that support the threat detections that are eventually grouped into alerts. This is useful because just detecting potentially malicious traffic in your infrastructure isn’t enough; analysts need to build an understanding of each threat detection. This is where global threat alerts saves you time investigating alerts and accelerates resolution.

Figure 1: Extensive context helps security analysts understand why an alert was triggered and the reasons behind the conviction.

As depicted below in Figure 2, users can both change the severity levels of threats and rank high-priority asset groups from within the global threat alerts portal. This enables users to customize their settings to only alert them to the types of threats that their organizations are most concerned about, as well as to indicate which resources are most valuable. These settings allow the users to set proper context for threat alerts in their business environment.

Figure 2: You change the priority of threats and asset groups from within the global threat alerts portal.

Global threat alerts are also presented in a more intuitive manner, with multiple threat detections grouped into one alert based on the following parameters:​

◉ Concurrent threats: Different threats that are occurring together.​

◉ Asset group value: Threats occurring on endpoints that belong to asset groups with similar business value.

Figure 3: Different threats that have been grouped together in one single alert, because they are all happening concurrently on the same assets.

Rich detection portfolio


Global threat alerts is continuously tracking and evolving hundreds of threat detections across various malware families, attack patterns, and tools used by malicious actors.

All these outcomes and detections are available for Encrypted Traffic Analytics telemetry (ETA) as well, which allows users to find threats in encrypted traffic without the need to decrypt that traffic. Moreover, because ETA telemetry contains more information than traditional NetFlow, the global threat alerts’ research team has also developed specific classifiers that are capable of finding additional threats in this data, such as with algorithms that are focused on detecting malicious patterns in the path and the query of a URL.

The global threat alerts’ research team is continuously engaged in dissecting new security threats and implementing the associated threat intelligence findings into hundreds of specialized classifiers. These classifiers are targeted at revealing campaigns that attackers are using on a global scale. Examples of these campaigns include the Maze ransomware and the njRAT remote access trojan. Numerous algorithms are also designed to capture generic malicious tactics like command-and-control traffic, command-injections, or lateral network movements.

Risk map of the internet


There are numerous algorithms focused on uncovering threat infrastructure in the network. These models are continuously discovering relationships between known malicious servers and new servers that have not yet been defined as malicious, but either share patterns or client bases with the known malicious servers. These models also constantly exchange newly identified threat intelligence with other Cisco security products and groups, such as Talos.

Figure 4: Analyzing common users of known malicious infrastructure and unclassified servers, global threat alerts can uncover new malicious servers.

This complex approach of threat detection consists of multiple layers of machine learning algorithms to provide high-fidelity detections that are always up-to-date and relevant, as researchers are updating the machine models constantly. Additionally, all this computation is done in the cloud and utilizes only network telemetry data to derive new findings. The findings and alerts are presented to users in Secure Network Analytics and Secure Endpoint.

Global threat alerts uses state-of-the-art algorithms to provide high-fidelity, unique threat detections for north-south network traffic, Smart Alerts to help prioritize and accelerate resolutions, and a risk map to provide greater context and understanding of how threats span across the network.

Sunday, 11 April 2021

Cisco IOS XE – Past, Present, and Future


From OS to Industry-leading Software Stack 

Cisco Internetwork Operating System (IOS) was developed in the 1980s for the company’s first routers that had only 256 KB of memory and low CPU processing power. But what a difference a few decades make. Today IOS-XE runs our entire enterprise portfolio: 80 different Cisco platforms for access, distribution, core, wireless, and WAN, with a myriad of combinations of hardware and software, forwarding, and physical and virtual form factors.

Many people still call Cisco IOS XE an operating system. But it’s more appropriately described as an enterprise networking software stack. At 190 million lines of code from Cisco—and more than 300 million lines of code when you include vendor software development kits (SDKs) and open-source libraries—IOS XE is comparable to stacks from Microsoft or Apple.  

During the transition of IOS XE to encompass the entire enterprise networking portfolio, within every four-month release cycle our global development team of more than 3000 software engineers averaged the introduction of four new products. IOS-XE now supports more than 20 different ASIC families developed by Cisco and other vendors. We develop over 700 new features per year. It’s a huge undertaking to get this done systematically. It requires the right development environment and software engineering practices that scale the team to the amount of code necessary for our product portfolio. 

Here is a look back at how the IOS XE software stack was conceived and the continuous evolution of its capabilities, based on the work of the Polaris team. The team is tasked with providing the right development environment for the current portfolio and the evolving needs of the emerging new class of products. 

IOS Origins 

The early releases of IOS consisted of a single embedded development environment that included all the functionality required to build a product. Our success comes from managing the growth of functionality and scaling configuration models, performance, and hardware support in a systematic, though embedded-systems-centric, manner.

In 2004, Cisco developers built IOS XE for the Cisco ASR 1000 Series Aggregation Services Router family. IOS XE combined a Linux kernel and independent processes to achieve separation of the control plane from the data plane. In the new code and development model we introduced, we began the journey of moving to a database-centric programming model. From the first shipment of the ASR 1000, every state update to the data path goes into and out of the in-memory database.

In 2014, the IOS XE development team was put together to drive the software strategy for Enterprise Networking. The entire switching portfolio moved to IOS-XE with the industry-leading Catalyst 9000 range of products. The pivot to evolving IOS XE into a distributed scale-out infrastructure relied on our deep experience of in-memory databases with database replication capabilities and a full, remotely accessible graph database. The elastic wireless controller 9800 represents the successful introduction of these new capabilities.  

When the IOS XE development team was formed, there was a common misconception that small, low-end systems with tiny footprints couldn‘t share the same software with very large-scale systems. We have successfully disproved that. IOS XE now runs on everything from tiny IoT routers to large modular systems. It is proving to be a significant strength as we move forward since the ability to fit on small systems means improved efficiency that translates to better outcomes on larger systems. What started as a challenge is now a transformational strength. 

Why is a Stack Important? 

An OS is only a very tiny part of the full functionality of a complete software development environment. The IOS XE enterprise networking software stack features a deep integration of all layers with a conceptual and semantic integrity.  

IOS XE software layers include application, software development language, middleware, managed runtime, graph database, transactional store, system libraries, drivers, and the Linux kernel. Our managed runtime enables common functionality to be rapidly deployed to a large amount of existing code seamlessly. The goal of the development environment is to facilitate cloud native, single control, and a monitoring point to operate at enterprise scale with fine-grained multi-tenancy everywhere. 

The great value in having the same software is that you have the same software development model that all developers follow. This represents the internal SDK for Cisco Enterprise Networking software engineers. All of our standards-based APIs are a single, often automated translation away. The ability to get total system visibility and control is vital in the days ahead to get to a networking system that does not look like a set of independent point solutions. 

What is IOS XE? 

There are many types of systems that can be built by different competent teams attempting to solve the same problem. The guiding themes behind IOS XE include: 

◉ Asynchronous end-to-end, because synchronous calls can be emulated, if necessary, but the reverse is not true. On low-footprint systems it is key to optimizing performance. 

◉ Cooperative scheduled run-to-completion is how all IOS XE code functions. It utilizes our experience developing IOS to provide the most CPU-efficient choice and the best model for strongly IO-bound workloads. 

◉ It’s a deterministic system that makes the root cause of issues easier to fix and makes stateful process restart support easier to design.

◉ A lossless system, IOS XE depends on end-to-end backpressure rather than any loss of information in processing layers. Reasoning about how a system functions in the presence of loss is impossible.  

◉ Its transactional nature produces a deep level of correctness across process restarts by reverting deterministically to a known stability point before a current inflight transaction started. This helps prevent fate sharing and crashes in other cooperating processes that work off the same database. 

◉ Formal domain specific languages provide specifications that permit build-time and runtime checking.  

◉ Closed-loop behavior provides resiliency by imposing positive feedback control on developed systems instead of depending on “fire and forget” open-loop behavior.

During the last seven years of development, the IOS XE team via the Polaris project has focused on the following areas. 

Developing Our Own Managed Runtime Environment

The team has developed a managed runtime that essentially allows processes to run heap-less with state stored in the in-memory database. The Crimson toolchain provides language-integrated support for the internal data modeling domain-specific language (DSL), known as The Definition Language (TDL). The use of type-inferencing facilitates a succinct human interface to generated code for messaging and database access. The toolchain integration with language extensions also enables the rapid addition of new capabilities to migrate existing code to meet new expectations. Deep support for a systematic move to increasing multi-tenancy requirements is part of this development environment.

Graph Query/Update Language

The Graph Execution Engine (GREEN) gives remote query and update capabilities to the graph database. It’s a formal way to interact natively using system software development kits (SDKs). All state transfer internally is fully automatic. Changes to state are efficiently tracked to allow incremental updates on persistent long-lived queries. 

Integrated Telemetry

The Polaris team has deeply integrated telemetry into the toolchain and managed runtime to avoid error-prone ad hoc telemetry. The separation of concerns between developers writing code and the automation of telemetry is vital to operate at Cisco scale. Standards-based telemetry is a one-level translation. Native telemetry runs at packet scale. 

Graph State Distribution Framework

The Graph State Distribution Framework allows location independence to processing by separating the data from the processing software. It’s a big step towards moving IOS XE from a message-passing system to a distributed database system. 

Compiler-integrated Patching

Compiler-integrated patching provides safe hot patching via the managed runtime. With script-generated Sev1/PSIRT patches, it is a level of automation that makes hot patching available to every developer. The runtime application of patches does not require a restart.

With a software stack like the newest generation of IOS XE, developers can add functionality to separate application logic from infrastructure components. The distributed database provides location independence to our software. The completeness and fidelity of the entire software stack allows for a deeply integrated and efficient developer experience.

Source: cisco.com