Friday 16 April 2021

Comparing Lower Layer Splits for Open Fronthaul Deployments

Introduction

The transition to open RAN (Radio Access Network) based on interoperable lower layer splits is gaining significant momentum across the mobile industry. However, deciding where best to split the open RAN is a complex compromise between radio unit (RU) simplification, support for advanced coordinated multipoint RF capabilities, and the consequential requirements placed on the fronthaul transport, including transport delay budgets and bandwidth expansion. To help compare alternative options, different splits have been assigned numbers, with higher numbers representing splits "lower down" in the protocol stack, meaning less functionality is deployed "below" the split in the RU. Lower layer splits occur below the medium access control (MAC) layer in the protocol stack, with options including Split 6 – between the MAC and physical (PHY) layers, Split 7 – within the physical layer, and Split 8 – between the physical layer and the RF functionality.

Figure 1: Different Lower Layer Splits in the RAN Protocol Stack

This paper compares the two alternatives for realizing the lower layer split, the network functional application platform interface (nFAPI) Split 6 as defined by the Small Cell Forum (SCF) and the Split 7-2x as defined by the O-RAN Alliance.

Small Cell Splits


The Small Cell Forum took the initial lead in defining a multivendor lower layer split. It took its FAPI platform application programming interface (API), which had been used as an informative split of functionality between small cell silicon providers and small cell RAN protocol stack providers, and enabled it to be "networked" over an IP transport. This "networked" FAPI, or nFAPI, enables the Physical Network Function (PNF) implementing the small cell RF and physical layer to be located remotely from the Virtual Network Function (VNF) implementing the small cell MAC layer and upper layer RAN protocols. First published by the SCF in 2016, the specification of the MAC/PHY split has since been labelled "Split 6" by 3GPP TR 38.801, which studied 5G's New Radio access technology and architectures.

The initial SCF nFAPI program delivered important capabilities that enabled small cells to be virtualized, in contrast with the conventional macro approach that at the time advocated using the Common Public Radio Interface (CPRI)-defined split. CPRI had earlier specified an interface between a Radio Equipment Control (REC) element implementing the RAN baseband functions and a Radio Equipment (RE) element implementing the RF functions, enabling the RE to be located at the top of a cell tower and the REC at its base. This interface was subsequently repurposed to support relocating the REC to a centralized location that could serve multiple cell towers via a fronthaul transport network.

Importantly, when comparing the transport bandwidth requirements for the fronthaul interface, nFAPI/Split 6 does not significantly expand the bandwidth required compared to more conventional small cell backhaul deployments. Moreover, just like the backhaul traffic, the nFAPI transport bandwidth is able to vary according to served traffic, enabling statistical multiplexing to be used over the fronthaul IP network. This can be contrasted with the alternative CPRI split, also referred to as “Split 8” in TR38.801, that requires bandwidth expansion up to 30-fold and a constant bit rate connection, even if there is no traffic being served in a cell.

HARQ Latency Constraints


Whereas nFAPI/Split 6 offers significant benefits over CPRI/Split 8 in terms of bandwidth expansion, both splits sit below the hybrid automatic repeat request (HARQ) functionality in the MAC layer, which constrains the transport delay budget for LTE fronthaul solutions. Both LTE-based Split 6 and Split 8 share a delay constraint of 3 milliseconds between the time up-link data is received at the radio and the time the corresponding down-link ACK/NAK needs to be ready for transmission at the radio. These 3 milliseconds must be allocated between HARQ processing and transport, with a common assumption being that 2.5 milliseconds are allocated to processing, leaving 0.5 milliseconds for round-trip transport. This results in the oft-quoted one-way transport delay budget of 0.25 milliseconds between the radio and the element implementing the MAC layer's up-link HARQ functionality.
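As a quick sanity check, the following minimal Python sketch simply reproduces the budget split quoted above; the 3 millisecond HARQ loop and 2.5 millisecond processing allocation are the assumptions stated in the text, not values taken from a specification table.

```python
# Illustrative arithmetic only: figures are the assumptions quoted in the text.
HARQ_LOOP_MS = 3.0      # UL data received at radio -> DL ACK/NAK ready at radio
PROCESSING_MS = 2.5     # commonly assumed PHY/MAC processing allocation

round_trip_transport_ms = HARQ_LOOP_MS - PROCESSING_MS  # budget left for fronthaul
one_way_transport_ms = round_trip_transport_ms / 2

print(f"Round-trip transport budget: {round_trip_transport_ms:.2f} ms")  # 0.50 ms
print(f"One-way transport budget:    {one_way_transport_ms:.2f} ms")     # 0.25 ms
```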

The Small Cell Forum acknowledges such limitations when using its nFAPI/Split 6. Because the 0.25 millisecond one-way transport budget may severely constrain nFAPI deployments, SCF defines the use of HARQ interleaving, which uses standardized signaling to defer HARQ buffer emptying, enabling higher latency fronthaul links to be accommodated. Although HARQ interleaving buys additional transport delay budget, the operation has a severe impact on single-UE throughput: as soon as the delay budget exceeds the constraint described above, the per-UE maximum throughput is immediately halved, with further decreases as delays in the transport network increase.

Importantly, 5G New Radio does not implement the same synchronous up-link HARQ procedures and therefore does not suffer the same transport delay constraints. Instead, the limiting factor constraining the transport budget in 5G fronthaul systems is the operation of the windowing during the random access procedure. Depending on the operation of other vendor specific control loops, e.g., associated with channel estimation, this may enable increased fronthaul delay budgets to be used in 5G deployments.

O-RAN Alliance


The O-RAN Alliance published its "7-2x" Split 7 specification in February 2019. All Split 7 alternatives offer significant benefits over the legacy CPRI/Split 8: they avoid Split 8's requirement to scale fronthaul bandwidth on a per-antenna basis, resulting in significantly lower fronthaul transport bandwidth requirements, and they introduce transport bandwidth requirements that vary with the traffic served in the cell. Moreover, when compared to Split 6, the O-RAN lower layer Split 7-2x supports all advanced RF combining techniques, including the higher order multiple-input, multiple-output (MIMO) capability that is viewed as a key enabling technology for 5G deployments. Table 1 contrasts Split 6 "MAC/PHY" and Split 7 "Split PHY" based architectures.

Table 1: Comparing Advanced RF Combining Capabilities of Lower Layer Splits

However, instead of carrying individual transport channels as the nFAPI interface does, Split 7-2x defines the transport of frequency-domain IQ representations of spatial streams, or MIMO layers, across the lower layer fronthaul interface. The use of frequency-domain IQ symbols can lead to a significant increase in fronthaul bandwidth when compared to the original transport channels. Figure 2 illustrates the bandwidth expansion that results from Split 7-2x occurring "below" the modulation function, where the original 4 bits to be transmitted expand to over 18 bits after 16-QAM modulation, even when using a block floating point compression scheme.


Figure 2: Bandwidth Expansion with Block Floating Point Compressed Split 7-2x

The bandwidth expansion is a function of the modulation scheme, with higher expansion required for lower order modulation, as shown in Table 2.

Table 2: Bandwidth Expansion for Split 7-2x with Block Floating Point Compression compared to Split 7-3
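The sketch below illustrates where ratios of this kind come from. It assumes a 9-bit block floating point mantissa per I and Q component and a 4-bit exponent shared across the 12 resource elements of a PRB; these are common parameter choices, not necessarily the ones behind Table 2, so treat the output as indicative only.

```python
# Frequency-domain IQ bits per resource element (RE) with block floating point
# (BFP) compression versus the information bits carried by the modulation.
# Assumed BFP parameters: 9-bit mantissa per I and per Q, one 4-bit exponent
# shared by the 12 REs of a PRB (illustrative values).

MANTISSA_BITS = 9
EXPONENT_BITS = 4
RES_PER_PRB = 12

bfp_bits_per_re = 2 * MANTISSA_BITS + EXPONENT_BITS / RES_PER_PRB  # ~18.3 bits

for name, info_bits in {"QPSK": 2, "16-QAM": 4, "64-QAM": 6, "256-QAM": 8}.items():
    expansion = bfp_bits_per_re / info_bits
    print(f"{name:8s}: {info_bits} info bits -> {bfp_bits_per_re:.1f} IQ bits "
          f"(expansion x{expansion:.2f})")
```

As expected, the lower the modulation order, the fewer information bits each resource element carries and the larger the relative expansion.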

Such a bandwidth expansion was one of the reasons that proponents of the so-called Split 7-3 advocated a split occurring "above" the modulation/demodulation function. To address such issues, and the possible fragmentation of different Split 7 solutions, the O-RAN Alliance lower layer split includes the definition of a technique termed modulation compression. The operation of modulation compression on a 16-QAM modulated waveform is illustrated in Figure 3. The conventional Split 7-2 modulated constellation is shifted so that the modulation points lie on a grid, allowing the I and Q components to be represented as small binary values instead of floating point numbers. Additional scaling information must be signalled across the fronthaul interface so that the original modulated constellation points can be recovered in the RU, but this only needs to be sent once per data section.

Figure 3: User Plane Bandwidth Reduction Using Modulation Compression with Split 7-2x

Because modulation compression requires the in-phase and quadrature points to be perfectly aligned with the constellation grid, it can only be used in the downlink. However, when used, it decreases the bandwidth expansion ratio of Split 7-2x, where the expansion compared to Split 7-3 is now only due to the additional scaling and constellation shift information. This information is encoded as 4 octets and sent with every data section, meaning the bandwidth expansion ratio varies according to how many Physical Resource Blocks (PRBs) are included in each data section. This value can range from a single PRB up to 255 PRBs, with Table 3 showing that the corresponding Split 7-2x bandwidth expansion ratio over Split 7-3 is effectively unity when large data sections are used.

Table 3:  Bandwidth Expansion for Split 7-2x with Modulation Compression compared to Split 7-3
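To make the shift-and-scale idea concrete, here is a minimal Python illustration for 16-QAM. It is not O-RAN code and the parameter names are ours; it simply shows how shifting and scaling the constellation lets each I and Q component be carried as a 2-bit integer, with the shift and step signalled once so the RU can reconstruct the original points.

```python
# Illustrative modulation-compression idea for 16-QAM (downlink only).
import itertools, math

SCALE = math.sqrt(10)                  # 16-QAM normalisation factor
levels = [-3, -1, 1, 3]
constellation = [complex(i, q) / SCALE for i, q in itertools.product(levels, levels)]

shift = complex(3, 3) / SCALE          # move the grid into the first quadrant
step = 2 / SCALE                       # spacing between adjacent grid points

def compress(point):
    """DU side: map a constellation point to a pair of small integers (0..3)."""
    shifted = point + shift
    return round(shifted.real / step), round(shifted.imag / step)

def decompress(i_idx, q_idx):
    """RU side: rebuild the point from the integers plus the signalled shift/step."""
    return complex(i_idx * step, q_idx * step) - shift

for p in constellation:
    i_idx, q_idx = compress(p)
    assert abs(decompress(i_idx, q_idx) - p) < 1e-12   # exact round trip

print("All 16 points round-trip using 2 + 2 bits per resource element.")
```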

Note that even though modulation compression is only applicable to the downlink (DL), the shift of new frequency allocations to Time Division Duplex (TDD) enables a balancing of effective fronthaul throughput between uplink (UL) and downlink. For example, in LTE, 4 of the 7 possible TDD configurations have more slots allocated to downlink traffic, compared to 2 configurations that have more slots allocated to the uplink. Using a typical 12-to-6 DL/UL configuration, with 256-QAM and 10 PRBs per data section, the overall balance of bitrates for modulation compression in the downlink and block floating point compression in the uplink will be (1.03 x 12) to (2.33 x 6), or 12.40:13.98, i.e., a relatively balanced link in terms of overall bandwidth.
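The same balance can be reproduced with a couple of lines of Python; the per-slot expansion ratios are the 1.03 and 2.33 values cited above, so this is just the text's arithmetic made explicit.

```python
# DL/UL fronthaul load balance for the example quoted in the text.
DL_SLOTS, UL_SLOTS = 12, 6
DL_EXPANSION = 1.03   # Split 7-2x with modulation compression vs Split 7-3 (DL)
UL_EXPANSION = 2.33   # Split 7-2x with block floating point vs Split 7-3 (UL)

dl_weight = DL_EXPANSION * DL_SLOTS   # ~12.4
ul_weight = UL_EXPANSION * UL_SLOTS   # ~14.0

print(f"DL:UL fronthaul load ~ {dl_weight:.2f} : {ul_weight:.2f}")
print(f"DL/UL ratio ~ {dl_weight / ul_weight:.2f}")   # close to 1, i.e. well balanced
```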

A more comprehensive analysis by the O-RAN Alliance has examined control and user-plane scaling requirements for Split 7-2x with modulation compression and compared the figures with those for Split 7-3. When taking into account other overheads, this analysis indicated that the difference in downlink bandwidth between Split 7-3 and Split 7-2x with Modulation Compression was estimated to be around 7%. Using such analysis, it is evident why the O-RAN Alliance chose not to define a Split 7-3, instead advocating a converged approach based on Split 7-2x that can be used to address a variety of lower layer split deployment scenarios.

Comparing Split 7-2x and nFAPI


Material from the SCF clearly demonstrates that, in contrast to Split 7, its nFAPI/Split 6 approach is challenged in supporting the massive MIMO functionality that is viewed as a key enabling technology for 5G deployments. However, massive MIMO is more applicable to outdoor macro-cellular coverage, where it can be used to handle high mobility and suppress cell-edge interference. Hence, there may be a subset of 5G deployments where massive MIMO support is not required, so let's compare the other attributes.

Because both O-RAN's Split 7-2x and SCF's nFAPI lower layer split occur below the HARQ processing in the MAC layer, both are constrained by exactly the same LTE HARQ delay requirements and fronthaul transport budgets. Both allow the fronthaul traffic load to track the served cell traffic, enabling statistical multiplexing of traffic within the fronthaul network, and both support transport over a packet network between the Radio Unit and the Distributed Unit.

The managed object for the SCF’s Physical Network Function includes the ability for a single Physical Network Function to support multiple PNF Services. A PNF service can correspond to a cell, meaning that a PNF can be shared between multiple operators, whereby the PNF operator is responsible for provisioning the individual cells. This provides a foundation for implementing Neutral Host. More recently, the O-RAN Alliance’s Fronthaul Working Group has approved a work item to enhance the O-RAN lower layer split to support a “shared O-RAN Radio Unit” that can be parented to DUs from different operators, thus facilitating multi-operator deployment.

Both SCF and O-RAN Split 7-2x solutions have been influenced by the Distributed Antenna System (DAS) architectures that are the primary solution for bringing the RAN to indoor locations. The SCF leveraged the approach to DAS management when defining its approach to shared PNF operation. In contrast, O-RAN’s Split 7-2x has standardized enhanced “shared cell” functionality where multiple RUs are used in creating a single cell. This effectively uses the eCPRI based fronthaul to replicate functionality normally associated with digital DAS deployments.

Comparing fronthaul bandwidth requirements, it's evident that the 30-fold bandwidth expansion of CPRI was one of the main reasons for SCF to embark on its nFAPI specification program. However, the above analysis highlights how O-RAN has delivered important capabilities in its Split 7-2x to limit the necessary bandwidth expansion and avoid fragmentation of the lower layer split market between alternative split PHY approaches. Hence, the final aspect when comparing these alternatives is how much the bandwidth expands when going from Split 6 to Split 7-2x. Figure 1 illustrates that the bandwidth expansion between Split 6 and Split 7-3 is due to the operation of channel coding. With O-RAN having already estimated that Split 7-3 offers a 7% bandwidth saving compared to Split 7-2x with Modulation Compression, we can use the channel coding rate to estimate the bandwidth expansion between Split 6 and Split 7-2x. Table 4 uses typical LTE coding rates for 64QAM modulation to calculate the bandwidth expansion due to channel coding. This is combined with the additional 7% expansion due to Modulation Compression to estimate the differences in required bandwidth. The table shows that the difference in bandwidth between nFAPI/Split 6 and Split 7-2x is a function of channel coding rate and can be as high as 93% for 64QAM with a 1/2 rate code, and as low as 16% for 64QAM with an 11/12 rate code.

Table 4: Example LTE 64QAM Channel Coding Bandwidth Expansion
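A rough version of this calculation is sketched below. It models the expansion as 1/code_rate multiplied by the ~7% modulation-compression overhead; the code rates are illustrative effective 64-QAM values rather than a copy of Table 4, so the printed figures only approximate the 93% and 16% numbers quoted above.

```python
# Approximate Split 6 -> Split 7-2x fronthaul expansion from the channel code rate.
MOD_COMPRESSION_OVERHEAD = 1.07   # Split 7-2x vs Split 7-3 (O-RAN estimate cited above)

def split6_to_72x_expansion(code_rate: float) -> float:
    """Extra fronthaul bandwidth of Split 7-2x relative to Split 6 (fraction)."""
    return (1.0 / code_rate) * MOD_COMPRESSION_OVERHEAD - 1.0

for rate in (0.55, 0.65, 0.75, 0.85, 11 / 12):
    print(f"64-QAM, effective code rate {rate:.2f}: "
          f"~{split6_to_72x_expansion(rate) * 100:.0f}% more bandwidth than Split 6")
```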

Whereas the above analysis indicates that the cost of implementing channel coding above the RU in Split 7-2x is a nominal increase in bandwidth, the benefit of such an approach is the significant simplification of the RU by removing the need to perform channel decoding. Critically, the channel decoder requires highly complex arithmetic and can become the bottleneck in physical layer processing. Often, this results in the use of dedicated hardware accelerators that can add significant complexity and cost to an nFAPI/Split 6 Radio Unit. In contrast, O-RAN's Split 7-2x allows the decoding functionality to be centralized, where it is expected to benefit from increased utilization and associated efficiencies, while simplifying the design of the O-RAN Radio Unit.

Source: cisco.com

Thursday 15 April 2021

Get Hands-on Experience with Programmability & Edge Computing on a Cisco IoT Gateway

Are you still configuring your industrial router with CLI? Are you still getting network telemetry data with SNMP? Do you still use many industrial components when you can just have one single ruggedized IoT gateway that features an open edge-compute framework, cellular interfaces, and high-end industrial features?


Get ready to try out these features in an all-new learning lab and DevNet Sandbox featuring real IR1101 ruggedized hardware.

◉ Take me to the new learning lab

◉ Take me directly to the Industrial Networking and Edge Compute IR1101 Sandbox

Architecture and feature overview of industrial networking and edge compute in the IR1101 Sandbox

The Industrial Router 1101


The Cisco IoT Gateway IR1101 delivers secure IoT connectivity for today and the future. Its 5G ready modular design allows you to upgrade to new communications protocols when they become available, avoiding costly rip-and-replace. Add or upgrade WAN, edge compute and storage components as technologies and your needs evolve. With its rugged hardware and compact form-factor, you can install it almost anywhere.

Here are a few examples of use cases for the IR1101

Utilities: Remotely manage thousands of miles of unmanned power grids between distribution substations and control centers. Improve power flow, Volt-VAR optimization, and fault detection and isolation, resulting in reduced outage durations and costs.

Public safety and transportation: The IR1101 provides redundant WAN connectivity for increased reliability. And with intelligence at the edge, you can accelerate decision making for mission-critical applications such as public safety, so you can better regulate traffic flow and detect traffic violations.

Oil and gas: Make decisions at the edge for faster response. Utilize cellular redundancy to manage thousands of miles of remote oil and gas pipelines to quickly identify and fix problems, limit downtime, and reduce costs.

WebUI & high-end industrial feature-set


Get familiar with the user-friendly on-box Device Manager (WebUI) as seen below. Users can easily navigate in their browser through the monitoring data, configuration, and settings of their industrial device.

Graphical User interface on the IR1101

Of course, you can also access many other industrial features via SSH, such as QoS, VPN, and seamless SCADA integration with Raw Socket, DNP3 Serial/IP, and IEC 60870 T101/T104 protocol translation.

IOx Edge Compute


Furthermore, it is possible to install containerized applications directly on the gateway. Try deploying your Docker containers / IOx applications on the ARM-powered CPU of the IR1101. We have prepared a sample server application on the DevNet Code Exchange which you can download or build.

On-boxed IOx Local Manager: Managing your IOx applications on the IR1101 – here NGINX server is installed and reachable on Port 8000

Device APIs NETCONF/RESTCONF & Model-Driven Telemetry


Since this router series runs Cisco's open and programmable operating system IOS XE, you can also configure the device via device-level APIs such as NETCONF/RESTCONF. This means, for example, that you can change any device configuration by simply running a Python script from your local machine and apply the changes to as many devices as you want.
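As a minimal sketch of what such a script can look like, the example below uses the ncclient library and the standard ietf-interfaces YANG model to set an interface description over NETCONF. The host address, credentials, and interface name are placeholders for your own sandbox reservation, not real values.

```python
# Minimal NETCONF example using ncclient (pip install ncclient).
from ncclient import manager

EDIT = """
<config xmlns="urn:ietf:params:xml:ns:netconf:base:1.0">
  <interfaces xmlns="urn:ietf:params:xml:ns:yang:ietf-interfaces">
    <interface>
      <name>GigabitEthernet0/0/0</name>
      <description>Configured over NETCONF from Python</description>
    </interface>
  </interfaces>
</config>
"""

with manager.connect(
    host="10.10.20.100",       # placeholder sandbox address
    port=830,
    username="developer",      # placeholder credentials
    password="secret",
    hostkey_verify=False,
) as m:
    reply = m.edit_config(target="running", config=EDIT)
    print("Edit applied:", reply.ok)
```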

Model-driven Telemetry (MDT) provides a mechanism to stream data from an MDT-capable device (here, the IR1101) to a destination such as a database and dashboard.

It uses a new approach to network monitoring in which data is streamed continuously from network devices using a publish/subscribe model, providing near real-time access to operational statistics. Applications can subscribe to the specific data items they need by using standards-based YANG data models over open protocols. Structured data is published at a defined cadence or on change, based upon the subscription criteria and data type.

The operational data of the IR1101 is transmitted via gRPC (a high-performance, open-source, universal RPC framework) to a third-party collector or receiver, in our example a Telegraf/InfluxDB/Grafana stack.

Sample Grafana Dashboard in the sandbox: Near real-time monitoring of the CPU utilization on the IR1101 with model-driven telemetry

Source: cisco.com

Tuesday 13 April 2021

Year 2020 and EWC – Embedded Wireless Controller on AP


What a year 2020 was, and yet what a success it was for the Cisco Embedded Wireless Controller!

Despite COVID-19 transforming our lives, despite the challenges of working in a virtual environment for many of us, the C9100 EWC had an excellent year.

We had many thousands of EWC software downloads, and C9100 EWC product bookings increased quarter after quarter. We had more than 200 customers controlling 13K+ Access Points!

Let's summarize some learnings from customers' experience with EWC in 2020:

Why are customers so interested in EWC, and how does it address their needs?

The short story: EWC gives them the full Catalyst 9800 experience while running in a container on the Access Point itself.

The long story: For small and medium businesses, EWC is the sweet spot for managing wireless networks. It is simple to use, secure by design, and above all ready to grow as the business grows, thanks to its flexible architecture. Once your network grows beyond 100 APs, it can easily be migrated to an appliance-based Controller or a cloud-based Controller. Therefore, it offers investment protection.

The EWC is supported on all 11ax APs, and the scale varies from 50 APs/1000 clients (C9105AXI, C9115AX, C9117AX) to 100 APs/2000 clients (C9120AX, C9130AX). At such a scale, a medium-sized site or branch deployment gains the advantage of an integrated Wireless Controller, so no other physical hardware is needed.

What EWC features/capabilities are most sought by the customers?

The short story: The EWC is an all-in-one Controller, combining the best-in-class Cisco RF innovations of an 11ax Access Point with the advanced enterprise features of a Cisco Controller.

The long story: Firstly, the most appealing 11ax AP Cisco RF innovations for the customers:

◉ RF Signature Capture provides superior security for mission-critical deployments.

◉ 11ax APs offer Zero-Wait, Dual Filter DFS (Dynamic Frequency Selection). 9120/9130 APs will use both client-radio and Cisco RF ASIC to detect radar and to virtually eliminate DFS false positives.

◉ Cisco APs implement aWIPS feature (adaptive Wireless Intrusion Prevention System). This is a threat detection and mitigation mechanism using signature-based techniques, traffic analysis, and device/topology information. It is a full infrastructure-integrated solution.

In addition, a list of EWC enterprise-ready features that customers are looking for:

◉ AAA Override on WLANs (SSIDs) – the administrator can configure the wireless network for RADIUS authentication and apply VLAN/QOS/ACLs to individual clients based on AAA attributes from the server.

◉ Full support for the latest WPA3 Security Standard and for Advanced Wireless Intrusion Prevention (aWIPS).

◉ AVC (Application Visibility and Control) – the administrator can rate limit/drop/mark traffic based on client application.

◉ Controller Redundancy – any 11ax AP could play the Active/Standby role. EWC has the flexibility to designate the preferred Standby Controller AP.

◉ Identify Apple iOS devices and apply prioritization of business applications for such clients.

◉ mDNS Gateway – forwarding Bonjour traffic by re-transmitting the traffic between reflection enabled VLANs.

◉ Integration with Cisco Umbrella for blocking malicious URLs, malware, and phishing exploits.

◉ Programmable interfaces with NETCONF/Yang for automation, configuration, and monitoring.

◉ Software Maintenance Upgrades (SMUs) can be applied to either Controller software or AP software.

Ok, we see a lot of interesting features, but with so many features, a certain degree of complexity is expected. The next question coming to mind is:

How about the ease of use of the EWC?

As per reports from the field, the device can be configured in eight minutes in Day-0 configuration using WebUI (Smart Dashboard) and mywifi.cisco.com URL.

The WebUI has been reported as being ‘very straightforward’.

There is no need to reboot the AP after Day-0 configuration is applied.

A quote from a third-party assessment (Miercom) says everything: “The Cisco EWC solution is one of the easiest wireless products to deploy that we’ve encountered to date.”

The user configures a shortlist of items in Day-0 (either in WebUI or in CLI): username/password, AP Profile, WLAN, wireless profile policy, and the default policy tag.

An alternative to WebUI is the mobile app from either Google Play or Apple App Store. The app allows the user to bring up the device in Day-0, or to view the fleet of APs, the top list of clients, or any other wireless statistics.

The EWC WebUI is very similar to the 9800 WebUI, so a potential transition to an appliance-based Controller is seamless. Please see the snapshot below:

Trying the EWC WebUI yourself is the most convincing demonstration of its ease of use.


What else did customers like in 2020 regarding EWC?

A couple of EWC deliverables in release 17.4 were welcomed by customers:

◉ DNA License-free availability for EWC reduces the total cost of ownership while still giving customers the advantage of having the Network Essentials stack by default.

◉ New Access Point 9105 models (9105AXI, 9105AXW) give customers value options for their network deployment, with EWC supported on the 9105AXI.

Regarding the new 9105 Access Points, the 11ax feature-set is rich: 2×2 MU-MIMO with two spatial streams, uplink/downlink OFDMA, TWT, BSS coloring, 802.11ax beamforming, 20/40/80 MHz channels.

9105AXI has a 1×1.0 mGig uplink interface, while the wall-mountable version (9105AXW) has 3×1.0 mGig interfaces, a USB port, and a Passthru port.

Upcoming IOS XE releases in 2021 already have new and interesting features planned for EWC, so please stay tuned!

Bottom line


Last year, EWC proved to be a simple, flexible, and secure platform of choice for small and medium business customers, and EWC customer adoption grew continuously throughout 2020.

Source: cisco.com

Monday 12 April 2021

What are you missing when you don’t enable global threat alerts?


Network telemetry is a reservoir of data that, if tapped, can shed light on users’ behavioral patterns, weak spots in security, potentially malicious tools installed in enterprise environments, and even malware itself.

Global threat alerts (formerly known as Cognitive Threat Analytics, or CTA) is great at taking an enterprise's network telemetry and running it through a pipeline of state-of-the-art machine learning and graph algorithms. After processing the traffic data in batches over a matter of hours, global threat alerts correlates user behaviors, assigns priorities, and groups detections intelligently to give security analysts clarity into the most important threats in their network.

Smart alerts

All detections are presented in a context-rich manner, which gives users the ability to drill into the specific security events that support the threat detections eventually grouped into alerts. This is useful because just detecting potentially malicious traffic in your infrastructure isn't enough; analysts need to build an understanding of each threat detection. This is where global threat alerts saves you time investigating alerts and accelerates resolution.

Figure 1: Extensive context helps security analysts understand why an alert was triggered and the reasons behind the conviction.

As depicted below in Figure 2, users can both change the severity levels of threats and rank high-priority asset groups from within the global threat alerts portal. This enables users to customize their settings to only alert them to the types of threats that their organizations are most concerned about, as well as to indicate which resources are most valuable. These settings allow the users to set proper context for threat alerts in their business environment.

Figure 2: You change the priority of threats and asset groups from within the global threat alerts portal.

Global threat alerts are also presented in a more intuitive manner, with multiple threat detections grouped into one alert based on the following parameters:​

◉ Concurrent threats: Different threats that are occurring together.​

◉ Asset groups value: Group of threats occurring on endpoints that belong to asset groups with similar business value.

Figure 3: Different threats that have been grouped together in one single alert, because they are all happening concurrently on the same assets.

Rich detection portfolio


Global threat alerts is continuously tracking and evolving hundreds of threat detections across various malware families, attack patterns, and tools used by malicious actors.

All these outcomes and detections are also available for Encrypted Traffic Analytics (ETA) telemetry, which allows users to find threats in encrypted traffic without the need to decrypt it. Moreover, because ETA telemetry contains more information than traditional NetFlow, the global threat alerts research team has also developed specific classifiers capable of finding additional threats in this data, such as algorithms focused on detecting malicious patterns in the path and the query of a URL.
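Purely as an illustration of what "patterns in the path and the query of a URL" can mean, the snippet below extracts a few simple features that a URL classifier might consume. The actual global threat alerts classifiers are not public, so this is a hypothetical example rather than their implementation.

```python
# Toy URL feature extraction; feature choices are illustrative only.
import math
from collections import Counter
from urllib.parse import urlsplit

def url_features(url: str) -> dict:
    parts = urlsplit(url)
    path_query = parts.path + ("?" + parts.query if parts.query else "")
    total = max(len(path_query), 1)
    counts = Counter(path_query)
    entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
    return {
        "length": len(path_query),
        "digit_ratio": sum(c.isdigit() for c in path_query) / total,
        "entropy": round(entropy, 2),
        "param_count": parts.query.count("=") if parts.query else 0,
    }

print(url_features("http://example.com/update.php?id=9f3a6c1b&cmd=beacon"))
```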

The global threat alerts’ research team is continuously engaged in dissecting new security threats and implementing the associated threat intelligence findings into hundreds of specialized classifiers. These classifiers are targeted at revealing campaigns that attackers are using on a global scale. Examples of these campaigns include the Maze ransomware and the njRAT remote access trojan. Numerous algorithms are also designed to capture generic malicious tactics like command-and-control traffic, command-injections, or lateral network movements.

Risk map of the internet


There are numerous algorithms focused on uncovering threat infrastructure in the network. These models are continuously discovering relationships between known malicious servers and new servers that have not yet been defined as malicious, but either share patterns or client bases with the known malicious servers. These models also constantly exchange newly identified threat intelligence with other Cisco security products and groups, such as Talos.

Figure 4: Analyzing common users of known malicious infrastructure and unclassified servers, global threat alerts can uncover new malicious servers.

This layered approach to threat detection uses multiple tiers of machine learning algorithms to provide high-fidelity detections that are always up to date and relevant, as researchers constantly update the models. Additionally, all this computation is done in the cloud and uses only network telemetry data to derive new findings. The findings and alerts are presented to users in Secure Network Analytics and Secure Endpoint.

Global threat alerts uses state-of-the-art algorithms to provide high-fidelity, unique threat detections for north-south network traffic, Smart Alerts to help prioritize and accelerate resolutions, and a risk map to provide greater context and understanding of how threats span across the network.

Sunday 11 April 2021

Cisco IOS XE – Past, Present, and Future


From OS to Industry-leading Software Stack 

Cisco Internetwork Operating System (IOS) was developed in the 1980s for the company's first routers that had only 256 KB of memory and low CPU processing power. But what a difference a few decades make. Today IOS XE runs our entire enterprise portfolio: 80 different Cisco platforms for access, distribution, core, wireless, and WAN, with a myriad of combinations of hardware and software, forwarding, and physical and virtual form factors.

Many people still call Cisco IOS XE an operating system. But it’s more appropriately described as an enterprise networking software stack. At 190 million lines of code from Cisco—and more than 300 million lines of code when you include vendor software development kits (SDKs) and open-source libraries—IOS XE is comparable to stacks from Microsoft or Apple.  

During the transition of IOS XE to encompass the entire enterprise networking portfolio, our global development team of more than 3000 software engineers introduced an average of four new products in every four-month release cycle. IOS XE now supports more than 20 different ASIC families developed by Cisco and other vendors, and we develop over 700 new features per year. It's a huge undertaking to get this done systematically. It requires the right development environment and software engineering practices that scale the team to the amount of code necessary for our product portfolio.

Here is a look back at how the IOS XE software stack was conceived and the continuous evolution of its capabilities, based on the work of the Polaris team. The team is tasked with providing the right development environment for the current portfolio and the evolving needs of the emerging new class of products. 

IOS Origins 

The early releases of IOS consisted of a single embedded development environment that included all the functionality required to build a product. Our success came from managing the growth of functionality and scaling configuration models, performance, and hardware support in a systematic, though embedded-systems-centric, manner.

In 2004, Cisco developers built IOS XE for the Cisco ASR 1000 Series Aggregation Services Router family. IOS XE combined a Linux kernel and independent processes to achieve separation of the control plane from the data plane. With the new code and development model we introduced, we began the journey of moving to a database-centric programming model. From the first shipment of the ASR 1000, every state update to the data path has gone into and out of the in-memory database.

In 2014, the IOS XE development team was put together to drive the software strategy for Enterprise Networking. The entire switching portfolio moved to IOS XE with the industry-leading Catalyst 9000 range of products. The pivot to evolving IOS XE into a distributed scale-out infrastructure relied on our deep experience with in-memory databases, database replication capabilities, and a full, remotely accessible graph database. The Catalyst 9800 elastic wireless controller represents the successful introduction of these new capabilities.

When the IOS XE development team was formed, there was a common misconception that small, low-end systems with tiny footprints couldn‘t share the same software with very large-scale systems. We have successfully disproved that. IOS XE now runs on everything from tiny IoT routers to large modular systems. It is proving to be a significant strength as we move forward since the ability to fit on small systems means improved efficiency that translates to better outcomes on larger systems. What started as a challenge is now a transformational strength. 

Why is a Stack Important? 

An OS is only a very tiny part of the full functionality of a complete software development environment. The IOS XE enterprise networking software stack features a deep integration of all layers with a conceptual and semantic integrity.  

IOS XE software layers include application, software development language, middleware, managed runtime, graph database, transactional store, system libraries, drivers, and the Linux kernel. Our managed runtime enables common functionality to be rapidly deployed to a large amount of existing code seamlessly. The goal of the development environment is to facilitate cloud native, single control, and a monitoring point to operate at enterprise scale with fine-grained multi-tenancy everywhere. 

The great value in having the same software is that you have the same software development model that all developers follow. This represents the internal SDK for Cisco Enterprise Networking software engineers. All of our standards-based APIs are a single, often automated translation away. The ability to get total system visibility and control is vital in the days ahead to get to a networking system that does not look like a set of independent point solutions. 

What is IOS XE? 

There are many types of systems that can be built by different competent teams attempting to solve the same problem. The guiding themes behind IOS XE include: 

◉ Asynchronous end-to-end, because synchronous calls can be emulated, if necessary, but the reverse is not true. On low-footprint systems it is key to optimizing performance. 

◉ Cooperative scheduled run-to-completion is how all IOS XE code functions. It draws on our experience developing IOS to provide the most CPU-efficient choice and the best model for strongly IO-bound workloads (a generic illustration of this scheduling model follows this list).

◉ It's a deterministic system, which makes the root cause of issues easier to find and fix and makes stateful process restart support easier to design.

◉ A lossless system, IOS XE depends on end-to-end backpressure rather than any loss of information in processing layers. Reasoning about how a system functions in the presence of loss is impossible.  

◉ Its transactional nature produces a deep level of correctness across process restarts by reverting deterministically to a known stability point before a current inflight transaction started. This helps prevent fate sharing and crashes in other cooperating processes that work off the same database. 

◉ Formal domain specific languages provide specifications that permit build-time and runtime checking.  

◉ Close-loop behavior provides resiliency by imposing positive feedback control on developed systems instead of depending on “fire and forget” open loop behavior. 
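To make a couple of these themes tangible, here is a generic Python sketch, explicitly not IOS XE code, of cooperative run-to-completion tasks with lossless backpressure: a bounded queue never drops work, it simply pauses a fast producer until the consumer catches up.

```python
# Generic illustration of cooperative scheduling with backpressure (not IOS XE code).
import asyncio

async def producer(queue: asyncio.Queue) -> None:
    for seq in range(10):
        await queue.put(seq)        # cooperatively yields when the queue is full
        print(f"produced {seq}")

async def consumer(queue: asyncio.Queue) -> None:
    while True:
        item = await queue.get()
        await asyncio.sleep(0.01)   # simulate work; each item runs to completion
        print(f"consumed {item}")
        queue.task_done()

async def main() -> None:
    queue: asyncio.Queue = asyncio.Queue(maxsize=2)   # bounded queue -> backpressure
    consumer_task = asyncio.create_task(consumer(queue))
    await producer(queue)
    await queue.join()              # wait until every produced item is processed
    consumer_task.cancel()

asyncio.run(main())
```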

During the last seven years of development, the IOS XE team via the Polaris project has focused on the following areas. 

Developing Our Own Managed Runtime Environment

The team has developed a managed runtime that essentially allows processes to run heap-less with state stored in the in-memory database. The Crimson toolchain provides language-integrated support for the internal data modeling domain-specific language (DSL), known as The Definition Language (TDL). The use of type inferencing facilitates a succinct human interface to generated code for messaging and database access. The toolchain integration with language extensions also enables the rapid addition of new capabilities to migrate existing code to meet new expectations. Deep support for a systematic move to increasing multi-tenancy requirements is part of this development environment.

Graph Query/Update Language

The Graph Execution Engine (GREEN) gives remote query and update capabilities to the graph database. It’s a formal way to interact natively using system software development kits (SDKs). All state transfer internally is fully automatic. Changes to state are efficiently tracked to allow incremental updates on persistent long-lived queries. 

Integrated Telemetry

The Polaris team has deeply integrated telemetry into the toolchain and managed runtime to avoid error-prone ad hoc telemetry. The separation of concerns between developers writing code and the automation of telemetry is vital to operate at Cisco scale. Standards-based telemetry is a one-level translation. Native telemetry runs at packet scale. 

Graph State Distribution Framework

The Graph State Distribution Framework allows location independence to processing by separating the data from the processing software. It’s a big step towards moving IOS XE from a message-passing system to a distributed database system. 

Compiler-integrated Patching

Compiler-integrated patching provides safe hot patching via the managed runtime. With script-generated Sev1/PSIRT patches, it offers a level of automation that makes hot patching available to every developer, and applying patches at runtime does not require a restart.

With a software stack like the newest generation of IOS XE, developers can add functionality to separate application logic from infrastructure components. The distributed database provides location independence to our software. The completeness and fidelity of the entire software stack allows for a deeply integrated and efficient developer experience.

Source: cisco.com

Saturday 10 April 2021

Embrace the Future with Open Optical Networking


Until recently, optical systems have been closed and proprietary, sold as a package that includes optics, transponders, a line system, and a management system. In the traditional optical architecture, these components were provided by a single vendor, and the interfaces between those functions were closed and proprietary. While the concept of disaggregated or open optical components is not new, some components can now be optimized and sold separately. This enables providers to assemble a system themselves in the manner they choose.

There are several reasons why an operator would move in this direction. In most cases, it's to enable a multi-vendor solution where you can mix and match devices from different vendors, with the expectation of access to the latest innovation the broad industry provides. This certainly aligns with the disaggregation trends we've seen in networks with software and white boxes, and it provides access to the latest technology from best-of-breed platforms.

By contrast, in an open Dense Wavelength Division Multiplexing (DWDM) architecture, we essentially have a disaggregated system, ranging from functional disaggregation, through hardware and software disaggregation, to full system disaggregation. In this open model, all the components can potentially be managed (e.g. configured, monitored, and even automated) through a common software layer with the use of standard APIs and data models.

When looking at open architectures, an open line system must, from a network design point of view, support an "alien wavelength." An alien wavelength is one that is transported transparently over a third-party line system or infrastructure. Alien waves make it possible to add capacity to address increased bandwidth needs with no disruption to the current network. And the most important benefit of alien waves is the freedom they give network operators to source their transponders from any vendor based on their business or technical criteria.

This is particularly important when you consider that transponders represent the majority of the cost of a DWDM system and are a key component in determining the overall efficiency of the network. This provides the operator with increased flexibility to deploy the next wavelength from any vendor that’s best-in-class.

Whether a provider continues with a fully closed system or a disaggregated approach depends on their network today and where they have a vision to go in the future.

When is a closed optical system beneficial?

◉ When network operators are looking for a turnkey solution. It’s pre-integrated, and the responsibility for fixing problems is very clear.

◉ When operators are willing to trade first cost (Optical Line System) for transponder cost, resulting in a pay-as-you-grow solution, but with a higher total cost of ownership.

When is an open (multi-vendor) optical system beneficial?

◉ When operators want to choose from all the industry has to offer. Best-in-breed is based on the operator’s definition – best OSNR performance, highest spectral efficiency, lowest power, least amount of space, lowest cost per bit, pluggability for router/switch integration, or standardization.

◉ By opening the architecture, competition and innovation are stimulated. This provides the operator with more choice.

◉ When the ability to leverage standardized APIs is available to create a consistent operational model across vendors.

Use cases for open networking

◉ The subsea market pushed for “open cables,” which enabled any vendor’s transponder to operate over a third-party line system already in place. This helped many operators increase their capacity on the subsea cable by moving to the latest transponder in the market.

◉ The long-haul market has already implemented open line systems, enabling multi-vendor leverage over a common infrastructure. In some cases, this has resulted in more than three vendors being deployed.

◉ Metro use cases, like Open ROADM, take standardization a step further with the ability to have multiple line system vendors working with coherent interface vendors on different ends of the same fiber and wavelength.

What about optics?

Datacenter interconnect, metro, and regional markets will be transformed by 400G OpenZR+ Digital Coherent Optics (DCO), because they have been standardized to plug into any optical, router, or switch platform. This plug-and-play option has never existed before and opens the optical networking market for DCO optics to be deployed ubiquitously based on the standards. Several options are listed in the diagram below, including the 400G QSFP-DD, which can be either the Optical Internetworking Forum (OIF) 400G ZR or the OpenZR+ (which supports Open Reconfigurable Optical Add-Drop Multiplexer (ROADM) on the line side), as well as Open ROADM, which uses a CFP2 format.


Standardization


There are several industry initiatives that will accelerate the adoption of open networking for optical systems. Open ROADM is a Multi-Source Agreement (MSA), which is an agreement between vendors to follow a common set of specifications. It’s supported by a group of 28 companies, including system and component vendors, as well as major operators across the globe.

There's also the Telecom Infra Project (TIP), an industry initiative that focuses on specifications for point-to-point open line systems. TIP also started an effort to define a common algorithm that can be used for optical network design and path computation, something impossible to do in closed and proprietary systems. A group within TIP is also working on GNPy, which stands for Gaussian Noise modeling in Python and provides algorithms for route feasibility and analysis of optical networks. It performs the Optical Signal to Noise Ratio (OSNR) calculations to validate whether an optical channel is feasible through a given path in the network. This is a very promising initiative, and large carriers worldwide are using it to model real-life networks.
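To give a feel for the kind of feasibility check such tools automate, here is a back-of-the-envelope OSNR estimate for a chain of identical amplified spans, using the classic 0.1 nm reference-bandwidth approximation. This is not GNPy's API or its full Gaussian-noise model; the input values are purely illustrative.

```python
# Rough OSNR estimate for N identical, fully compensated amplified spans.
import math

def osnr_db(launch_power_dbm: float, span_loss_db: float,
            amp_noise_figure_db: float, span_count: int) -> float:
    """Classic approximation: OSNR = 58 + Pch - span_loss - NF - 10*log10(N)."""
    return (58.0 + launch_power_dbm - span_loss_db
            - amp_noise_figure_db - 10.0 * math.log10(span_count))

# Example: 0 dBm per channel, 20 dB span loss, 5 dB amplifier noise figure, 10 spans.
print(f"Estimated OSNR: {osnr_db(0.0, 20.0, 5.0, 10):.1f} dB")   # ~23 dB
```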

The next one is OpenConfig, which is an industry working group that focuses on producing common data models based on Yet Another Next Generation (YANG) language for device management and configuration. It’s widely used by webscale companies, and it covers multiple technologies – routing, switching, and optical.

Other industry specifications include those from the ITU Telecommunication Standardization Sector (ITU-T), which defines the DWDM grid and interface specifications, Forward Error Correction (FEC), and digital wrappers, and from the OIF, which defines specifications for DWDM interfaces.

Finally, the most important proof point for any industry initiative is network operator adoption. We already see strong interest and deployment of open optical systems, broad support for the industry initiatives mentioned above, and rapid adoption of the industry specifications that they are producing.

Source: cisco.com

Friday 9 April 2021

See Why Developers and Security Can Now See Eye-to-Eye


Meet Alice. She is a developer at a fast-growing company that creates a face filter app. What is Alice's worst fear? Seeing her competitor launch the newest filter into the market first. Her security team lead, Bob, would probably have hoped that her worst fear would be writing vulnerable code. More often than not, however, this is not top of mind for developers like Alice. So, as you'd expect, Alice and Bob sometimes have difficulties communicating with each other, due to different goals and drivers. Think speed vs. risk aversion.

In this blog we will walk through some awesome new features within AppDynamics with Cisco Secure Application. We will simulate a Remote Code Execution (RCE) attack and show what response Bob can take to help Alice launch her application quickly and with security top of mind.

What is a Remote Code Execution attack? What is the impact?


An RCE attack is an attacker's ability to remotely execute arbitrary commands or code on a target machine or in a target process. Such an RCE vulnerability is an obvious security flaw in applications, somewhat bad news for Alice, but much worse news for Bob (who is responsible for the security around this app). A program that is designed to exploit such a vulnerability is called an arbitrary RCE exploit. Developers use many libraries when building their apps, and many of those libraries have vulnerabilities in them.

Now, what can be the impact of such an attack? Imagine that a malicious actor, Eve, can execute arbitrary commands or code inside your application without being physically present. Imagine Eve being able to read from and write to your database, or take your application offline. You might have thought that you are safe, since you migrated your apps to the public cloud: how could anyone get in there? Well, with application-level attacks like RCE this is unfortunately still possible. So how can we have the comfort of the public cloud, but also visibility and control like never before?
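As a deliberately small, generic illustration of the kind of coding mistake that leads to RCE (this is not AppDynamics code), consider attacker-controlled input reaching a shell. The "filename" value stands in for anything an attacker can influence, such as an HTTP header or form field.

```python
# Hypothetical example: unsafe vs. safer handling of attacker-influenced input.
import subprocess

def unsafe_preview(filename: str) -> str:
    # DANGEROUS: the input is interpolated into a shell command line, so a value
    # like "photo.png; cat /etc/passwd" executes a second, arbitrary command.
    return subprocess.run(f"head -c 100 {filename}", shell=True,
                          capture_output=True, text=True).stdout

def safer_preview(filename: str) -> str:
    # Better: validate the input and pass arguments as a list with no shell, so
    # the filename can never be interpreted as extra commands.
    if not filename or not all(c.isalnum() or c in "._-" for c in filename):
        raise ValueError("unexpected characters in filename")
    return subprocess.run(["head", "-c", "100", filename],
                          capture_output=True, text=True).stdout
```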

AppDynamics with Cisco Secure Application


Cisco Secure Application protects applications at runtime, detects and blocks attacks in real time, and simplifies the lifecycle of security incidents by providing application and business context. This creates a shared "language" across app and security teams and makes it easier for them to communicate. It is natively built into the AppDynamics Java agent (more languages to follow) and embeds security into the application runtime without adding performance overhead. Let's have a look at our remote code execution attack through the eyes of Bob, our AppSec expert, who is testing out Cisco Secure Application!

Below you can see the Vulnerabilities tab in the dashboard. Importantly, you can see each CVE with its associated severity, and also its status: whether it has been fixed or not. This is especially valuable information when triaging and prioritizing work. We can now focus on what still needs to be fixed first, and then check the others for potential compromises.


Secure App goes even further than this: notice the two exclamation mark symbols, the first indicating that an exploit is possible for this CVE, and the second that someone tried to exploit this vulnerability! Has Eve been able to do bad stuff in our application? We will need to act even faster on this vulnerability!


When we click on this line, we are shown more detailed information about this vulnerability: as we can see, CVE-2017-5638 is a flaw in Apache Struts with incorrect exception handling, which allows remote attackers to execute arbitrary commands via HTTP headers. Recognize this type of attack? Yes, it is indeed the worst nightmare of our AppSec manager Bob, and it has actually been attempted here!


We have to find out more about this compromise, so we can click on the attack to drill down further. What we can see now is truly amazing compared to other classical security tools. Not only can Bob see the affected app, the affected service, and the vulnerable library, he can also see the actual misused Java method and the stack trace!


When Bob checks out the stack trace, he can scroll through the node's entire stack trace and associated errors. This can be essential when investigating what has happened and whether certain database calls have been made.


Now, when Bob checks out the details on this page, he can see the command the attacker tried to execute, the method name, and the working directory. As you can see, Eve tried to display the contents of the /etc/passwd file! Was Eve able to read this precious file?


In Cisco Secure Application, you can set policy in either Detect or Block mode. Luckily, we can see that this action was blocked by the policy used (the lowest policy in the list). That was good thinking by Bob! Using all of the gathered information, Bob can now show Alice exactly what needs to be changed in her code. Secure Application now provides a common tool that both parties understand. Alice and Bob worked happily ever after.


Source: cisco.com