Sunday 18 April 2021

Bring Your Broadband Network Gateways into the Cloud


With average fixed broadband speeds projected to exceed 110 Mbps and the number of devices connected to IP networks ballooning to more than 29 billion by 2023 (over three times the global population), Internet growth remains unabated and could even accelerate as the ongoing pandemic makes the internet more critical than ever to our daily lives, defining a new normal for humanity: video conferences have replaced physical meetings, virtual “happy hours” with coworkers and friends have replaced get-togethers, and online classrooms have immersed children in new methods of learning.

Shouldering the weight of these new digital experiences, communication service providers are experiencing a significant increase in traffic, as well as a change in traffic patterns, while average revenue per user (ARPU) trends flat to down. To deliver wireline services more cost-efficiently, they need to reimagine their network architectures.

Responsible for critical subscriber management functions and a key component of any wireline services architecture, the broadband network gateway (BNG) has historically been placed at centralized edge locations. Unfortunately, these locations don’t provide the best balance between the performance requirements of the user plane and the control plane. The user plane (also known as the forwarding plane) scales with the bandwidth per subscriber, while the control plane scales with the number of subscriber sessions and the services provided for end users. In most deployments, the result is that either the control plane or the user plane ends up over- or underutilized.

For years, the limited number of services per end user and moderate bandwidth per user allowed network designers to roll out BNG devices that supported both the user plane and the control plane on the same device, because minimal optimization was required. But today, with the exponential growth in traffic, subscribers, and services fueled by consumers’ appetite for new digital experiences, the traditional BNG architecture is facing severe limitations.

Given these changing needs and requirements, it is no longer possible to optimize the user plane and the control plane when both are hosted on the same device. Nor does the model scale: supporting bandwidth or customer growth, controlling costs, and managing complexity become increasingly difficult as BNG deployments multiply. It is time to rethink the BNG architecture entirely.

Cloud Native Broadband Network Gateway

To overcome these operational challenges and right-size the economics, Cisco has developed a cloud native BNG (cnBNG) with control and user plane separation (CUPS) – an important architectural shift to enable a more agile, scalable, and cost-efficient network.

This new architecture simplifies network operations and enables independent placement, scaling, and life cycle management of the control plane and the user plane. With the CUPS architecture, the control plane can be placed in a centralized data center, scaled as needed, and used to manage multiple user plane instances. A cloud-native control plane provides agility and, with advanced automation, speeds up the introduction of new services. Communication service providers (CSPs) can now roll out leaner user plane instances (without control-plane-related subscriber management functions) closer to end users, guaranteeing latency and avoiding the unnecessary and costly transport of bandwidth-hungry services over core networks. They can also place content delivery networks (CDNs) deeper into the network, enabling peering offload at the network edge and delivering a better end-user experience.

There are also other benefits. A cloud-native infrastructure provides cost-effective redundancy models that prevent cnBNG outages, minimizing the impact on broadband users. And a cloud-native control plane lets communication service providers adopt continuous integration of new features without impacting the user plane, which remains isolated from these changes. As a result, operations are simplified, thanks to a centralized control plane with well-defined APIs that facilitate insertion into OSS/BSS systems.

When compared to a conventional BNG architecture, Cisco cloud native BNG architecture brings significant benefits:

1. A clean slate Fixed Mobile Convergence (FMC) ready architecture as the control plane is built from the ground-up with cloud-native tenets, integrating the subscriber management infrastructure components across domains (wireless, wireline, and cable)

2. Multiple levels of redundancy both at the user plane and control plane level

3. Optimized user plane choices for different deployment models at pre-aggregation and aggregation layers for converged services

4. Investment protection, as existing physical BNGs can be used as user planes for cnBNG

5. Granular subscriber visibility using streaming telemetry and mass-scale automation, thanks to extensive YANG models and KPIs streamed via telemetry, enabling real-time API interaction with back-end systems

6. A pay-as-you-grow model that allows customers to purchase user plane network capacity as needed

Analysis has shown that these benefits translate into up to 55% Total Cost of Ownership (TCO) savings.

An Architecture Aligned to Standards

This past June, the Broadband Forum published a technical report on Control and User Plane Separation for a disaggregated BNG – the TR-459 – that notably defines the interfaces and the requirements for both control and user planes. Three CUPS interfaces are defined – the State Control Interface (SCi), the Control Packet Redirect Interface (CPRi), and the Management Interface (Mi).

With convergence in mind, the Broadband Forum has selected the Packet Forwarding Control Protocol (PFCP), defined by 3GPP for CUPS, as the SCi protocol. It is a well-established protocol, especially for subscriber management. While TR-459 is not yet fully mature, Cisco’s current cnBNG implementation is already aligned with it.

On the Road to Full Convergence

Historically, wireline, wireless, and cable subscriber management solutions have been deployed as siloed, centralized, monolithic systems. Now, a common, cloud-native control plane can work with wireline, wireless, and cable access user planes, paving the way to a universal, 5G-core-converged subscriber management solution capable of delivering hybrid services. The network functions (NFs) that are part of the common cloud-native control plane not only share the subscriber management infrastructure, they also provide a consistent interface for policy management, automation, and service assurance systems.


Moving forward, CSPs can envision a complete convergence of the policy layer and other northbound systems, all the way up to their IT systems.

With a converged model in place, customers can consume services and applications from the access technology of their choice, with a consistent experience. Communication service providers, in turn, can pivot to a model with unified support services and monitoring/activation systems, while creating sticky service bundles; as more end-user devices are tied to a single service, customer retention increases.

Cisco is uniquely positioned to help customers embrace this new architecture, with a strong end-to-end ecosystem of converged subscriber management across mobile, wireline, and cable, in addition to a fully integrated telco cloud stack spanning compute, storage, software-defined fabric, and cloud automation.

Source: cisco.com

Saturday 17 April 2021

100-490 RSTECH Free Exam Questions & Answers | CCT Routing and Switching Exam Syllabus


Cisco RSTECH Exam Description:

The Supporting Cisco Routing and Switching Network Devices v3.0 (RSTECH 100-490) exam is a 90-minute, 60-70 question exam associated with the Cisco Certified Technician Routing and Switching certification. The course, Supporting Cisco Routing and Switching Network Devices v3.0, helps candidates prepare for this exam.

Cisco 100-490 Exam Overview:

Your workforce is ready – but is your workplace?


We’re heading back to the office!!

It won’t happen overnight – but the signs are increasingly positive that we are on our way back. Some companies, like Cisco and Google, have begun encouraging a phased return to the office, once the situation permits. Personally, I can’t wait to be in the same physical space as my colleagues, as well as meeting our customers face-to-face.

Like most of you, I want the flexibility to choose where I work. According to our recent global workforce survey, only 9% of employees expect to be in the office 100% of the time. That means IT will need to deliver a consistently secure and engaging experience across both in-office and remote work environments.

Figure 1. The future of work is hybrid

Networking teams need to prepare for the hybrid workplace.


Some people tend to be more prepared than others. Personally, I like to err on the side of being over-prepared. Most network professionals I know are of a similar mindset. So, what does that mean when it comes to the return to the office?

◉ Employee concerns: From our global workplace survey, we learned that 95% of workers are uncomfortable about returning due to fears of contracting COVID-19. Leading the concerns, at 64%, is not wanting to touch shared office devices, closely followed by concerns over riding in a crowded elevator (62%) and sharing a desk (61%).

◉ Business concerns: While businesses need to provide safe and secure work environments, requiring new efforts and solutions, they must also try to mitigate costs and capture savings. One primary approach is using office space more efficiently. Our survey results show that 53% are already looking at options to reduce office space, while 96% indicate they need intelligent workplace technology to improve work environments.

Where to start your return to the office


So, what’s on your mind as we head back to the office? According to IDC, the biggest permanent networking changes organizations are making as a result of COVID are the integration of networking and security management (32%); improved support for remote workers (30%); and improved network automation, visibility, and analytics (28%). So how can you address these priorities as we head back to the office?


If your car had been sitting unused in the driveway for a year, the first thing you’d do is get it serviced. Likewise, your campus and branch networks need to be put through their paces. Utilization is minimal right now, so it is a great time to see what improvements can be made.

◉ Reimagine Connections: Digital business begins with connectivity, so you can’t take it for granted. You can start by making sure your wired and wireless network can support an imminent return to work. With hybrid work, everyone will have video to the desktop. Will your network performance deliver the experience users love? And make sure it is set up to enhance your employees’ safety and work experience with social density and proximity monitoring, workspace efficiency, and smart building IoT requirements.

◉ Reinforce Security: This is the time to automate security policy management, micro-segmentation, and zero-trust access, so that any device or user is automatically authenticated and authorized to access only the resources it’s approved for.

◉ Redefine IT Experience: Make it easy on your team and your business. With automation and AI-enabled analytics technologies, AIOps is now a reality for network operations too. All the tools are available to achieve pre-emptive troubleshooting and remediation from “user/device-to-application” – wherever either is located.

Figure 2. Choose your access networking journey

Here are some valuable tips from our Cisco networking team, which is making the necessary preparations for our own Cisco campuses.


According to our survey, 58% of workers will continue to work from home at least eight days a month. That means you need to keep investing in optimizing the experience for those workers, many of whom still report that their work experience is not optimal. According to IDC, 55% of work-from-home users complain of multiple problems a week, while 50% complain of audio problems on video conferences.

◉ Work from Home: Deliver plug-and-play provisioning and policy automation that allows your remote employees to easily and securely connect to the corporate network without setting up a VPN.

◉ Home Office: For those who want to turn their home network into a “branch of one,” you can create a zero-trust fabric with end-to-end segmentation and always-on network connectivity that provides an enhanced multicloud application experience.


The evolution to a hybrid workforce, together with the accelerated move to cloud and edge applications, has created the perfect storm, demanding a new approach for IT to deliver a secure user experience regardless of where users and applications are located. This new approach is offered by a combination of SD-WAN and cloud security technologies that has been termed Secure Access Service Edge, or SASE. It’s estimated that 40% of enterprises will have explicit strategies to adopt SASE by 2024.

◉ SD-WAN: Of the multiple ways to get started on the path to a full SASE architecture, I would propose that SD-WAN is a wise choice. It offers a secure, mature, and efficient way to access both SaaS and IaaS environments, with multiple deployment and security options.

◉ SASE: Evolving to a full SASE architecture combines networking and security functions in the cloud to deliver secure access to applications anywhere users work. Combined with SD-WAN, this includes security services such as firewall as a service (FWaaS), secure web gateway (SWG), cloud access security broker (CASB), and zero trust network access (ZTNA).

Source: cisco.com    

Friday 16 April 2021

Comparing Lower Layer Splits for Open Fronthaul Deployments

Introduction

The transition to open RAN (Radio Access Network) based on interoperable lower layer splits is gaining significant momentum across the mobile industry. However, where best to split the open RAN is a complex compromise between radio unit (RU) simplification, support for advanced coordinated multipoint RF capabilities, and the consequential requirements on the fronthaul transport, including limitations on transport delay budgets as well as bandwidth expansion. To help compare the alternative options, different splits have been assigned numbers, with higher numbers representing splits “lower down” in the protocol stack, meaning less functionality is deployed “below” the split in the RU. Lower layer splits occur below the medium access control (MAC) layer in the protocol stack, with options including Split 6, between the MAC and physical (PHY) layers; Split 7, within the physical layer; and Split 8, between the physical layer and the RF functionality.

Figure 1: Different Lower Layer Splits in the RAN Protocol Stack

This paper compares the two alternatives for realizing the lower layer split, the network functional application platform interface (nFAPI) Split 6 as defined by the Small Cell Forum (SCF) and the Split 7-2x as defined by the O-RAN Alliance.

Small Cell Splits


The Small Cell Forum took the initial lead in defining a multivendor lower layer split, taking its FAPI platform application programming interface (API) that had been used as an informative split of functionality between small cell silicon providers and the small cell RAN protocol stack providers, and enabling this to be “networked” over an IP transport. This “networked” FAPI, or nFAPI, enables the Physical Network Function (PNF) implementing the small cell RF and physical layer to be remotely located from the Virtual Network Function (VNF) implementing the small cell MAC layer and upper layer RAN protocols. First published by the SCF in 2016, the specification of the MAC/PHY split has since been labelled as “Split 6” by 3GPP TR38.801 that studied 5G’s New Radio access technology and architectures.

The initial SCF nFAPI program delivered important capabilities that enabled small cells to be virtualized, compared with the conventional macro-approach that at the time advocated using the Common Public Radio Interface (CPRI) defined split. CPRI had earlier specified an interface between a Radio Equipment Control (REC) element implementing the RAN baseband functions and a Radio Equipment (RE) element implementing the RF functions, to enable the RE to be located at the top of a cell tower and the REC to be located at the base of the cell tower. This interface was subsequently repurposed to support relocation of the REC to a centralized location that could serve multiple cell towers via a fronthaul transport network.

Importantly, when comparing the transport bandwidth requirements for the fronthaul interface, nFAPI/Split 6 does not significantly expand the bandwidth required compared to more conventional small cell backhaul deployments. Moreover, just like the backhaul traffic, the nFAPI transport bandwidth is able to vary according to served traffic, enabling statistical multiplexing to be used over the fronthaul IP network. This can be contrasted with the alternative CPRI split, also referred to as “Split 8” in TR38.801, that requires bandwidth expansion up to 30-fold and a constant bit rate connection, even if there is no traffic being served in a cell.

HARQ Latency Constraints


Whereas nFAPI/Split 6 offers significant benefits over CPRI/Split 8 in terms of bandwidth expansion, both splits sit below the hybrid automatic repeat request (HARQ) functionality in the MAC layer, which constrains the transport delay budget for LTE fronthaul solutions. LTE-based Split 6 and Split 8 share a common delay constraint of 3 milliseconds from the time uplink data is received at the radio to the time the corresponding downlink ACK/NAK needs to be ready for transmission at the radio. These 3 milliseconds must be divided between HARQ processing and transport, with a common assumption being that 2.5 milliseconds are allocated to processing, leaving 0.5 milliseconds for round-trip transport. This results in the oft-quoted requirement of a 0.25-millisecond one-way transport delay budget between the radio and the element implementing the MAC layer’s uplink HARQ functionality.
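As a quick sanity check, the arithmetic behind that oft-quoted figure can be written out in a few lines; the fibre propagation value of roughly 5 µs per kilometre is an assumption added here purely for illustration.

```python
# Walk-through of the LTE HARQ fronthaul delay budget described above.
# The fibre propagation delay (~5 us/km) is an illustrative assumption.

TOTAL_HARQ_BUDGET_MS = 3.0   # uplink data at radio -> downlink ACK/NAK ready at radio
PROCESSING_MS = 2.5          # commonly assumed allocation for HARQ/PHY processing

round_trip_transport_ms = TOTAL_HARQ_BUDGET_MS - PROCESSING_MS   # 0.5 ms
one_way_transport_ms = round_trip_transport_ms / 2               # 0.25 ms

FIBRE_DELAY_MS_PER_KM = 0.005   # ~5 us per km of fibre (assumption)
max_fibre_km = one_way_transport_ms / FIBRE_DELAY_MS_PER_KM

print(f"One-way transport budget: {one_way_transport_ms} ms "
      f"(roughly {max_fibre_km:.0f} km of fibre, ignoring switching delays)")
```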

The Small Cell Forum acknowledges such limitations when using its nFAPI/Split 6. Because the 0.25-millisecond one-way transport budget may severely constrain nFAPI deployments, SCF defines the use of HARQ interleaving, which uses standardized signaling to defer HARQ buffer emptying, enabling higher latency fronthaul links to be accommodated. Although HARQ interleaving buys additional transport delay budget, the operation has a severe impact on single-UE throughput: as soon as the delay budget exceeds the constraint described above, the per-UE maximum throughput immediately decreases by 50%, with further decreases as delays in the transport network increase.

Importantly, 5G New Radio does not implement the same synchronous up-link HARQ procedures and therefore does not suffer the same transport delay constraints. Instead, the limiting factor constraining the transport budget in 5G fronthaul systems is the operation of the windowing during the random access procedure. Depending on the operation of other vendor specific control loops, e.g., associated with channel estimation, this may enable increased fronthaul delay budgets to be used in 5G deployments.

O-RAN Alliance


The O-RAN Alliance published its “7-2x” Split 7 specification in February 2019. All Split 7 alternatives offer significant benefits over the legacy CPRI/Split 8, avoiding Split 8’s requirement to scale fronthaul bandwidth on a per-antenna basis, resulting in significantly lower fronthaul transport bandwidth requirements, as well as introducing transport bandwidth requirements that vary with the traffic served in the cell. Moreover, when compared to Split 6, the O-RAN lower layer Split 7-2x supports all advanced RF combining techniques, including the higher order multiple-input, multiple-output (MIMO) capability that is viewed as a key enabling technology for 5G deployments, as shown in Table 1, which contrasts Split 6 “MAC/PHY” and Split 7 “split PHY” based architectures.

Table 1: Comparing Advanced RF Combining Capabilities of Lower Layer Splits

However, instead of carrying individual transport channels as the nFAPI interface does, Split 7-2x transports frequency-domain IQ representations of spatial streams or MIMO layers across the lower layer fronthaul interface. The use of frequency-domain IQ symbols can lead to a significant increase in fronthaul bandwidth when compared to the original transport channels. Figure 2 illustrates the bandwidth expansion caused by Split 7-2x occurring “below” the modulation function, where the original 4 bits to be transmitted expand to over 18 bits after 16-QAM modulation, even when a block floating point compression scheme is used.


Figure 2: Bandwidth Expansion with Block Floating Point Compressed Split 7-2x

The bandwidth expansion is a function of the modulation scheme, with higher expansion required for lower order modulation, as shown in Table 2.

Table 2: Bandwidth Expansion for Split 7-2x with Block Floating Point Compression compared to Split 7-3
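To make the trend in Table 2 concrete, here is a minimal sketch of how the expansion ratio can be estimated. The 9-bit I/Q mantissas and 4-bit shared exponent per PRB are assumptions chosen for illustration so that a 16-QAM symbol (4 information bits) expands to a little over 18 bits, as in Figure 2; actual deployments may use other compression bit widths.

```python
# Rough estimate of Split 7-2x fronthaul expansion with block floating point (BFP)
# compression, relative to the information bits carried per modulated symbol.
# The 9-bit mantissas and 4-bit shared exponent per PRB are illustrative assumptions.

MANTISSA_BITS = 9          # per I and per Q sample (assumption)
EXPONENT_BITS = 4          # shared across the 12 resource elements of a PRB (assumption)
RE_PER_PRB = 12

bfp_bits_per_re = 2 * MANTISSA_BITS + EXPONENT_BITS / RE_PER_PRB   # ~18.3 bits

modulation_bits = {"QPSK": 2, "16-QAM": 4, "64-QAM": 6, "256-QAM": 8}

for name, info_bits in modulation_bits.items():
    ratio = bfp_bits_per_re / info_bits
    print(f"{name:8s}: ~{ratio:.1f}x the modulated information bits")

# Lower-order modulation expands the most, matching the trend described in Table 2.
```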

Such bandwidth expansion was one of the reasons that proponents of the so-called Split 7-3 advocated a split “above” the modulation/demodulation function. To address this issue, and the possible fragmentation of the market into different Split 7 solutions, the O-RAN Alliance lower layer split includes a technique termed modulation compression. The operation of modulation compression on a 16-QAM modulated waveform is illustrated in Figure 3: the conventional Split 7-2 constellation diagram is shifted so that the modulation points lie on a grid, which then allows the I and Q components to be represented as binary values instead of floating point numbers. Additional scaling information must be signalled across the fronthaul interface to recover the original modulated constellation points in the RU, but it only needs to be sent once per data section.

Figure 3: User Plane Bandwidth Reduction Using Modulation Compression with Split 7-2x

Because modulation compression requires the in-phase and quadrature points to be perfectly aligned with the constellation grid, it can only be used in the downlink. When used, however, it decreases the bandwidth expansion ratio of Split 7-2x, where the expansion compared to Split 7-3 is now due only to the additional scaling and constellation shift information. This information is encoded as 4 octets and sent with every data section, meaning the bandwidth expansion ratio varies according to how many Physical Resource Blocks (PRBs) are included in each data section. This value can range from a single PRB up to 255 PRBs, with Table 3 showing that the corresponding Split 7-2x bandwidth expansion over Split 7-3 becomes effectively unity when large data sections are used.

Table 3:  Bandwidth Expansion for Split 7-2x with Modulation Compression compared to Split 7-3

Note that even though modulation compression is only applicable to the downlink (DL), the shift of new frequency allocations to Time Division Duplex (TDD) enables a balancing of effective fronthaul throughput between uplink (UL) and downlink. For example, in LTE, 4 of the 7 possible TDD configurations have more slots allocated to downlink traffic, compared to 2 possible configurations with more slots allocated to the uplink. Using a typical 12-to-6 DL/UL configuration, with 256-QAM and 10 PRBs per data section, the overall balance of bitrates for modulation compression in the downlink and block floating point compression in the uplink will be (1.03 x 12) to (2.33 x 6), or 12.40:13.98, i.e., a relatively balanced link in terms of overall bandwidth.
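The DL/UL balance quoted above can be reproduced with a few lines of arithmetic; the per-slot expansion factors (1.03 for modulation compression in the downlink and 2.33 for block floating point in the uplink) are taken from the example in the text, so small rounding differences are expected.

```python
# Reproduces the DL/UL fronthaul balance example above: a 12-to-6 TDD DL/UL split,
# 256-QAM, and 10 PRBs per data section.

DL_SLOTS, UL_SLOTS = 12, 6
DL_EXPANSION = 1.03   # modulation compression (downlink), from the example above
UL_EXPANSION = 2.33   # block floating point (uplink), from the example above

dl_weight = DL_EXPANSION * DL_SLOTS   # 12.36 here; the text quotes 12.40 (rounding)
ul_weight = UL_EXPANSION * UL_SLOTS   # 13.98

print(f"DL : UL fronthaul weighting = {dl_weight:.2f} : {ul_weight:.2f}")
```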

A more comprehensive analysis by the O-RAN Alliance has examined control and user-plane scaling requirements for Split 7-2x with modulation compression and compared the figures with those for Split 7-3. When taking into account other overheads, this analysis indicated that the difference in downlink bandwidth between Split 7-3 and Split 7-2x with Modulation Compression was estimated to be around 7%. Using such analysis, it is evident why the O-RAN Alliance chose not to define a Split 7-3, instead advocating a converged approach based on Split 7-2x that can be used to address a variety of lower layer split deployment scenarios.

Comparing Split 7-2x and nFAPI


Material from the SCF clearly demonstrates that, in contrast to Split 7, its nFAPI/Split 6 approach is challenged in supporting the massive MIMO functionality that is viewed as a key enabling technology for 5G deployments. However, massive MIMO is most applicable to outdoor macro-cellular coverage, where it can be used to handle high mobility and suppress cell-edge interference. Hence, there may be a subset of 5G deployments where massive MIMO support is not required, so let’s compare the other attributes.

With both O-RAN’s Split 7-2x and SCF’s nFAPI lower layer split occurring below the HARQ processing in the MAC layer, both are constrained by exactly the same delay requirements as they relate to LTE HARQ processing and fronthaul transport budgets. Both also permit the fronthaul traffic load to match the served cell traffic, enabling statistical multiplexing within the fronthaul network, and both support transport over a packet network between the Radio Unit and the Distributed Unit.

The managed object for the SCF’s Physical Network Function includes the ability for a single Physical Network Function to support multiple PNF Services. A PNF service can correspond to a cell, meaning that a PNF can be shared between multiple operators, whereby the PNF operator is responsible for provisioning the individual cells. This provides a foundation for implementing Neutral Host. More recently, the O-RAN Alliance’s Fronthaul Working Group has approved a work item to enhance the O-RAN lower layer split to support a “shared O-RAN Radio Unit” that can be parented to DUs from different operators, thus facilitating multi-operator deployment.

Both SCF and O-RAN Split 7-2x solutions have been influenced by the Distributed Antenna System (DAS) architectures that are the primary solution for bringing the RAN to indoor locations. The SCF leveraged the approach to DAS management when defining its approach to shared PNF operation. In contrast, O-RAN’s Split 7-2x has standardized enhanced “shared cell” functionality where multiple RUs are used in creating a single cell. This effectively uses the eCPRI based fronthaul to replicate functionality normally associated with digital DAS deployments.

Comparing fronthaul bandwidth requirements, it’s evident that the 30-fold bandwidth expansion of CPRI was one of the main reasons for SCF to embark on its nFAPI specification program. However, the analysis above highlights how O-RAN has delivered important capabilities in Split 7-2x to limit the necessary bandwidth expansion and avoid fragmenting the lower layer split market between alternative split-PHY approaches. Hence, the final aspect to compare is how much the bandwidth expands when going from Split 6 to Split 7-2x. Figure 1 illustrates that the bandwidth expansion between Split 6 and Split 7-3 is due to channel coding. With O-RAN having already estimated that Split 7-3 offers a 7% bandwidth saving compared to Split 7-2x with modulation compression, we can use the channel coding rate to estimate the bandwidth expansion between Split 6 and Split 7-2x. Table 4 uses typical LTE coding rates for 64QAM modulation to calculate the expansion due to channel coding, combined with the additional 7% due to modulation compression, to estimate the difference in required bandwidth. The table shows that the difference in bandwidth between nFAPI/Split 6 and Split 7-2x is a function of the channel coding rate and can be as high as 93% for 64QAM with a 1/2 rate code, and as low as 16% for 64QAM with an 11/12 rate code.

Table 4: Example LTE 64QAM Channel Coding Bandwidth Expansion
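The figures in Table 4 follow from a simple relationship: the Split 6 to Split 7-3 expansion is the inverse of the channel coding rate, and Split 7-2x with modulation compression adds roughly 7% on top of that. The sketch below applies that formula; the specific coding rates are illustrative assumptions (the “1/2 rate” case appears to correspond to an effective LTE 64QAM code rate nearer 0.55, which is what reproduces the quoted 93%).

```python
# Estimate the Split 6 -> Split 7-2x bandwidth expansion as a function of the channel
# coding rate. Assumes Split 7-2x (with modulation compression) carries ~7% more than
# Split 7-3, per the O-RAN estimate cited above. The coding rates are illustrative.

SPLIT72X_OVER_SPLIT73 = 1.07

def expansion_vs_split6(code_rate: float) -> float:
    """Fractional bandwidth increase of Split 7-2x over Split 6 (nFAPI)."""
    return (1.0 / code_rate) * SPLIT72X_OVER_SPLIT73 - 1.0

for rate in (0.55, 3 / 4, 11 / 12):   # example LTE 64QAM coding rates (assumed)
    print(f"code rate {rate:.3f}: ~{expansion_vs_split6(rate) * 100:.0f}% more bandwidth")

# Roughly 95%, 43%, and 17% respectively, in line with the 93% and 16% endpoints
# quoted above once the assumed rates are rounded.
```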

Whereas the above analysis indicates that the cost of implementing channel coding above the RU in Split 7-2x is a nominal increase in bandwidth, the benefit of such an approach is a significant simplification of the RU by removing the need to perform channel decoding. Critically, the channel decoder requires highly complex arithmetic and can become the bottleneck in physical layer processing. Often, this results in the use of dedicated hardware accelerators that can add significant complexity and cost to an nFAPI/Split 6 Radio Unit. In contrast, O-RAN’s Split 7-2x allows the decoding functionality to be centralized, where it is expected to benefit from increased utilization and associated efficiencies, while simplifying the design of the O-RAN Radio Unit.

Source: cisco.com

Thursday 15 April 2021

Get Hands-on Experience with Programmability & Edge Computing on a Cisco IoT Gateway

Are you still configuring your industrial router with CLI? Are you still getting network telemetry data with SNMP? Do you still use many industrial components when you can just have one single ruggedized IoT gateway that features an open edge-compute framework, cellular interfaces, and high-end industrial features?


Get ready to try out these features in an all-new learning lab and DevNet Sandbox featuring real IR1101 ruggedized hardware.

◉ Take me to the new learning lab

◉ Take me directly to the Industrial Networking and Edge Compute IR1101 Sandbox

Architecture and feature overview of industrial networking and edge compute in the IR1101 Sandbox

The Industrial Router 1101


The Cisco IoT Gateway IR1101 delivers secure IoT connectivity for today and the future. Its 5G ready modular design allows you to upgrade to new communications protocols when they become available, avoiding costly rip-and-replace. Add or upgrade WAN, edge compute and storage components as technologies and your needs evolve. With its rugged hardware and compact form-factor, you can install it almost anywhere.

Here are a few examples of use cases for the IR1101

Utilities: Remotely manage thousands of miles of unmanned power grids between distribution substations and control centers. Improve power flow, Volt-VAR optimization, and fault detection and isolation, resulting in reduced outage durations and costs.

Public safety and transportation: The IR1101 provides redundant WAN connectivity for increased reliability. And with intelligence at the edge, you can accelerate decision making for mission-critical applications such as public safety, so you can better regulate traffic flow and detect traffic violations.

Oil and gas: Make decisions at the edge for faster response. Utilize cellular redundancy to manage thousands of miles of remote oil and gas pipelines to quickly identify and fix problems, limit downtime, and reduce costs.

WebUI & high-end industrial feature-set


Get familiar with the user-friendly on-box Device Manager (WebUI), as seen below. Users can easily navigate in their browser through the monitoring data, configuration, and settings of their industrial device.

Graphical User interface on the IR1101

Of course, you can also access many other industrial-specific features via SSH, such as QoS, VPN, and seamless SCADA integration with Raw Socket, DNP3 serial/IP, and IEC 60870 T101/T104 protocol translation.

IOx Edge Compute


Furthermore, it is possible to install containerized applications directly on the gateway. Try deploying your own Docker containers or IOx applications on the ARM-powered CPU of the IR1101. We have prepared a sample server application on the DevNet Code Exchange which you can download or build.
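For a flavour of the kind of workload you might package, here is a minimal sketch of a containerized service using only Python’s standard library. It is not the DevNet Code Exchange sample itself, and port 8000 is assumed only to mirror the Local Manager example shown below.

```python
# Minimal HTTP service of the kind that could be packaged as a Docker/IOx app for the
# IR1101 (ARM-based). Illustrative sketch only; not the DevNet Code Exchange sample.

import json
import platform
from http.server import BaseHTTPRequestHandler, HTTPServer

class StatusHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Report a simple status document, including the CPU architecture the
        # container is running on (an ARM value when hosted on the IR1101).
        body = json.dumps({"status": "ok", "arch": platform.machine()}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8000), StatusHandler).serve_forever()   # port assumed
```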

On-boxed IOx Local Manager: Managing your IOx applications on the IR1101 – here NGINX server is installed and reachable on Port 8000

Device APIs NETCONF/RESTCONF & Model-Driven Telemetry


Since the IR1101 runs Cisco’s open and programmable operating system, IOS XE, you can also configure the device via device-level APIs such as NETCONF and RESTCONF. This means, for example, that you can change any device configuration by simply running a Python script from your local machine and applying the changes to as many devices as you want.
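As a hedged illustration of that workflow, the short sketch below uses the open-source ncclient library to push a hostname change over NETCONF through the Cisco-IOS-XE-native model. The device address, credentials, and new hostname are placeholder values for a lab or sandbox environment, not a prescribed procedure.

```python
# Illustrative sketch: change the hostname of an IOS XE device (such as the IR1101)
# over NETCONF using ncclient. Host, credentials, and hostname are lab placeholders.

from ncclient import manager

CONFIG = """
<config>
  <native xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-native">
    <hostname>IR1101-lab</hostname>
  </native>
</config>
"""

with manager.connect(
    host="10.10.20.100",      # placeholder management address
    port=830,                 # default NETCONF-over-SSH port
    username="developer",     # placeholder credentials
    password="C1sco12345",
    hostkey_verify=False,
) as m:
    reply = m.edit_config(target="running", config=CONFIG)
    print(reply)
```

Pointing the same script at a list of device addresses is how the “as many devices as you want” idea plays out in practice.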

Model-driven Telemetry (MDT) provides a mechanism to stream data from an MDT-capable device (=IR1101) to a destination (e.g. database and dashboard).

It takes a new approach to network monitoring in which data is streamed continuously from network devices using a publish/subscribe model, providing near real-time access to operational statistics. Applications can subscribe to the specific data items they need using standards-based YANG data models over open protocols. Structured data is published at a defined cadence or on change, based on the subscription criteria and data type.

The operational data of the IR1101 is transmitted via gRPC (a high-performance, open-source, universal RPC framework) to a third-party collector or receiver, in our example a Telegraf/InfluxDB/Grafana stack.
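To give a feel for what such a subscription looks like, the sketch below uses NETCONF and the Cisco-IOS-XE-mdt-cfg model to create a periodic gRPC dial-out subscription for CPU utilization. The subscription ID, XPath, receiver address and port (a Telegraf gRPC listener in this example), and credentials are all assumptions for a lab setup.

```python
# Illustrative sketch: configure a periodic model-driven telemetry subscription on an
# IOS XE device (such as the IR1101) that streams CPU utilization over gRPC dial-out
# to a collector (e.g. Telegraf). IDs, addresses, and credentials are lab placeholders.

from ncclient import manager

MDT_SUBSCRIPTION = """
<config>
  <mdt-config-data xmlns="http://cisco.com/ns/yang/Cisco-IOS-XE-mdt-cfg">
    <mdt-subscription>
      <subscription-id>101</subscription-id>
      <base>
        <stream>yang-push</stream>
        <encoding>encode-kvgpb</encoding>
        <period>1000</period>  <!-- in centiseconds, i.e. every 10 seconds -->
        <xpath>/process-cpu-ios-xe-oper:cpu-usage/cpu-utilization/five-seconds</xpath>
      </base>
      <mdt-receivers>
        <address>10.10.20.50</address>  <!-- placeholder collector (Telegraf) -->
        <port>57500</port>
        <protocol>grpc-tcp</protocol>
      </mdt-receivers>
    </mdt-subscription>
  </mdt-config-data>
</config>
"""

with manager.connect(host="10.10.20.100", port=830, username="developer",
                     password="C1sco12345", hostkey_verify=False) as m:
    print(m.edit_config(target="running", config=MDT_SUBSCRIPTION))
```

On the collector side, Telegraf’s cisco_telemetry_mdt input plugin listening on the same gRPC port is one common way to land this data in InfluxDB for a dashboard like the one shown below.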

Sample Grafana Dashboard in the sandbox: Near real-time monitoring of the CPU utilization on the IR1101 with model-driven telemetry

Source: cisco.com

Tuesday 13 April 2021

Year 2020 and EWC – Embedded Wireless Controller on AP


What a year 2020 was, and what a success it was for the Cisco Embedded Wireless Controller!

Despite COVID-19 transforming our lives, and despite the challenges many of us faced working in a virtual environment, the C9100 EWC had an excellent year.

We had many thousands of EWC software downloads, and C9100 EWC product bookings increased quarter after quarter. We had more than 200 customers controlling 13K+ access points!

Let’s summarize some learnings from customers’ experience with EWC in 2020:

Why are customers so interested in EWC, and how does it address their needs?

The short story: EWC gives them the full Catalyst 9800 experience while running in a container on the access point itself.

The long story: For small and medium businesses, EWC is the sweet spot for managing wireless networks. It is simple to use, secure by design, and, above all, ready to grow as the business grows, thanks to its flexible architecture. Once your network grows beyond 100 APs, it can easily be migrated to an appliance-based or cloud-based controller, so it offers investment protection.

The EWC is supported on all 11ax APs, and the scale varies from 50 APs/1,000 clients (C9105AXI, C9115AX, C9117AX) to 100 APs/2,000 clients (C9120AX, C9130AX). With such scale, a medium-sized site or branch deployment gets the advantage of an integrated wireless controller, so no other physical hardware is needed.

What EWC features/capabilities are most sought by the customers?

The short story: The EWC is an all-in-one Controller, combining the best-in-class Cisco RF innovations of an 11ax Access Point with the advanced enterprise features of a Cisco Controller.

The long story: First, the Cisco RF innovations in 11ax APs that customers find most appealing:

◉ RF Signature Capture provides superior security for mission-critical deployments.

◉ 11ax APs offer Zero-Wait, Dual Filter DFS (Dynamic Frequency Selection). 9120/9130 APs will use both client-radio and Cisco RF ASIC to detect radar and to virtually eliminate DFS false positives.

◉ Cisco APs implement the aWIPS (adaptive Wireless Intrusion Prevention System) feature, a threat detection and mitigation mechanism using signature-based techniques, traffic analysis, and device/topology information. It is a fully infrastructure-integrated solution.

In addition, a list of EWC enterprise-ready features that customers are looking for:

◉ AAA Override on WLANs (SSIDs) – the administrator can configure the wireless network for RADIUS authentication and apply VLAN/QOS/ACLs to individual clients based on AAA attributes from the server.

◉ Full support for the latest WPA3 Security Standard and for Advanced Wireless Intrusion Prevention (aWIPS).

◉ AVC (Application Visibility and Control) – the administrator can rate limit/drop/mark traffic based on client application.

◉ Controller Redundancy – any 11ax AP could play the Active/Standby role. EWC has the flexibility to designate the preferred Standby Controller AP.

◉ Identify Apple iOS devices and apply prioritization of business applications for such clients.

◉ mDNS Gateway – forwarding Bonjour traffic by re-transmitting the traffic between reflection enabled VLANs.

◉ Integration with Cisco Umbrella for blocking malicious URLs, malware, and phishing exploits.

◉ Programmable interfaces with NETCONF/Yang for automation, configuration, and monitoring.

◉ Software Maintenance Upgrades (SMUs) can be applied to either Controller software or AP software.

OK, we see a lot of interesting features, and with so many, a certain degree of complexity might be expected. The next question that comes to mind is:

How about the ease of use of the EWC?

As per reports from the field, the device can be configured in eight minutes for Day-0 configuration using the WebUI (Smart Dashboard) and the mywifi.cisco.com URL.

The WebUI has been reported as being ‘very straightforward’.

There is no need to reboot the AP after Day-0 configuration is applied.

A quote from a third-party assessment (Miercom) says everything: “The Cisco EWC solution is one of the easiest wireless products to deploy that we’ve encountered to date.”

The user configures a short list of items in Day-0 (either in the WebUI or in the CLI): username/password, AP profile, WLAN, wireless profile policy, and the default policy tag.

An alternative to WebUI is the mobile app from either Google Play or Apple App Store. The app allows the user to bring up the device in Day-0, or to view the fleet of APs, the top list of clients, or any other wireless statistics.

The EWC WebUI is very similar to the 9800 WebUI, so a potential transition to an appliance-based Controller is seamless. Please see the snapshot below:

Trying the EWC WebUI yourself is the most convincing demonstration of its ease of use.


What else did customers like in 2020 regarding EWC?

A couple of EWC deliverables in release 17.4 were welcomed by customers:

◉ DNA-license-free availability for EWC reduces the total cost of ownership while still giving customers the advantage of the Network Essentials stack by default.

◉ The new Access Point 9105 models (9105AXI, 9105AXW) give customers value options for their network deployments, with EWC support on the 9105AXI.

Regarding the new 9105 Access Points, the 11ax feature-set is rich: 2×2 MU-MIMO with two spatial streams, uplink/downlink OFDMA, TWT, BSS coloring, 802.11ax beamforming, 20/40/80 MHz channels.

9105AXI has a 1×1.0 mGig uplink interface, while the wall-mountable version (9105AXW) has 3×1.0 mGig interfaces, a USB port, and a Passthru port.

Upcoming IOS XE releases in 2021 already have new and interesting features planned for EWC, so please stay tuned!

Bottom line


EWC proved last year to be a simple, flexible, and secure platform of choice for small and medium business customers, and EWC customer adoption grew continuously throughout 2020.

Source: cisco.com

Monday 12 April 2021

What are you missing when you don’t enable global threat alerts?


Network telemetry is a reservoir of data that, if tapped, can shed light on users’ behavioral patterns, weak spots in security, potentially malicious tools installed in enterprise environments, and even malware itself.

Global threat alerts (formerly Cognitive Threat Analytics, or CTA) is great at taking an enterprise’s network telemetry and running it through a pipeline of state-of-the-art machine learning and graph algorithms. After processing the traffic data in batches within a matter of hours, global threat alerts correlates user behaviors, assigns priorities, and intelligently groups detections to give security analysts clarity into the most important threats in their network.

Smart alerts

All detections are presented in a context-rich manner, which gives users the ability to drill into the specific security events that support the threat detections, which are eventually grouped into alerts. This is useful because just detecting potentially malicious traffic in your infrastructure isn’t enough; analysts need to build an understanding of each threat detection. This is where global threat alerts saves you time investigating alerts and accelerates resolution.

Figure 1: Extensive context helps security analysts understand why an alert was triggered and the reasons behind the conviction.

As depicted below in Figure 2, users can both change the severity levels of threats and rank high-priority asset groups from within the global threat alerts portal. This enables users to customize their settings so they are alerted only to the types of threats their organizations are most concerned about, as well as to indicate which resources are most valuable. These settings allow users to set the proper context for threat alerts in their business environment.

Figure 2: You change the priority of threats and asset groups from within the global threat alerts portal.

Alerts are also presented in a more intuitive manner, with multiple threat detections grouped into a single alert based on the following parameters:

◉ Concurrent threats: Different threats that are occurring together.​

◉ Asset group value: Threats occurring on endpoints that belong to asset groups with similar business value.

Figure 3: Different threats that have been grouped together in one single alert, because they are all happening concurrently on the same assets.

Rich detection portfolio


Global threat alerts is continuously tracking and evolving hundreds of threat detections across various malware families, attack patterns, and tools used by malicious actors.

All these outcomes and detections are available for Encrypted Traffic Analytics (ETA) telemetry as well, which allows users to find threats in encrypted traffic without decrypting it. Moreover, because ETA telemetry contains more information than traditional NetFlow, the global threat alerts research team has also developed specific classifiers capable of finding additional threats in this data, such as algorithms focused on detecting malicious patterns in the path and query of a URL.
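Purely as an illustration of the concept (not Cisco’s actual classifiers), the kind of lightweight features such URL-focused algorithms might derive from the path and query could look like this:

```python
# Illustrative only: simple features a URL path/query classifier might compute.
# This is NOT the global threat alerts implementation, just a sketch of the idea.

import math
from collections import Counter
from urllib.parse import urlsplit

def shannon_entropy(text: str) -> float:
    """Character entropy; high values can indicate encoded or generated strings."""
    if not text:
        return 0.0
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def url_features(url: str) -> dict:
    parts = urlsplit(url)
    return {
        "path_depth": parts.path.count("/"),
        "path_len": len(parts.path),
        "query_len": len(parts.query),
        "query_params": parts.query.count("&") + 1 if parts.query else 0,
        "path_entropy": round(shannon_entropy(parts.path), 2),
        "query_entropy": round(shannon_entropy(parts.query), 2),
    }

# Features like these could feed a supervised classifier trained on labelled traffic.
print(url_features("http://example.com/ajx/u.php?id=dGVzdA&ver=2"))
```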

The global threat alerts’ research team is continuously engaged in dissecting new security threats and implementing the associated threat intelligence findings into hundreds of specialized classifiers. These classifiers are targeted at revealing campaigns that attackers are using on a global scale. Examples of these campaigns include the Maze ransomware and the njRAT remote access trojan. Numerous algorithms are also designed to capture generic malicious tactics like command-and-control traffic, command-injections, or lateral network movements.

Risk map of the internet


There are numerous algorithms focused on uncovering threat infrastructure in the network. These models are continuously discovering relationships between known malicious servers and new servers that have not yet been defined as malicious, but either share patterns or client bases with the known malicious servers. These models also constantly exchange newly identified threat intelligence with other Cisco security products and groups, such as Talos.

Figure 4: Analyzing common users of known malicious infrastructure and unclassified servers, global threat alerts can uncover new malicious servers.

This approach to threat detection consists of multiple layers of machine learning algorithms that provide high-fidelity detections which are always up to date and relevant, as researchers constantly update the machine learning models. Additionally, all this computation is done in the cloud and uses only network telemetry data to derive new findings. The findings and alerts are presented to users in Secure Network Analytics and Secure Endpoint.

Global threat alerts uses state-of-the-art algorithms to provide high-fidelity, unique threat detections for north-south network traffic, Smart Alerts to help prioritize and accelerate resolutions, and a risk map to provide greater context and understanding of how threats span across the network.