Tuesday, 27 April 2021

F5 & Cisco ACI Essentials – Dynamic pool sizing using the F5 ACI ServiceCenter

APIC EndPoints and EndPoint Groups

When dealing with a Cisco ACI environment you may have wondered about using an Application-Centric Design or a Network-Centric Design. Both are valid designs. Regardless of the strategy, the ultimate goal is to have an accessible and secure application/workload in the ACI environment. An application is composed of several servers, each one performing a function for the application (web server, DB server, app server, etc.). Each of these servers may be physical or virtual and is treated as an endpoint on the ACI fabric. Endpoints are devices connected to the network directly or indirectly. They have an address and attributes, and can be physical or virtual. Endpoint examples include servers, virtual machines, network-attached storage, and clients on the Internet. An EPG (EndPoint Group) is an object that contains a collection of endpoints, which can be added to an EPG either dynamically or statically. Take a look at the relationship between different objects on the APIC.

ACI object relationship hierarchy

Relationship between Endpoints and Pool members


If an application is being served by web servers with IP addresses in the range 192.168.56.*, then these IP addresses will be present as endpoints in an endpoint group (EPG) on the APIC. From the perspective of the BIG-IP, these web servers are pool members of a particular pool.

Relationship between Endpoints and Pool members

The F5 ACI ServiceCenter is an application developed on the Cisco ACI App Center platform designed to run on the APIC controller. It has access to both APIC and BIG-IP and can correlate existing information on both to provide a mapping as follows.

BIG-IP                                                                APIC
VIP : Pool : Pool Member(s)               Tenant : Application Profile : Endpoint Group

This gives an administrator a view of how the APIC workload is associated with the BIG-IP and which applications and virtual IPs are tied to a tenant.
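The correlation itself amounts to joining the two inventories on member IP address. A minimal illustrative sketch follows; the data shapes and names are assumptions for illustration, not the ServiceCenter's actual internals:

```python
# Hedged sketch: correlate BIG-IP pool members with APIC endpoints by IP.
# Record shapes here are hypothetical, chosen only to show the join.

def correlate(bigip_pools, apic_endpoints):
    """Map each BIG-IP pool to the APIC (tenant, app profile, EPG) tuples
    that contain its members, by joining on member IP address."""
    # Index APIC endpoints by IP for constant-time lookups.
    ep_index = {ep["ip"]: ep for ep in apic_endpoints}
    mapping = {}
    for pool, member_ips in bigip_pools.items():
        epgs = set()
        for ip in member_ips:
            ep = ep_index.get(ip)
            if ep:
                epgs.add((ep["tenant"], ep["app_profile"], ep["epg"]))
        mapping[pool] = sorted(epgs)
    return mapping
```

Given a pool whose members' IPs all appear as endpoints of one EPG, the result maps that pool to a single (tenant, application profile, EPG) tuple, which is exactly the table shown above.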

Dynamic EndPoint Attach and Detach


Let's think back to our application, which may be hosted on hundreds of servers. These servers could be added to an APIC EPG statically by a network admin, or dynamically through a vCenter or OpenStack APIC integration. In either case, these endpoints ALSO need to be added to the BIG-IP, where they can be protected from malicious attacks and/or load balanced. This can be a very tedious task for an APIC or a BIG-IP administrator.


Using the dynamic EndPoint attach and detach feature on the F5 ACI ServiceCenter this burden can be reduced. The application has the ability to adjust the pool members on the BIG-IP based on the server farm on the APIC. On APIC when an endpoint is attached, it is learned by the fabric and added to a particular tenant, application profile and EPG on the APIC. The F5 ACI ServiceCenter provides the capability to map an EPG on the APIC to a pool on the BIG-IP. The application relies on the attach/detach notifications from the APIC to add/delete the BIG-IP pool-members.
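The attach/detach flow above can be sketched as a small event handler that keeps pool membership in sync with the EPG; the event format and `Pool` class below are hypothetical stand-ins for the real APIC notifications and AS3-driven updates:

```python
# Hedged sketch of the dynamic endpoint attach/detach idea: react to APIC
# endpoint notifications by adding/removing BIG-IP pool members. The real
# application consumes APIC notifications and drives the BIG-IP via AS3.

class Pool:
    def __init__(self, name, service_port=80):
        self.name = name
        self.service_port = service_port
        self.members = set()

    def handle_event(self, event):
        """Apply one endpoint notification (shape assumed for illustration)."""
        member = (event["ip"], self.service_port)
        if event["action"] == "attach":
            self.members.add(member)      # endpoint learned: add pool member
        elif event["action"] == "detach":
            self.members.discard(member)  # endpoint gone: remove pool member
```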

Mapping EPG to Pool members

There are different ways in which the dynamic mapping can be leveraged using the F5 ACI ServiceCenter, based on the L4-L7 configuration. In all the scenarios described below, the L4-L7 configuration is deployed on the BIG-IP using AS3 (a flexible, low-overhead mechanism for managing application-specific configurations on a BIG-IP system).

Scenario 1: Declare L4-L7 configuration using F5 ServiceCenter

Scenario 2: L4-L7 configuration already exists on the BIG-IP

Scenario 3: Use dynamic mapping but do not declare the L4-L7 configuration using the F5 ServiceCenter

Scenario 4: Use the F5 ServiceCenter APIs to define the mapping along with the L4-L7 configuration

Let’s take a look at each one in detail:

Scenario 1: Declare L4-L7 configuration using F5 ServiceCenter


Let's assume there is no existing configuration on the BIG-IP and a new application needs to be deployed, front-ended by a VIP/Pool/Pool members. The F5 ACI ServiceCenter provides a UI that can be used to deploy the L4-L7 configuration and create a Pool <-> EPG mapping.

Step 1: Define an application using one of the in-built templates

Defining an Application using built-in templates

Step 2: Click on the Manage Endpoint mappings button to create a mapping

Managing Endpoint mappings

Scenario 2: L4-L7 configuration already exists on the BIG-IP


If L4-L7 configuration using AS3 already exists on the BIG-IP, the F5 ACI ServiceCenter will detect all partitions and applications that are compatible with AS3. The configuration for a particular partition/application on the BIG-IP can then be updated to create a Pool <-> EPG mapping. There is one condition, however: a pool can have either static or dynamic members, so if the pool already has existing members, those will have to be deleted before a dynamic mapping can be created. To maintain the dynamic mapping, any future changes to the L4-L7 configuration on the BIG-IP should be done via the ServiceCenter.

Scenario 3: Use dynamic mapping but do not declare the L4-L7 configuration using the F5 ServiceCenter


The F5 ACI ServiceCenter can be used just for the dynamic mapping and pool sizing, and not for defining the L4-L7 configuration. With this method, the entire AS3 declaration, along with the mapping, is sent directly to the BIG-IP using AS3.

Sample declaration (the members and constants sections create the mapping between Pool <-> EPG):

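As a hedged sketch of what such a declaration might look like, assuming the documented `constants`/`serviceCenterEPG` convention and AS3's event-driven member discovery (`addressDiscovery: "event"`); all tenant, application, and EPG names below are placeholders:

```python
# Illustrative AS3 declaration, built as a Python dict. The
# "constants"/"serviceCenterEPG" mapping and event-driven discovery follow
# the pattern documented for the ServiceCenter; names are placeholders.
declaration = {
    "class": "AS3",
    "action": "deploy",
    "declaration": {
        "class": "ADC",
        "schemaVersion": "3.0.0",
        "SampleTenant": {
            "class": "Tenant",
            "SampleApp": {
                "class": "Application",
                "template": "http",
                "constants": {
                    "class": "Constants",
                    # Maps the BIG-IP pool "web_pool" to an APIC EPG by its DN.
                    "serviceCenterEPG": {
                        "web_pool": "uni/tn-SampleTenant/ap-AppProfile/epg-WebEPG"
                    }
                },
                "serviceMain": {
                    "class": "Service_HTTP",
                    "virtualAddresses": ["10.0.0.100"],
                    "pool": "web_pool"
                },
                "web_pool": {
                    "class": "Pool",
                    "members": [{
                        "servicePort": 80,
                        # Event-driven discovery: members are added/removed by
                        # endpoint attach/detach events, not listed statically.
                        "addressDiscovery": "event"
                    }]
                }
            }
        }
    }
}
```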

Since the declaration is AS3, the F5 ACI ServiceCenter will automatically detect the Pool <-> EPG mapping, which can be viewed from the inventory tab.


Scenario 4: Use the F5 ServiceCenter APIs to define the mapping along with the L4-L7 configuration


Finally, if the UI is not appealing and automation all the way is the goal, the F5 ServiceCenter has an API call through which the mapping as well as the L4-L7 configuration from Scenario 1 can be completely automated. Here the declaration is passed to the F5 ACI ServiceCenter through the APIC controller and NOT directly to the BIG-IP.

URI: https://<apic_controller_ip>/appcenter/F5Networks/F5ACIServiceCenter/updateas3data.json

Body/declaration

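As a sketch of how this call might be driven programmatically: the session would first be established against the standard APIC `aaaLogin.json` REST login, and the AS3 body then POSTed to the ServiceCenter URI above. Addresses and credentials are placeholders, and the actual HTTP POSTs are deliberately omitted here:

```python
import json

# Hedged sketch of Scenario 4 automation. Only the request payloads and URL
# are built; sending them (e.g. with the requests library, reusing the
# session cookie returned by aaaLogin) is left out.

APIC = "https://apic.example.com"  # placeholder controller address

def login_payload(user, pwd):
    """Body for POST {APIC}/api/aaaLogin.json, the standard APIC REST login,
    which returns a session cookie used on subsequent calls."""
    return json.dumps({"aaaUser": {"attributes": {"name": user, "pwd": pwd}}})

def service_center_url(apic=APIC):
    """The ServiceCenter endpoint that accepts the AS3 declaration together
    with the Pool <-> EPG mapping."""
    return f"{apic}/appcenter/F5Networks/F5ACIServiceCenter/updateas3data.json"
```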

Having knowledge of how AS3 works is essential, since it is a declarative API and using it incorrectly can result in incorrect configuration. Any of the methods mentioned above works; the decision on which to use is driven by the operational model that works best in your environment.

Source: cisco.com

Sunday, 25 April 2021

Securing the air with Cisco’s wireless security solution

With the proliferation of IoT and BYOD devices, wireless security is top of mind for network administrators and customers. Globally, there will be nearly 628 million public Wi-Fi hotspots by 2023, almost a four-fold increase from 2018. This will increase the attack surface and hence the vulnerability of the network. The total number of DDoS attacks is predicted to reach 15.4 million by 2023, more than double the number from 2018. Due to the inherently open nature of wireless communications, wireless LANs are exposed to a multitude of security threats, including DoS flood attacks.

Number of DDoS attacks (Source: Cisco Annual Internet Report, 2018–2023)

Cisco Next Generation Advanced Wireless Intrusion Prevention System (aWIPS) is one of the solutions in Cisco’s multi-pronged approach to providing wireless security. aWIPS is a wireless intrusion threat detection and mitigation mechanism that secures the air. aWIPS along with currently offered Rogue management solution provides security against DoS attacks, management frame attacks, tool-based attacks and more. 

Solution Components


The aWIPS and Rogue management solution comprises Cisco access points, wireless LAN controllers, and Cisco DNA Center. This solution is supported on all 802.11ax/802.11ac Wave 2 Cisco access points and Cisco 9800 series controllers.


Access Points: Access points detect threats using signature-based techniques. Access points can operate in monitor, local, and flex-connect mode. In monitor mode, radios continuously scan all channels for any threats, but they don’t serve any clients. In local and flex-connect mode, access point radios serve clients and scan for threats on client serving channels. On non-serving channels they would do best-effort scanning for any possible threats.  With Cisco’s Catalyst 9130 and 9120 WiFi 6 access points, there is an additional custom RF ASIC radio that continuously monitors all channels for any threats, while the other radios serve the clients. With this dedicated radio, we significantly improve our threat detection capabilities.

Cisco 9800 series controllers: Cisco WLAN controllers configure the access points and receive the alarms and rogue information reported by them, sending consolidated reports to Cisco DNA Center.

Cisco DNA Center: Cisco DNA Center provides simple workflows that allow users to customize aWIPS signatures and rogue rules. It constantly monitors, aggregates, correlates, and classifies all the rogue events and alarms received from the managed access points. Using network intelligence as well as topology information, DNA Center accurately pinpoints the source of an attack and allows users to contain the attack before any actual damage or exposure occurs.


Intuitive, Simple and Secure


Cisco aWIPS and Rogue management solution is intuitive and simple to configure, but has advanced signature-based techniques, network intelligence and analytics to detect threats. With Cisco aWIPS and Rogue management solution, the network is secure against all types of on-the-air wireless attacks.

Denial of Service:

Denial of service attacks aim to cause resource exhaustion and thus deny legitimate users access to the wireless service. Due to the nature of wireless communication, DoS flood attacks are very prevalent on these networks.

DoS flood attacks snapshot (3-month period) from a wireless network

With aWIPS, we detect, report, and provide the location of the following DoS attacks:

◉ Targeted towards access points: Access points have limited resources, and DoS flood attacks like authentication flood, association flood, EAPOL-start flood, PS-Poll flood, probe request flood, and re-association flood can overwhelm an access point.

◉ Targeted towards infrastructure: DoS flood attacks like RTS flood, CTS flood, or beacon flood cause RF spectrum congestion and thus block legitimate clients from accessing the wireless network.

◉ Targeted towards clients: Attacks like de-authentication flood, disassociation flood, broadcast de-authentication flood, broadcast disassociation flood, EAPOL logoff flood, authentication failure attack, probe response flood, block ack flood can cause valid clients to disconnect or can prevent them from joining the network, thus disrupting wireless service.

◉ Targeted to exploit known vulnerabilities/bugs: Attacks using fuzzed beacons, fuzzed probe requests, fuzzed probe responses, malformed association requests, and malformed authentication exploit known vulnerabilities/bugs in wireless devices, causing crashes and leading to denial of service.

aWIPS also detects AirDrop sessions, which can present security risks since these peer-to-peer connections are unauthorized in corporate settings. As part of the aWIPS solution, we also alert users to any invalid MAC OUI use in the network.

Impersonation and Intrusion

Rogue management provides protection against AP impersonation, Honeypot AP and Rogue-on-wire. Using auto-containment/manual containment, any rogue attacks can be thwarted before actual damage occurs.

Not one size fits all


Every network is different, and what is deemed acceptable and expected behavior on one network need not be acceptable on another. With Cisco DNA Center, we provide the following configuration knobs to allow our customers to fine-tune aWIPS signatures and Rogue rules based on their network needs:

1. Flexibility to select signatures.
2. Configurable thresholds for signatures.
3. Configurable threat levels.

These configuration knobs allow one to configure aWIPS signatures to fit their network characteristics.
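To illustrate what a configurable signature threshold means in practice, here is a toy sliding-window flood detector. It is a deliberate simplification, not how aWIPS signatures are actually implemented on the access points:

```python
from collections import deque

class FloodSignature:
    """Toy flood detector: fire when more than `threshold` frames of one
    type arrive within `window` seconds. Illustrative only; real aWIPS
    signatures run on the AP radios and are far more sophisticated."""

    def __init__(self, frame_type, threshold, window=1.0):
        self.frame_type = frame_type
        self.threshold = threshold   # the configurable knob
        self.window = window
        self.times = deque()

    def observe(self, frame_type, timestamp):
        """Feed one observed frame; return True if the signature fires."""
        if frame_type != self.frame_type:
            return False
        self.times.append(timestamp)
        # Drop observations that have aged out of the sliding window.
        while self.times and timestamp - self.times[0] > self.window:
            self.times.popleft()
        return len(self.times) > self.threshold
```

Raising `threshold` (or shrinking `window`) makes the signature less sensitive, which is the kind of per-network tuning the knobs above enable.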

Users can add Rogue rules to customize Rogue detection and management. The rules allow users to configure threat levels and conditions like SSID, RSSI, encryption and rogue client count.

aWIPS signature customization

Cisco DNA Center provides simple workflows that enable customers to customize aWIPS signatures and Rogue rules.

Rogue rule customization

Attack Forensics


Sometimes there is an overwhelming need for evidence and post-analysis to get a deeper understanding of the attacks on the network. With Cisco aWIPS you have the option to enable forensic capture per signature. When the forensic capture knob is enabled for a signature, access points capture raw packets during the attack timeframe and send them to DNA Center, where customers can view these packet captures. The captures can then be used to analyze what is triggering the attack.

Forensic Capture

Cisco DNA Center: The eye that sees them all


Using Cisco DNA Center, one can not only configure aWIPS and customize it as needed, but also view the alarms, along with the location of the threat and the threat MAC details, all in a single pane of glass. Gone are the days when the administrator had to go through each wireless LAN controller to get this level of detail. DNA Center aggregates, correlates, and summarizes the attacks across the managed network on the unified security dashboard. In addition to current active alarms, DNA Center also stores historic data for users to view and analyze.

Rogue/aWIPS alarm dashboard

Threat 360: The who/what/when/where?


Cisco DNA Center's Threat 360 view provides a detailed view of each alarm:

1. Context of attack: information on the attacker, victim, and detecting entities.
2. Threat level: severity of the attack.
3. Location and time of the attack.

Threat 360

This kind of threat visualization has gotten our customers excited about Cisco's security solution package. Our customers love this unified dashboard with the Threat 360 view, and they are deploying DNA Center with the Rogue package across multiple geographical locations.

Source: cisco.com

Thursday, 22 April 2021

The Need for Continuous and Dynamic Threat Modeling

The trend towards accelerated application development, and regular updates to an architecture through an agile methodology, reduces the efficacy and effectiveness of point-in-time threat modeling. This recognition led us to explore and strategize ways to continuously, and dynamically, threat model an application architecture during runtime.

Today, thanks to a robust DevOps environment, developers can deploy a complex architecture within a public cloud such as Amazon Web Services (AWS) or Google Cloud Platform without requiring support from a network or database administrator. A single developer can develop code, deploy an infrastructure through code into a public cloud, construct security groups through code, and deploy an application on the resulting environment all through a continuous integration/continuous delivery (CI/CD) pipeline. While this enables deployment velocity, it also eliminates multiple checks and balances. At Cisco, we recognized the risks introduced by such practices and decided to explore strategies to continuously evaluate how an architecture evolves in production runtime to guard against architecture drift.

Dynamic threat modeling must begin with a solid baseline threat model that is done in real-time. This can in turn be monitored for architecture drift. Our approach to obtain such a real-time view is to use dynamic techniques to allow security and ops teams to threat model live environments instead of diagramming on paper or whiteboards alone.

How Does Dynamic Threat Modeling Work?

Threat modeling is the practice of identifying data flows through systems and various constructs within an architecture that exhibit a security gap or vulnerabilities. A crucial element that enables the practice of threat modeling is generating the right kind of visual representation of a given architecture in an accurate manner. This approach can differ based on context and from one team to another. At Cisco, we instead focused on elements and features that need to exist to allow a team to dynamically perform a threat modeling exercise. These elements include the ability:

◉ To transform an operational view of an architecture to a threat model

◉ To contextualize a requirement

◉ To monitor the architecture for drift based on a requirement

From Operational View to Threat Model

Numerous tools exist that can render an operational view of an architecture. However, an operational view of an architecture is not the same as a threat model. Instead, an operational view must undergo a transformation to create a threat model view of an architecture. For this to occur, the solution should at a minimum provide a way to filter and group queries within an architecture so that only relevant data is visually rendered.

As an example, consider a case where an AWS hosted public cloud offer consists of two types of S3 buckets (Figure 1). One type is deployed for customers to access directly; each customer gets their own unique S3 bucket. The other type is deployed for organization-specific internal administrative purposes. Both types of S3 buckets are identified through their AWS tags ("Customer" and "Admin" respectively). A filter-based query applied to an architecture of this type can answer questions such as "Are there S3 buckets with the tag 'Customer' or 'Admin' in this architecture?"

Figure 1. Operational Views with and Without Filtering or Grouping Applied


Even though grouping is like filtering, it differs because it allows an administrator to query an architecture with the question: “Are there S3 buckets with the Customer or Admin tag in this architecture? If so, group these assets by their tags and logically represent them by their tags” (Figure 2).

Figure 2. Operational View with Grouping Applied by Admin or Customer Tags

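The filter and group queries described above can be sketched against a plain bucket inventory. The record shapes are illustrative; a real implementation would pull this inventory from the cloud provider's APIs:

```python
from collections import defaultdict

# Hedged sketch of the two query styles used to transform an operational
# view into a threat-model view: filtering keeps only relevant assets,
# grouping additionally organizes them by tag.

def filter_buckets(inventory, tags):
    """Filter query: which S3 buckets carry one of these tags?"""
    return [b for b in inventory if b["tag"] in tags]

def group_buckets(inventory, tags):
    """Group query: same filter, but assets logically grouped by tag."""
    groups = defaultdict(list)
    for b in filter_buckets(inventory, tags):
        groups[b["tag"]].append(b["name"])
    return dict(groups)
```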

What Does it Mean to Contextualize a Requirement?


With dynamic threat modeling, contextualizing a requirement allows a team to prescribe a contextualized remediation plan for a specific area of the architecture so that it can be monitored for architecture drift. This event is the next step towards securing an architecture from specific threats at a more granular level once the appropriate base line security guardrails have been applied towards an environment.

To build on the example from above, industry-standard best practices for securing an S3 bucket prescribe configuring S3 buckets as non-public. As mentioned above, the first type of S3 bucket is offered to customers for them to access (for read or write), and each customer gets their own unique S3 bucket. The second type of S3 bucket is used for the organization's internal administrative purposes. Once the standard guardrails have been implemented for the two types of S3 buckets, the next step is to determine the type of access authorization that should be applied to each based on the purposes they serve (Figure 3).

Figure 3:


Ability to Monitor the Architecture for Drift Based on Requirements


As previously mentioned, the goal of dynamic threat modeling is to monitor the architecture that has been threat modeled in real-time for architecture drift. This should not be confused with the ability to monitor a network for vulnerabilities. To monitor for vulnerabilities, there are already numerous tools within the industry to help a DevSecOps team determine areas of risks. To monitor for architecture drift, a solution must be able to tie together a sequence of events to determine if the appropriate context exists for the events to be considered as drift. To continue our example from Figure 3, Figure 4 below outlines the areas within the S3 architecture that should be monitored for architecture drift once the contextualized requirement has been applied.

Figure 4. Monitoring Applied to Customer and Admin Buckets Grouped Based on Requirements

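Drift monitoring for this contextualized requirement can be sketched as a comparison of live bucket state against the per-tag expectations. The configuration fields are illustrative assumptions; a real monitor would read live state from the provider's APIs and correlate sequences of events:

```python
# Hedged sketch of architecture-drift detection: every tagged bucket must
# stay non-public, and the access policy expected for its tag group must
# not change. Field names are illustrative.

REQUIREMENTS = {
    "Customer": {"public": False, "access": "per-customer"},
    "Admin":    {"public": False, "access": "internal-only"},
}

def detect_drift(live_state):
    """Return (bucket, field) pairs where live state has drifted from the
    contextualized requirement for that bucket's tag group."""
    drifted = []
    for bucket in live_state:
        required = REQUIREMENTS.get(bucket["tag"])
        if required is None:
            continue  # untagged buckets fall outside this requirement
        for key, expected in required.items():
            if bucket.get(key) != expected:
                drifted.append((bucket["name"], key))
    return drifted
```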

Challenges and What the Future Holds


By enabling dynamic threat modeling, DevSecOps can continuously monitor an environment in real-time for any architecture drift. However, the following challenges must be addressed by DevSecOps:

◉ Apply better conversion techniques to transform an operational view to a threat model

◉ Develop better strategies to codify human-based contextual requirements into actual rules

◉ Drive a consistent baseline security strategy that can be evaluated based on various architectures

Security is a journey that requires influencing and enabling teams to adopt and employ best practices and controls for their architectures. By continuing to enhance this strategy and addressing the challenges mentioned above, we anticipate wide adoption and acceptance of continuous and dynamic threat modeling of live environments to monitor for any architecture drift and proactively mitigate the risks in the fast-paced world of DevSecOps.

Figure 5 illustrates what we’ve accomplished at Cisco as we strive to raise the bar on security and the trust of our customers.

Figure 5. Cisco Security Automation for DevSecOps Features


Source: cisco.com

Wednesday, 21 April 2021

Building Trust in Your Access Network

How do you know for sure that a router in your network has not been altered since you deployed it? Wouldn’t it be great if you could cryptographically challenge your router to provide its unique identity? In addition, what if the underlying OS could provide a secure mechanism to detect if the software had been tampered with during boot time and runtime?

Networking equipment manufacturers are seeing an increase in supply chain attacks, which means communication service providers (CSP) need tools that can detect the replacement of critical components such as CPU/NPU. Software security features are insufficient in detecting and protecting against these attacks if the underlying hardware has been compromised. To completely trust the device, CSPs need a chain of trust that is preserved in hardware manufacture, software development and installation, procurement, and live deployment within their network.

With 5G deployments gaining traction, routers are now increasingly deployed in distributed architectures (read as remote locations) and depended on as critical infrastructure. Cisco’s trustworthy platforms ensure customers can validate the authenticity of their devices in both hardware and software to help eliminate malicious access to the network and significantly improve the CSP’s security posture.

To understand how we do this, let’s go over the basic security building blocks included in the NCS 500 platforms (as well as others) that enable us to deliver the following aspects of trustworthy platforms:

◉ Hardware integrity

◉ Boot integrity

◉ Runtime integrity

◉ Operational visibility of your trustworthy network

Root of Trust in Hardware

Incorporating the latest software security features is immaterial unless the underlying hardware itself is trustworthy. To provide this strong foundation of trust, the Cisco NCS 540 and NCS 560 routers incorporate a tamper-resistant Trust Anchor module (TAm). It protects the entire secure boot process, from components to operating system loading, establishing a chain of trust.


The hardware trust anchor module primarily provides the following set of features.

◉ Microloader needed for the secure boot process

◉ Secure Unique Device Identifier (SUDI) for device identification

◉ On-chip storage for encryption keys

◉ UEFI compliant DB for key management

◉ On-chip registers (PCRs) to record boot and runtime measurements

◉ On-chip DB to secure the hash of CPU for Chip Guard feature


Measuring & Verifying Trust


Trust, unlike security, is tangible. It can be measured and verified by an entity external to the device. The NCS 500 series routers come with Boot Integrity Visibility and Chip Guard features to ensure customers can validate the trustworthiness of the device and that it hasn't been tampered with during boot time or subjected to supply-chain attacks. The Trust Anchor module captures measurements recorded during the secure boot process, and these measurements can later be retrieved to validate that the boot process hasn't been tampered with.
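The measure-and-verify idea can be sketched with a hash-extend register, mirroring the general PCR-extend technique: each boot stage extends the register with the hash of the next component, so the final value commits to the entire boot sequence. The TAm's actual formats and algorithms may differ:

```python
import hashlib

# Hedged sketch of boot-measurement recording. An external verifier can
# replay the known-good component sequence and compare its result to the
# register value retrieved from the device.

def extend(register, component):
    """register' = SHA-256(register || SHA-256(component))"""
    component_hash = hashlib.sha256(component).digest()
    return hashlib.sha256(register + component_hash).digest()

def measure_boot(components):
    """Replay a boot sequence and return the final register value."""
    register = b"\x00" * 32  # measurement registers start zeroed
    for component in components:
        register = extend(register, component)
    return register
```

Because the extend operation is one-way and order-sensitive, substituting or reordering any boot component yields a different final value, which is what makes tampering detectable after the fact.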

With increasing supply-chain attacks, components like CPUs are being replaced with compromised chips that contain Trojan programs. At boot-up, the NCS 500 series can counter these types of attacks because the Chip Guard feature utilizes stored Known Good Values (KGV) within the TAm to validate all components. If a KGV value does not match, then the hardware boot will fail, and an alert can be sent to the network monitoring tools.

Lastly, the Secure Unique Device Identifier (SUDI) that gets programmed inside the Trust Anchor module during the manufacturing process ensures that the router can be cryptographically challenged at any time during its operational lifetime to validate its identity. This way customers can ensure that they are still talking to the same router that was deployed in their network months or even years ago.

In short, the features of SUDI, Chip Guard, and Cisco Secure Boot enable customers to verify the integrity of the router over its entire lifetime.

Trust at Runtime


Moving to runtime protections and establishing trust in software, NCS 500 routers come with the latest IOS XR Operating System that includes a host of security features. Starting with SELinux policies that provide mandatory access controls for accessing files, it also supports the Linux Integrity Measurement Architecture (IMA). With these features, customers can now establish trust in software by querying the runtime measurements from a router at any point in time. The router continuously gathers file hashes for all the files being loaded and executed. These measurements can be queried by an external entity to compare against the expected Known Good Values published by Cisco. To ensure the authenticity of these remotely attested measurements, they are signed with the device’s unique SUDI private key.
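The runtime comparison against Known Good Values can be sketched as follows; the measurement-list structure here is illustrative, and real IMA measurement records carry more fields and are signed with the device's SUDI key:

```python
import hashlib

# Hedged sketch of runtime-integrity verification: the router reports a
# hash for each executed file, and an external verifier checks the report
# against the vendor-published Known Good Values (KGV).

def sha256_hex(data):
    return hashlib.sha256(data).hexdigest()

def verify_measurements(reported, known_good):
    """Return the files whose reported hash does not match the KGV."""
    return sorted(
        path for path, digest in reported.items()
        if known_good.get(path) != digest
    )
```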

With these foundational blocks of trust established in hardware, both during boot time and runtime, we are now able to provide additional features like trusted path routing that can help extend trust further into the network. The trust status of a device, the trusted routing path, and the ability to validate software updates as genuine per the manufacturer's specifications are valuable assets included in the Crosswork Trust Insights tool, which can provide proof of the network's trustworthiness.

Source: cisco.com

Tuesday, 20 April 2021

Cisco DNA Center smooths network operations


As we plan for a safe return to Cisco offices around the world, we are experiencing a large increase in the types and numbers of devices connecting to our network. This means that our teams need to manage an increasingly complex ecosystem more efficiently than ever before.

Like many IT departments, we are scrambling to keep up with these new network demands. In fact, according to one recent study of various enterprises, 43 percent of surveyed IT and network professionals said they struggle to find time to work on strategic business initiatives, and 42 percent spend too much time troubleshooting the network.


As a result, many IT teams lack the time needed both to grow their networks and take on new projects that could set their companies apart from the competition.

To help address these challenges, our Customer Zero team, a part of Cisco IT, deployed the Cisco DNA Center controller as part of a multi-site initiative to better automate and maintain our campus and branch networks.

Cisco DNA Center delivers centralized command and control

With the Cisco DNA Center, we can take charge of our network, optimize our network investments, and respond to changes and challenges faster and more intelligently than we could before.

Cisco DNA Center provides a real-time dashboard for managing and controlling our enterprise network. It also automates provisioning and change management, checks compliance against policies, and captures asset logs that can be analyzed for troubleshooting, problem resolution, and predictive maintenance.

Assuring optimal network performance

Cisco DNA Center's Assurance capabilities allow us to quantify network availability and risk based on analytics. It accomplishes this by enabling every point on the network to become a sensor. Cisco DNA Center collects data from 17 different network sources – including NetFlow, SNMP, syslog, streaming telemetry, and more – so that we can view network issues from many different angles and contexts. It sends continuous streaming telemetry on application performance and user connectivity in real time, then uses artificial intelligence (AI) and machine learning to make sense of the data.

Cisco DNA Center’s clean, simple dashboards show overall network status and flag issues. In addition, guided remediation automates the process of issue resolution and performance enhancement, ensuring optimal network user experiences and less troubleshooting. It allows us to resolve network issues in minutes instead of hours — before they become problems. Cisco DNA Center even lets us go back in time to see the cause of a network issue, instead of trying to re-create the issue in a lab.

How Cisco DNA Assurance operates


Making an impact for Customer Zero


By implementing emerging technologies in Cisco’s IT production environments, Customer Zero provides an IT operator’s perspective as Cisco develops integrated solutions, best practices, and accompanying value cases to drive accelerated adoption.

As part of our mission to use Cisco products in our own real-world environment, the Customer Zero team has deployed Cisco DNA Center as part of a multi-site (six buildings) Cisco Software Defined Access (SD-Access) fabric on our San Jose campus. The solution has already yielded encouraging pilot-test results in four areas of the product: Network Health Dashboard, Client Health Dashboard, Network Insights & Trends (AI-driven), and Wireless Sensor Dashboard. Let’s take a closer look at how the last of these, Cisco DNA Center’s Wireless Sensor capability, is helping us improve the process of making network changes.

Real-world use case: network changes with software upgrades


For any required network changes, such as software upgrades, Cisco DNA Center collects information and insights from wireless sensors. The results are then displayed on a single dashboard, allowing our teams to monitor and detect issues more easily.

Wireless sensors behave as wireless clients. They connect to our SSIDs and run network tests, much like an on-site engineer would do. They have the added intelligence of reporting their findings back to Cisco DNA Center, where the data from all sensors is compiled into one dashboard. Sensors run their tests automatically and periodically – after initial configuration, there is no need to touch the sensors again.

Cisco DNA Center’s Wireless Sensor capability has provided five key benefits for Customer Zero:

1. Reduced time to complete change requests. After changes occur in the network, we check our sensors to ensure they – and, ultimately, end users – have no problem connecting to the SSIDs. Consequently, we can close the change window sooner.

2. Improved ease of use and productivity for IT teams monitoring the network. Instead of having to perform checks in multiple locations, we can monitor the health of the network in a single place. This is true both when following up after network changes (change requests) and for daily monitoring of the infrastructure’s health.

3. Reduced risk and improved confidence. Our engineers use the sensor dashboard to systematically check wireless client health. We gain confidence in the success of our change windows and can assertively close them without worrying about lingering issues.

4. Reduced costs. Because wireless sensors tell us about the real-time health of our network, we feel more confident about conducting changes during business hours. With the ability to perform upgrades in production during business hours, we expect to reduce costs associated with outsourcing vendors who charge higher rates for off-hours activities.

5. Increased adoption of NetDevOps (agile) capabilities. The ability to make changes in production while leveraging critical data about end-user experience is helping to change our team members’ mindsets. They’ve become more assertive about embracing NetDevOps continuous improvement / continuous upgrade changes – which is also contributing to improved skillsets.

Our team’s implementation of Cisco DNA Center confirmed the solution’s ability to save time and costs, reduce risk, improve ease of use and confidence, and build stronger skillsets.

Source: cisco.com

Sunday, 18 April 2021

Bring Your Broadband Network Gateways into the Cloud


Internet growth remains unabated. Average fixed broadband speeds are projected to peak at 110+ Mbps, and the number of devices connected to IP networks is expected to balloon to 29+ billion (more than three times the global population) by 2023. If anything, this growth could be even stronger, as the ongoing pandemic has made the internet more critical than ever to our daily lives, defining a new normal for humanity: video conferences have replaced physical meetings, virtual “happy hours” with coworkers and friends have replaced get-togethers, and online classrooms have immersed children in new methods of learning.

Shouldering the weight of these new digital experiences, communication service providers are experiencing a significant increase in traffic, as well as a change in traffic patterns, while average revenue per user (ARPU) trends flat to down. They need to reimagine their network architectures to deliver wireline services in a more cost-efficient manner.

Responsible for critical subscriber management functions and a key component of any wireline services architecture, the broadband network gateway (BNG) has historically been placed at centralized edge locations. Unfortunately, these locations don’t provide the best balance between the performance requirements of the user plane and the control plane. The user plane (also known as the forwarding plane) scales with the bandwidth per subscriber, while the control plane scales with the number of subscriber sessions and the services provided to end users. In most deployments, the result is that either the control plane or the user plane ends up over- or underutilized.

For years, the limited number of services per end user and moderate bandwidth per user allowed network designers to roll out BNG devices that supported both the user plane and the control plane on the same device, because minimal optimization was required. But today, with the exponential growth in traffic, subscribers, and services fueled by consumers’ appetite for new digital experiences, the traditional BNG architecture faces some severe limitations.

Given the changing needs and requirements, it is no longer possible to optimize the user plane and control plane when they are hosted on the same device. Nor does the model scale: supporting bandwidth or subscriber growth means ever more BNG deployments, driving up cost and complexity. It is time to entirely rethink the BNG architecture.

Cloud Native Broadband Network Gateway

To overcome these operational challenges and right-size the economics, Cisco has developed a cloud native BNG (cnBNG) with control and user plane separation (CUPS) – an important architectural shift to enable a more agile, scalable, and cost-efficient network.

This new architecture simplifies network operations and enables independent placement, scaling, and life cycle management of the control plane and the user plane. With the CUPS architecture, the control plane can be placed in a centralized data center, scaled as needed, and used to manage multiple user plane instances. A cloud native control plane provides agility and, through advanced automation, speeds up the introduction of new services. Communication Service Providers (CSPs) can now roll out leaner user plane instances (without control plane related subscriber management functions) closer to end users, guaranteeing latency and avoiding the unnecessary and costly transport of bandwidth-hungry services over core networks. They can thereby place Content Distribution Networks (CDNs) deeper into the network, enabling peering offload at the edge of the network and delivering a better end-user experience.

There are other benefits as well. A cloud native infrastructure provides cost-effective redundancy models that prevent cnBNG outages, minimizing the impact on broadband users. A cloud native control plane also lets communication service providers continuously integrate new features without impacting the user plane, which remains isolated from these changes. As a result, operations are eased, thanks to a centralized control plane with well-defined APIs that facilitate insertion into OSS/BSS systems.

When compared to a conventional BNG architecture, Cisco cloud native BNG architecture brings significant benefits:

1. A clean slate Fixed Mobile Convergence (FMC) ready architecture as the control plane is built from the ground-up with cloud-native tenets, integrating the subscriber management infrastructure components across domains (wireless, wireline, and cable)

2. Multiple levels of redundancy both at the user plane and control plane level

3. Optimized user plane choices for different deployment models at pre-aggregation and aggregation layers for converged services

4. Investment protection as an existing physical BNG can be used as user planes for cnBNG

5. Granular subscriber visibility using streaming telemetry and mass-scale automation, thanks to extensive Yang models and KPIs streamed via telemetry, enabling real-time API interaction with back-end systems

6. A pay-as-you-grow model that allows customers to purchase user plane network capacity as needed

Analysis has shown that these benefits translate into up to 55% Total Cost of Ownership (TCO) savings.

An Architecture Aligned to Standards

This past June, the Broadband Forum published a technical report on Control and User Plane Separation for a disaggregated BNG – the TR-459 – that notably defines the interfaces and the requirements for both control and user planes. Three CUPS interfaces are defined – the State Control Interface (SCi), the Control Packet Redirect Interface (CPRi), and the Management Interface (Mi).

With convergence in mind, the Broadband Forum has selected the Packet Forwarding Control Protocol (PFCP), defined by 3GPP for CUPS, as the SCi protocol. It is a well-established protocol, especially for subscriber management. While TR-459 is not yet fully mature, Cisco’s current cnBNG implementation is already aligned with it.

On the Road to Full Convergence

Historically, wireline, wireless, and cable subscriber management solutions have been deployed as siloed, centralized, monolithic systems. Now, a common cloud native control plane can work with wireline, wireless, and cable access user planes, paving the way to a universal, 5G core, converged subscriber management solution capable of delivering hybrid services. And the Network Functions (NFs) that are part of the common cloud native control plane not only share the subscriber management infrastructure, they also provide a consistent interface for policy management, automation, and service assurance systems.


Moving forward, CSPs can envision a complete convergence of policy layer and other north-bound systems, all the way up to the communication service provider’s IT systems.

With a converged model in place, customers can consume services and applications from the access technology of their choice, with a consistent experience. And communication service providers can pivot to a model with unified support services and monitoring/activation systems, while creating sticky service bundles; as more end-user devices are tied to a single service, customer retention increases.

Cisco is uniquely positioned to help customers embrace this new architecture with a strong end-to-end ecosystem of converged subscriber management across mobile, wireline, and cable, in addition to a fully integrated telco cloud stack across compute, storage, software defined fabric, and cloud automation.

Source: cisco.com

Saturday, 17 April 2021

100-490 RSTECH Free Exam Questions & Answers | CCT Routing and Switching Exam Syllabus


Cisco RSTECH Exam Description:

The Supporting Cisco Routing and Switching Network Devices v3.0 (RSTECH 100-490) is a 90-minute, 60-70 question exam associated with the Cisco Certified Technician Routing and Switching certification. The course Supporting Cisco Routing and Switching Network Devices v3.0 helps candidates prepare for this exam.


Your workforce is ready – but is your workplace?


We’re heading back to the office!!

It won’t happen overnight – but the signs are increasingly positive that we are on our way back. Some companies, like Cisco and Google, have begun encouraging a phased return to the office, once the situation permits. Personally, I can’t wait to be in the same physical space as my colleagues, as well as meeting our customers face-to-face.

Like most of you, I desire the flexibility to choose where I work. Based on our recent global workforce survey, only 9% expect to be in the office 100% of the time. That means IT will need to deliver a consistently secure and engaging experience across both in-office and remote work environments.

Figure 1. The future of work is hybrid

Networking teams need to prepare for the hybrid workplace.


Some people tend to be more prepared than others. Personally, I like to err on the side of being over-prepared. Most network professionals I know are of a similar mindset. So, what does that mean when it comes to the return to the office?

◉ Employee concerns: From our global workplace survey, we learned that 95% of workers are uncomfortable about returning due to fears of contracting COVID-19. Leading the concerns, at 64%, is not wanting to touch shared office devices, closely followed by concerns over riding in a crowded elevator (62%) and sharing a desk (61%).

◉ Business concerns: While businesses need to provide safe and secure work environments, requiring new efforts and solutions, they must also try to mitigate costs and capture savings. One primary approach is to use office space more efficiently. Our survey results show that 53% are already looking at options to reduce office space, while 96% indicate they need intelligent workplace technology to improve work environments.

Where to start your return to the office


So, what’s on your mind as we head back to the office? According to IDC, the biggest permanent networking changes being made as a result of COVID are the integration of networking and security management (32%), improved support for remote workers (30%), and improved network automation, visibility, and analytics (28%). So how can you address these priorities as we head back to the office?


If your car had been sitting unused in the driveway for a year, the first thing you’d do is get it serviced. Likewise, your campus and branch networks need to be put through their paces. Utilization is minimal right now, so it is a great time to see what improvements can be made.

◉ Reimagine Connections: Digital business begins with connectivity, so you can’t take it for granted. You can start by making sure your wired and wireless network can support an imminent return to work. With hybrid work, everyone will have video to the desktop. Will your network performance deliver the experience users love? And make sure it is set up to enhance your employees’ safety and work experience with social density and proximity monitoring, workspace efficiency, and smart building IoT requirements.

◉ Reinforce Security: This is the time to automate security policy management, micro-segmentation, and zero-trust access so that any device or user is automatically authenticated and authorized access only to those resources it’s approved for.

◉ Redefine IT Experience: Make it easy on your team and your business. With automation and AI-enabled analytics technologies, AIOps is now a reality for network operations too. All the tools are available to achieve pre-emptive troubleshooting and remediation from “user/device-to-application” – wherever either is located.

Figure 2. Choose your access networking journey

Here are some valuable tips from our Cisco networking team, which is making the necessary preparations for our own Cisco campuses.


According to our survey, 58% of workers will continue to work from home at least 8 days a month. That means you need to keep investing in optimizing the experience for those workers, many of whom still complain that their work experience is not optimal. According to IDC, 55% of work-from-home users report multiple problems a week, while 50% report problems with audio on video conferences.

◉ Work from Home: Deliver plug-and-play provisioning and policy automation that allows your remote employees to easily and securely connect to the corporate network without setting up a VPN.

◉ Home Office: For those who want to turn their home network into a “branch of one,” you can create a zero-trust fabric with end-to-end segmentation and always-on network connectivity that provides an enhanced multicloud application experience.


The evolution to a hybrid workforce, together with the accelerated move to the cloud and edge applications, has led to the perfect storm that demands a new approach for IT to deliver a secure user experience regardless of where users and applications are located. This new approach is offered by a combination of SD-WAN and cloud security technologies that has been termed Secure Access Service Edge, or SASE. It’s estimated that 40% of enterprises will have explicit strategies to adopt SASE by 2024.

◉ SD-WAN: Out of the multiple ways to get started on the path to a full SASE architecture, I would propose that SD-WAN is a wise choice. It offers a secure, mature, and efficient way to access both SaaS and IaaS environments, with multiple deployment and security options.

◉ SASE: Evolving to a full SASE architecture combines networking and security functions in the cloud to deliver secure access to applications wherever users work. Combined with SD-WAN, this includes security services such as firewall as a service (FWaaS), secure web gateway (SWG), cloud access security broker (CASB), and zero trust network access (ZTNA).

Source: cisco.com    

Friday, 16 April 2021

Comparing Lower Layer Splits for Open Fronthaul Deployments

Introduction

The transition to open RAN (Radio Access Network) based on interoperable lower layer splits is gaining significant momentum across the mobile industry. However, choosing where best to split the open RAN is a complex compromise between radio unit (RU) simplification, support for advanced coordinated multipoint RF capabilities, and the consequential requirements on the fronthaul transport, including limitations on transport delay budgets as well as bandwidth expansion. To help compare alternative options, the different splits have been assigned numbers, with higher numbers representing splits “lower down” in the protocol stack, meaning less functionality is deployed “below” the split in the RU. Lower layer splits occur below the medium access control (MAC) layer in the protocol stack, with options including Split 6, between the MAC and physical (PHY) layers; Split 7, within the physical layer; and Split 8, between the physical layer and the RF functionality.

Figure 1: Different Lower Layer Splits in the RAN Protocol Stack

This paper compares the two alternatives for realizing the lower layer split, the network functional application platform interface (nFAPI) Split 6 as defined by the Small Cell Forum (SCF) and the Split 7-2x as defined by the O-RAN Alliance.

Small Cell Splits


The Small Cell Forum took the initial lead in defining a multivendor lower layer split, taking its FAPI platform application programming interface (API) that had been used as an informative split of functionality between small cell silicon providers and the small cell RAN protocol stack providers, and enabling this to be “networked” over an IP transport. This “networked” FAPI, or nFAPI, enables the Physical Network Function (PNF) implementing the small cell RF and physical layer to be remotely located from the Virtual Network Function (VNF) implementing the small cell MAC layer and upper layer RAN protocols. First published by the SCF in 2016, the specification of the MAC/PHY split has since been labelled as “Split 6” by 3GPP TR38.801 that studied 5G’s New Radio access technology and architectures.

The initial SCF nFAPI program delivered important capabilities that enabled small cells to be virtualized, compared with the conventional macro-approach that at the time advocated using the Common Public Radio Interface (CPRI) defined split. CPRI had earlier specified an interface between a Radio Equipment Control (REC) element implementing the RAN baseband functions and a Radio Equipment (RE) element implementing the RF functions, to enable the RE to be located at the top of a cell tower and the REC to be located at the base of the cell tower. This interface was subsequently repurposed to support relocation of the REC to a centralized location that could serve multiple cell towers via a fronthaul transport network.

Importantly, when comparing the transport bandwidth requirements for the fronthaul interface, nFAPI/Split 6 does not significantly expand the bandwidth required compared to more conventional small cell backhaul deployments. Moreover, just like the backhaul traffic, the nFAPI transport bandwidth is able to vary according to served traffic, enabling statistical multiplexing to be used over the fronthaul IP network. This can be contrasted with the alternative CPRI split, also referred to as “Split 8” in TR38.801, which requires bandwidth expansion of up to 30-fold and a constant bit rate connection, even if no traffic is being served in the cell.

HARQ Latency Constraints


Whereas nFAPI/Split 6 offers significant benefits over CPRI/Split 8 in terms of bandwidth expansion, both splits are below the hybrid automatic repeat request (HARQ) functionality in the MAC layer that is responsible for constraining the transport delay budget for LTE fronthaul solutions. Both LTE-based Split 6 and Split 8 have a common delay constraint equivalent to 3 milliseconds between when up-link data is received at the radio to the time when the corresponding down-link ACK/NAK needs to be ready to be transmitted at the radio. These 3 milliseconds need to be allocated to HARQ processing and transport, with a common assumption being that 2.5 milliseconds are allocated to processing, leaving 0.5 milliseconds allocated to round trip transport. This results in the oft-quoted delay requirement of 0.25 milliseconds for one way transport delay budget between the radio and the element implementing the MAC layer’s up-link HARQ functionality.
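The budget arithmetic above can be checked with a quick calculation (a sketch; the 2.5 millisecond processing allocation is the common assumption quoted above, not a normative figure):

```python
# LTE HARQ delay budget for lower layer splits (Split 6 and Split 8).
# All values in milliseconds.
HARQ_BUDGET_MS = 3.0   # UL data received at radio -> DL ACK/NAK ready at radio
PROCESSING_MS = 2.5    # assumed allocation for HARQ processing

round_trip_transport = HARQ_BUDGET_MS - PROCESSING_MS  # 0.5 ms for transport
one_way_transport = round_trip_transport / 2           # 0.25 ms each way

print(f"round trip transport budget: {round_trip_transport} ms")
print(f"one way transport budget:    {one_way_transport} ms")
```

This recovers the oft-quoted 0.25 millisecond one-way figure directly from the 3 millisecond HARQ constraint.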

The Small Cell Forum acknowledges such limitations when using its nFAPI/Split 6. Because the 0.25 millisecond one-way transport budget may severely constrain nFAPI deployments, SCF defines the use of HARQ interleaving, which uses standardized signaling to defer HARQ buffer emptying, enabling higher latency fronthaul links to be accommodated. Although HARQ interleaving buys additional transport delay budget, the operation has a severe impact on single UE throughput: as soon as the delay budget exceeds the constraint described above, the per-UE maximum throughput immediately decreases by 50%, with further decreases as delays in the transport network increase.

Importantly, 5G New Radio does not implement the same synchronous up-link HARQ procedures and therefore does not suffer the same transport delay constraints. Instead, the limiting factor constraining the transport budget in 5G fronthaul systems is the operation of the windowing during the random access procedure. Depending on the operation of other vendor specific control loops, e.g., associated with channel estimation, this may enable increased fronthaul delay budgets to be used in 5G deployments.

O-RAN Alliance


The O-RAN Alliance published its “7-2x” Split 7 specification in February 2019. All Split 7 alternatives offer significant benefits over the legacy CPRI/Split 8: they avoid Split 8’s requirement to scale fronthaul bandwidth on a per antenna basis, resulting in significantly lower fronthaul transport bandwidth requirements, and they introduce transport bandwidth requirements that vary with the traffic served in the cell. Moreover, when compared to Split 6, the O-RAN lower layer Split 7-2x supports all advanced RF combining techniques, including the higher order multiple-input, multiple-output (MIMO) capability that is viewed as a key enabling technology for 5G deployments, as shown in Table 1, which contrasts Split 6 “MAC/PHY” with Split 7 “Split PHY” based architectures.

Table 1: Comparing Advanced RF Combining Capabilities of Lower Layer Splits

However, instead of supporting individual transport channels as the nFAPI interface does, Split 7-2x defines the transport of frequency-domain IQ defined spatial streams, or MIMO layers, across the lower layer fronthaul interface. The use of frequency-domain IQ symbols can lead to a significant increase in fronthaul bandwidth when compared to the original transport channels. Figure 2 illustrates the bandwidth expansion due to Split 7-2x occurring “below” the modulation function, where the original 4 bits to be transmitted are expanded to over 18 bits after 16-QAM modulation, even when using a block floating point compression scheme.


Figure 2: Bandwidth Expansion with Block Floating Point Compressed Split 7-2x

The bandwidth expansion is a function of the modulation scheme, with higher expansion required for lower order modulation, as shown in Table 2.

Table 2: Bandwidth Expansion for Split 7-2x with Block Floating Point Compression compared to Split 7-3
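As an illustration of how the expansion scales with modulation order, the per-PRB arithmetic can be sketched as follows. The 9-bit I/Q mantissas with one shared 4-bit exponent each per PRB are an assumed (though common) block floating point configuration, so the resulting ratios are illustrative rather than normative:

```python
# Illustrative Split 7-2x (block floating point) expansion over Split 7-3.
# Assumption: 9-bit I and 9-bit Q mantissa per resource element, plus one
# 4-bit shared exponent each for I and Q per 12-subcarrier PRB.
RES_PER_PRB = 12

def bfp_expansion(bits_per_symbol: int) -> float:
    """Ratio of frequency-domain IQ bits to modulated payload bits per PRB."""
    iq_bits = RES_PER_PRB * (9 + 9) + 2 * 4        # 224 bits per PRB
    payload_bits = RES_PER_PRB * bits_per_symbol   # e.g. 4 bits/RE for 16-QAM
    return iq_bits / payload_bits

for name, bits in [("QPSK", 2), ("16-QAM", 4), ("64-QAM", 6), ("256-QAM", 8)]:
    print(f"{name:8s} {bfp_expansion(bits):.2f}x")
```

For 16-QAM this gives roughly 4.7x, consistent with Figure 2’s expansion of 4 payload bits to more than 18 bits of frequency-domain IQ, and for 256-QAM roughly 2.33x, and the expansion grows as the modulation order falls.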

Such bandwidth expansion was one of the reasons that proponents of the so-called Split 7-3 advocated a split occurring “above” the modulation/demodulation function. To address such issues, and the possible fragmentation of different Split 7 solutions, the O-RAN Alliance lower layer split includes the definition of a technique termed modulation compression. The operation of modulation compression on a 16-QAM modulated waveform is illustrated in Figure 3. The conventional Split 7-2 modulated constellation diagram is shifted so that the modulation points lie on a grid, which allows the I and Q components to be represented as binary values instead of floating point numbers. Additional scaling information must be signalled across the fronthaul interface to recover the original modulated constellation points in the RU, but this only needs to be sent once per data section.

Figure 3: User Plane Bandwidth Reduction Using Modulation Compression with Split 7-2x

Because modulation compression requires the in-phase and quadrature points to be perfectly aligned with the constellation grid, it can only be used in the downlink. However, when used, it decreases the bandwidth expansion ratio of Split 7-2x, where the expansion compared to Split 7-3 is now due only to the additional scaling and constellation shift information. This information is encoded as 4 octets sent with every data section, meaning the bandwidth expansion ratio varies with how many Physical Resource Blocks (PRBs) are included in each data section. This value can range from a single PRB up to 255 PRBs, with Table 3 showing that the corresponding Split 7-2x bandwidth expansion ratio over Split 7-3 is effectively unity when operating with large data sections.
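The 4-octet-per-section overhead arithmetic can be sketched as follows. This is a simplification that assumes each data section carries one OFDM symbol’s worth of resource elements per PRB; real O-RAN section sizing has more degrees of freedom:

```python
# Illustrative Split 7-2x (modulation compression) overhead vs Split 7-3.
# Assumption: a data section carries n_prb PRBs of 12 resource elements
# for one OFDM symbol, plus 4 octets of scaling/shift information.
SECTION_OVERHEAD_BITS = 4 * 8  # 4 octets per data section

def mod_comp_expansion(n_prb: int, bits_per_symbol: int) -> float:
    """Expansion ratio of modulation-compressed data over raw payload bits."""
    payload_bits = n_prb * 12 * bits_per_symbol
    return 1 + SECTION_OVERHEAD_BITS / payload_bits

print(mod_comp_expansion(10, 8))   # 256-QAM, 10 PRBs per section
print(mod_comp_expansion(255, 8))  # 256-QAM, 255 PRBs per section
```

With 10 PRBs per section at 256-QAM the ratio is about 1.033, and at the 255-PRB maximum it is within 0.2% of unity, matching the “effectively unity” observation for large data sections.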

Table 3:  Bandwidth Expansion for Split 7-2x with Modulation Compression compared to Split 7-3

Note that even though modulation compression is only applicable to the downlink (DL), the shift of new frequency allocations to Time Division Duplex (TDD) enables a balancing of effective fronthaul throughput between uplink (UL) and downlink. For example, in LTE, 4 of the 7 possible TDD configurations have more slots allocated to downlink traffic, compared to 2 possible configurations that have more slots allocated to the uplink. Using a typical 12-to-6 DL/UL configuration, with 256-QAM and 10 PRBs per data section, the overall balance of bitrates for modulation compression in the downlink and block floating point compression in the uplink will be (1.03 x 12) to (2.33 x 6), or 12.40:13.98, i.e., a relatively balanced link in terms of overall bandwidth.
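Those DL/UL balance figures can be approximately reproduced from the two compression ratios. The ratios here are illustrative values (4-octet section overhead for modulation compression; 9-bit block floating point with shared per-PRB exponents for the uplink), so treat this as a sanity check rather than an exact reproduction of the quoted numbers:

```python
# TDD fronthaul balance check: 12-to-6 DL/UL slot split,
# 256-QAM, 10 PRBs per data section.
dl_ratio = 1 + 32 / (10 * 12 * 8)    # modulation compression, ~1.033
ul_ratio = (12 * 18 + 8) / (12 * 8)  # block floating point, ~2.33

dl_weight = dl_ratio * 12  # downlink share of fronthaul bandwidth
ul_weight = ul_ratio * 6   # uplink share of fronthaul bandwidth

print(f"DL {dl_weight:.2f} : UL {ul_weight:.2f}")
```

This yields roughly 12.4 : 14.0, in line with the approximately balanced 12.40:13.98 figure quoted above (the small difference comes from rounding the uplink ratio to 2.33 before multiplying).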

A more comprehensive analysis by the O-RAN Alliance has examined control and user-plane scaling requirements for Split 7-2x with modulation compression and compared the figures with those for Split 7-3. When taking into account other overheads, this analysis indicated that the difference in downlink bandwidth between Split 7-3 and Split 7-2x with Modulation Compression was estimated to be around 7%. Using such analysis, it is evident why the O-RAN Alliance chose not to define a Split 7-3, instead advocating a converged approach based on Split 7-2x that can be used to address a variety of lower layer split deployment scenarios.

Comparing Split 7-2x and nFAPI


Material from the SCF clearly demonstrates that, in contrast to Split 7, its nFAPI/Split 6 approach is challenged in supporting the massive MIMO functionality that is viewed as a key enabling technology for 5G deployments. However, massive MIMO is most applicable to outdoor macro-cellular coverage, where it can be used to handle high mobility and suppress cell-edge interference. Hence, there may be a subset of 5G deployments where massive MIMO support is not required, so let’s compare the other attributes.

With both O-RAN’s Split 7-2x and SCF’s nFAPI lower layer split occurring below the HARQ processing in the MAC layer, the two are constrained by exactly the same delay requirements as they relate to LTE HARQ processing and fronthaul transport budgets. Both permit the fronthaul traffic load to match the served cell traffic, enabling statistical multiplexing to be used within the fronthaul network. And both support transport over a packet network between the Radio Unit and the Distributed Unit.

The managed object for the SCF’s Physical Network Function includes the ability for a single Physical Network Function to support multiple PNF Services. A PNF service can correspond to a cell, meaning that a PNF can be shared between multiple operators, whereby the PNF operator is responsible for provisioning the individual cells. This provides a foundation for implementing Neutral Host. More recently, the O-RAN Alliance’s Fronthaul Working Group has approved a work item to enhance the O-RAN lower layer split to support a “shared O-RAN Radio Unit” that can be parented to DUs from different operators, thus facilitating multi-operator deployment.

Both SCF and O-RAN Split 7-2x solutions have been influenced by the Distributed Antenna System (DAS) architectures that are the primary solution for bringing the RAN to indoor locations. The SCF leveraged the approach to DAS management when defining its approach to shared PNF operation. In contrast, O-RAN’s Split 7-2x has standardized enhanced “shared cell” functionality where multiple RUs are used in creating a single cell. This effectively uses the eCPRI based fronthaul to replicate functionality normally associated with digital DAS deployments.

Comparing fronthaul bandwidth requirements, it’s evident that the 30-fold bandwidth expansion of CPRI was one of the main reasons for SCF to embark on its nFAPI specification program. However, the above analysis highlights how O-RAN has delivered important capabilities in Split 7-2x to limit the necessary bandwidth expansion and to avoid fragmenting the lower layer split market between alternative split-PHY approaches. Hence, the final aspect to compare is how much the bandwidth expands when going from Split 6 to Split 7-2x. Figure 1 illustrates that the bandwidth expansion between Split 6 and Split 7-3 is due to the operation of channel coding. With O-RAN having already estimated that Split 7-3 offers a 7% bandwidth saving compared to Split 7-2x with Modulation Compression, we can use the channel coding rate to estimate the bandwidth expansion between Split 6 and Split 7-2x. Table 4 uses typical LTE coding rates for 64QAM modulation to calculate the bandwidth expansion due to channel coding, and combines this with the additional 7% attributable to Modulation Compression to estimate the differences in required bandwidth. The table shows that the bandwidth difference between nFAPI/Split 6 and Split 7-2x is a function of the channel coding rate: as high as 93% for 64QAM with a 1/2 rate code, and as low as 16% for 64QAM with an 11/12 rate code.

Table 4: Example LTE 64QAM Channel Coding Bandwidth Expansion
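The arithmetic behind this kind of estimate can be sketched as follows. This is an assumed reconstruction, not the article’s exact Table 4 methodology: carrying coded bits instead of information bits expands the load by roughly the inverse of the coding rate, and the ~7% Modulation Compression delta relative to Split 7-3 is simply added on top. This additive combination reproduces the 16% figure quoted for the 11/12 rate code; the article may combine the factors differently at lower code rates.

```python
# Rough Split 6 -> Split 7-2x fronthaul bandwidth expansion estimate.
# Assumed reconstruction of the Table 4 arithmetic, not the article's
# exact spreadsheet.
from fractions import Fraction

MOD_COMP_DELTA = 0.07  # O-RAN's estimated Split 7-2x vs Split 7-3 delta


def coding_expansion(code_rate: Fraction) -> float:
    """Extra bandwidth from carrying coded bits rather than info bits:
    a rate-r code emits 1/r coded bits per information bit."""
    return float(1 / code_rate) - 1.0


def split6_to_split72x(code_rate: Fraction) -> float:
    """Approximate total Split 6 -> Split 7-2x expansion (additive
    combination of coding expansion and the Modulation Compression delta)."""
    return coding_expansion(code_rate) + MOD_COMP_DELTA


for rate in (Fraction(3, 4), Fraction(11, 12)):
    extra = split6_to_split72x(rate)
    print(f"64QAM, rate {rate}: ~{extra:.0%} more fronthaul bandwidth")
```

For the 11/12 rate code this yields roughly 9% coding expansion plus 7%, i.e. about 16%, matching the low end of the range the article quotes.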

Whereas the above analysis indicates that the cost of implementing channel coding above the RU in Split 7-2x is a nominal increase in bandwidth, the benefit of this approach is a significant simplification of the RU, which no longer needs to perform channel decoding. Critically, the channel decoder requires highly complex arithmetic and can become the bottleneck in physical layer processing. Often, this results in the use of dedicated hardware accelerators that add significant complexity and cost to the nFAPI/Split 6 Radio Unit. In contrast, O-RAN’s Split 7-2x allows the decoding functionality to be centralized, where it is expected to benefit from increased utilization and the associated efficiencies, while simplifying the design of the O-RAN Radio Unit.

Source: cisco.com