Sunday, 28 August 2022

New Learning Labs for NSO Service Development

Getting started with network automation can be tough. It is worth the effort, though, when a product like Cisco Network Services Orchestrator (NSO) can turn your network services into a powerful orchestration engine. Over the past year, we have released a series of learning labs that cover the foundational skills needed to develop with NSO:

◉ Learn NSO the Easy Way

◉ YANG for NSO

◉ XML for NSO

Now we are proud to announce the final piece of the puzzle. We’re bringing it all together with the new service development labs for NSO. If this is your first time hearing about Cisco NSO and service development, let’s review some of the context.

Why change is the only constant

Network programmability has been enhancing our networking builds, changes, and deployments for several years now. For the most part, this was inspired by Software Defined Networks – i.e., networks based on scripting methods, using standard programming languages to control and monitor your network device infrastructure.

Software-defined networking principles can deliver abstractions of existing network infrastructure. This enables faster service development and deployment. Standards such as NETCONF and YANG are currently the driving force behind these abstractions and are enabling a significant improvement in network management. Scripting can eliminate a lot of laborious and repetitive tasks. However, it still has shortfalls, as it tends to focus on single devices, one vendor, or one platform.

Service orchestration simplifies network operations

Service orchestration simplifies network operations and management of network services. Instead of focusing on a particular device and system configuration that builds a network service, only the important inputs are collected. The rest of the steps and processes for delivery are automated. The actual details, such as vendor-specific configurations on network devices and the correct ordering of steps, are abstracted from the user of the service. This results in consistent configurations, prevention of errors and outages, and overall cost reduction of managing a network.

Remove the complexity

With NSO services, a service application maps the input parameters used to create, modify, and delete a service instance into the resulting native commands sent to devices in the network. The input parameters are given from a northbound system, such as a self-service portal making an API (Application Programming Interface) call to NSO, or from a network engineer using any of the NSO user interfaces, such as the NSO CLI.
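To make that mapping concrete, here is a minimal sketch of an NSO Python service callback that applies a configuration template. The service point name, the template name, and the single ‘device’ input leaf are illustrative placeholders, not names from the labs.

import ncs
from ncs.application import Service


class ServiceCallbacks(Service):
    # Called whenever a service instance is created or re-deployed.
    @Service.create
    def cb_create(self, tctx, root, service, proplist):
        self.log.info('Service create(service=', service._path, ')')

        # Collect the service inputs defined in the service YANG model
        # (a single 'device' leaf is assumed here for illustration).
        tvars = ncs.template.Variables()
        tvars.add('DEVICE', service.device)

        # Apply an XML configuration template that renders the
        # vendor-specific device commands.
        template = ncs.template.Template(service)
        template.apply('demo-service-template', tvars)


class Main(ncs.application.Application):
    def setup(self):
        # 'demo-servicepoint' must match the servicepoint declared
        # in the package's YANG model.
        self.register_service('demo-servicepoint', ServiceCallbacks)

    def teardown(self):
        self.log.info('Main FINISHED')

Because NSO’s FASTMAP algorithm records the changes made by the create callback, the same logic also handles modification and deletion of the service instance.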


NSO Service Development Module


In this new NSO learning lab you will learn how NSO services simplify network operations, how they work, and how to develop a template-based service. You will also use Python for scripting and service development, and to develop nano services. The module is broken into three sections that will guide you through use cases of NSO service development.

◉ Introduction to NSO Service Development – How NSO services simplify network operations, how they work, and how to develop a template-based service

◉ Python Scripts and NSO Service Development – How to use Python scripting in NSO service development

◉ NSO Nano Service Development – How to develop nano services in NSO


Try it yourself now


You can find the new NSO service development module in the NSO Basics for Network Operations Learning Track. All these new learning labs can be run and tested in the NSO DevNet reservation sandbox.

One of the things I embrace as an engineer is that change will happen. It might happen overnight, or over an extended period of time. But, it will happen. The only constant in the networking and software industry is ‘change.’ Let’s embrace this!

Source: cisco.com

Friday, 26 August 2022

Service Chaining VNFs with Cloud-Native Containers Using Cisco Kubernetes


To support edge use cases such as distributed IoT ecosystems and data-intensive applications, IT needs to deploy processing closer to where data is generated instead of backhauling data to a cloud or to the campus data center. A hybrid workforce and cloud-native applications are also pushing applications from centralized data centers to the edges of the enterprise. These new generations of application workloads are being distributed across containers and across multiple clouds.

Network Functions Virtualization (NFV) focuses on decoupling individual services—such as Routing, Security, and WAN Acceleration—from the underlying hardware platform. Enabling these Network Functions to run inside virtual machines increases deployment flexibility in the network. NFV enables automation and rapid service deployment of networking functions through service-chaining, providing significant reductions in network OpEx. The capabilities described in this post extend service-chaining of Virtual Network Functions in Cisco Enterprise Network Function Virtualization Infrastructure (NFVIS) to cloud-native applications and containers.

Cisco NFVIS provides software interfaces through the built-in Local Portal, Cisco vManage, REST and NETCONF APIs, and CLIs. You can learn more about NFVIS at the following resources:

◉ Virtual Network Functions lifecycle management

◉ Secure tunnel and sharing of IP with VNFs

◉ Route distribution through BGP – The NFVIS system can learn routes announced by a remote BGP neighbor and apply them locally, as well as announce or withdraw NFVIS local routes to the remote BGP neighbor.

◉ Security is embedded from installation through all software layers, such as credential management, integrity and tamper protection, session management, and secure device access.

◉ Clustering combines nodes into a single cluster definition.

◉ Third-party VNFs are supported through the Cisco VNF Certification Program.

Figure 1: Capabilities of Cisco NFVIS

Virtualizing network functions sets the stage for managing container-based applications using Kubernetes (k8s). Cisco NFVIS enables service chaining for cloud-native containerized applications for edge-compute deployments to provide secure communication from data center to cloud to edge.

Integrate Cloud-Native Applications with Cisco Kubernetes


Cisco’s goal is to make it easy for both NetOps and DevOps to work together, using the same dashboard to perform the entire process of registering, deploying, updating, and monitoring VMs and provisioning service chains with the easy-to-use Cisco Enterprise NFVIS Portal or Cisco vManage for SD-WAN. The NetOps persona can perform each step of VNF lifecycle management to deploy VNF-based service chains.

Cisco NFVIS now includes Cisco Kubernetes to provide centralized orchestration and management of containers. Cisco Kubernetes is available to download through Cisco’s NFVIS Software site. The current release supports the deployment of Cisco Kubernetes through NFVIS Local Portal and NFVIS APIs using existing NFVIS Lifecycle Management Workflows.
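To give a feel for what driving these workflows through the NFVIS APIs can look like, here is a rough Python sketch that posts a deployment request with the requests library. The host name, credentials, endpoint path, and payload are illustrative assumptions; check the NFVIS API reference for the exact resource names and schema.

import requests
from requests.auth import HTTPBasicAuth

# Illustrative values only -- replace with your NFVIS device details.
NFVIS_HOST = "https://nfvis.example.com"
DEPLOY_PATH = "/api/config/vm_lifecycle/tenants/tenant/admin/deployments"  # assumed path
HEADERS = {
    "Content-Type": "application/vnd.yang.data+xml",
    "Accept": "application/vnd.yang.data+xml",
}

# A deliberately minimal deployment payload; the real schema carries
# image, flavor, interface, and day-0 configuration details.
payload = """
<deployment>
  <name>demo-k8s-node</name>
</deployment>
"""

resp = requests.post(
    NFVIS_HOST + DEPLOY_PATH,
    data=payload,
    headers=HEADERS,
    auth=HTTPBasicAuth("admin", "password"),
    verify=False,  # lab only; use proper certificates in production
)
resp.raise_for_status()
print("Deployment request accepted:", resp.status_code)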

Cisco Kubernetes has a built-in Kubernetes Dashboard, enabling NetOps and DevOps Admins to use standard Kubernetes workflows to deploy and manage networking and application VMs. NetOps Admins acquire access tokens in NFVIS via the built-in GUI Local Portal or NFVIS CLI to access a Kubernetes Dashboard running inside Cisco Kubernetes. NetOps personas can execute their role in establishing VNFs and then hand off administration tokens to DevOps personas to access the Kubernetes Dashboard within Cisco Kubernetes. DevOps uses the dashboard to instantiate and manage their application containers. VNFs can be service chained with applications inside Cisco Kubernetes via an ingress controller that is deployed as part of a Kubernetes cluster to provide load balancing and ingress controls.
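For DevOps engineers who prefer the Kubernetes API over the dashboard, the same access token can also be used with the standard Kubernetes Python client. This is a minimal sketch; the cluster endpoint, namespace, and token handling are assumptions for illustration.

from kubernetes import client

# Token obtained by the NetOps admin from NFVIS (Local Portal or CLI)
# and handed off to the DevOps team -- shown here as a plain string.
TOKEN = "<access-token>"
CLUSTER = "https://k8s.example.com:6443"  # illustrative cluster endpoint

cfg = client.Configuration()
cfg.host = CLUSTER
cfg.verify_ssl = False  # lab only; pin the cluster CA in production
cfg.api_key = {"authorization": "Bearer " + TOKEN}

api = client.CoreV1Api(client.ApiClient(cfg))

# List application pods in a namespace, mirroring what the
# Kubernetes Dashboard shows for the service-chained workloads.
for pod in api.list_namespaced_pod(namespace="default").items:
    print(pod.metadata.name, pod.status.phase)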

Figure 2: Kubernetes Dashboard inside Cisco Kubernetes

Cisco Kubernetes supports two deployment topologies:

◉ Single node is enabled in the current NFVIS 4.9.1 release.
◉ In future releases, multi-node topologies will enable capabilities such as high availability.

Figure 3: Cisco NFVIS Application Hosting Workflow

Collaborative Tools to Simplify Cloud Native Container Applications


Ops team collaboration is made possible by Cisco Enterprise NFVIS and Cisco Kubernetes to power tomorrow’s applications across clouds and edge use cases. Deploying service-chained VNFs has enabled NetOps to simplify support for distributed offices, devices, and applications. Now Cisco Kubernetes in Cisco Enterprise NFVIS provides DevOps with a familiar set of k8s workflows to deploy containerized applications from on-premises to cloud to edge, taking full advantage of the service-chained VNFs managed by NetOps.

Source: cisco.com

Thursday, 25 August 2022

Rise of the Open NOS


Open networking innovations are largely driven by an industry need to protect network platform investments, maximize supply chain diversification, reduce operating costs, and build a homogeneous operational and management framework that can be consistently applied across platforms running standardized software. By virtue of its adoption by cloud-scale operators and its recent inclusion in the Linux Foundation, SONiC has gained tremendous momentum across different market segments. This blog outlines key factors relevant to SONiC adoption, its evolution in the open network operating system (NOS) ecosystem, and Cisco’s value proposition for SONiC platform validation and support.

Why use an open NOS?

Disaggregation enables decoupling hardware and software, giving customers the ability to fully exercise plug-and-play. An open-source NOS like SONiC can provide a consistent software interface across different hardware platforms, allowing for supply chain diversity and avoiding vendor lock-in, further leveraged by in-house custom automation frameworks that don’t have to be modified on a per-vendor basis. A DevOps-centric model can accelerate feature development and critical bug fixes, which in turn reduces dependency on vendor software release cycles. The open-source ecosystem can provide the necessary support and thought leadership to enable the snowflake use cases prevalent in many network deployments. The freedom to choose protects investment across both hardware and software, leading to significant cost savings that reduce total cost of ownership (TCO), operating expenditures (OpEx), and capital expenditures (CapEx).

What is SONiC?

SONiC (Software for Open Networking in the Cloud) was created by Microsoft in 2016 to power its Azure cloud infrastructure connectivity. SONiC is Debian-based and has a microservice-based, containerized architecture in which all major applications are hosted within independent Docker containers. To abstract the underlying hardware and ASIC, SONiC is built on SAI (Switch Abstraction Interface), a standardized, vendor-neutral hardware abstraction API. The NOS provides northbound interfaces (NBIs) to manage the device; these NBIs are based on gNMI, REST, SNMP, CLI, and OpenConfig YANG models, so SONiC integrates easily with automation frameworks.
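As a small example of that automation-friendliness, the sketch below uses the open-source pygnmi library to read interface state from a SONiC device over gNMI. The target address, port, credentials, and the exact OpenConfig paths supported depend on the SONiC build in use, so treat them as assumptions.

from pygnmi.client import gNMIclient

# Illustrative connection details for a SONiC switch exposing gNMI.
TARGET = ("sonic-switch.example.com", 8080)

with gNMIclient(target=TARGET, username="admin",
                password="admin", insecure=True) as gc:
    # Request operational state for all interfaces using the
    # OpenConfig YANG model exposed by the SONiC telemetry container.
    result = gc.get(path=["openconfig-interfaces:interfaces"],
                    datatype="state")
    print(result)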

Figure 1. A conceptual overview of SONiC
 

Why SONiC?


With so many open-source options out there, why consider SONiC? This NOS is gaining strong community leverage with growing industry traction through its adoption by prominent players spanning different market segments such as enterprise, hyperscale data center, and service providers. Open-source contributions have honed SONiC for focused use cases, enriching feature delivery while holistically enabling different architectures. Below are a few factors that emphasize open NOS benefits as applicable to SONiC:

Open Source:

◉ Vendor independence – SONiC can run on any compatible vendor hardware
◉ Feature velocity – Custom feature additions/modifications and self-driven bug fixes
◉ Community support – Upstream code contributions benefit all SONiC consumers
◉ Cost savings – Reduced TCO, OpEx, and CapEx

Disaggregation:

◉ Modular components – Multiple independent containerized components for increased resiliency and easier plug-and-play
◉ Decoupling software functions – Individual components can be customized based on use case

Uniformity:

◉ Abstraction – SAI abstraction layer to normalize underlying hardware intricacies
◉ Portability – Feature portability as the SAI normalizes hardware complexity

DevOps:

◉ Automation – Unified orchestration/monitoring for compute and common NOS across platforms
◉ Programmability – SONiC provides options that can leverage ASIC capabilities to the fullest

Figure 2. The value proposition of SONiC
 

Where does SONiC fit in various use cases?


At a high level, the existence of a software feature on a SONiC-enabled system depends on the following three components:

1. SONiC operating system support – Community driven
2. SAI API support – Community driven
3. SDK support – Vendor driven

For a software feature to be built into SONiC, it needs to be facilitated at all the above layers to be fully productized. The current SONiC ecosystem is comprehensively built for IP/VxLAN and BGP based architectures. These technology components can be cross-leveraged to create any architecture of choice – whether it is a data center fabric or a CDN ToR. SONiC deployments today are predominantly observed in data centers and enterprises but can be easily extended to other networks that leverage similar technology components. Commonly deployed network roles and use cases with SONiC are outlined below:

Data center fabric and DCI – IP/VxLAN and BGP based:

1. Leaf (single and dual homed)
2. Spine
3. Super spine

These data center deployments are spread across different customer segments, including Tier 1/Tier 2 hyperscalers, service providers, and large enterprises.

Due to its strong community support, many working groups are collaborating on how to further extend SONiC for core and backbone use cases, amongst others. For example, the SONiC MPLS working group is looking at enabling MPLS and SR/SRv6 support for SONiC that are more applicable to WAN use cases.

SONiC in the real world


With all the benefits of an open-source NOS, network operators have many questions such as “Is SONiC the right fit for my use case?”, “How does support work?”, “How do I ensure code quality?”, “How do I train my team to build the skill set to manage SONiC?”, and the list goes on. Product adoption is always driven by customer experience. Any product or solution, open-source or not, will be successful only if it provides a seamless user experience. While the many merits of an open-source NOS are attractive, operators still want the security and partnership of a vendor NOS when it comes to support and field deployments. So how do we achieve the best of both worlds?

Network operators assessing SONiC either have a very strong self-driven ecosystem equipped to handle an open NOS or they’re trying to understand the deployability of an open NOS. Operators with a self-sufficient ecosystem tend to gravitate towards customized SONiC to suit their specific network requirements. This might involve customizing community SONiC to create a private distribution (BYO – build your own) or they can rely on external vendors that create commercial distributions built from community SONiC. On the other hand, operators trying to gain more experience with open NOS for relatively simpler use cases might want to rely on community SONiC, where there’s a fine balance in retaining the open-source nature of SONiC along with its validation on vendor hardware.

Figure 3. SONiC consumption model

While assessing a network rollout, there are certain evaluation criteria that an operator needs to consider. These criteria apply whether the network solution in place is open or closed, but the responses to them may differ depending on the target ecosystem.

Table 1. SONiC deployment evaluation criteria

The Cisco 8000 Series advantage


The high-performance Cisco 8000 Series of routers and switches is based on the Cisco Silicon One ASIC, making these devices three times more power efficient and twice as dense as industry incumbents. A wide variety of fixed and modular form factors is available, and the power savings, run-to-completion efficiency, and SDK portability of the Cisco 8000 offer unique advantages that greatly facilitate SONiC onboarding. As a strategic investment, every new platform is compatible with SONiC, making it possible to leverage one silicon and one software end-to-end in different roles across use cases.

Figure 4. SONiC – The Cisco advantage

Support


The saying “With great power comes great responsibility” aptly applies to any open-source ecosystem. When deploying a production network, every operator is looking for holistic triage, faster resolution, predictable SLAs, and accountability. So how does this apply to SONiC?

Operationalizing SONiC on vendor hardware can be visualized as three layers. The bottom two layers consist of vendor-specific components: hardware systems at the very bottom, followed by the infrastructure software that consists of SAI APIs, the SDK, BSP/platform drivers, and other glue logic to seamlessly abstract hardware intricacies from the overlying operating system. SONiC itself is a constellation of open-source components and custom code, depending on whether customized SONiC is in play or not. With plug-and-play, accountability still sits with the respective stakeholders for their components, leading to a shared responsibility support model. For Cisco-validated SONiC, every shipping platform goes through intensive customer- and use-case-centric testing, with a major and minor release cadence aligned with community SONiC. Major releases support newer features, while minor releases provide bug fixes.

Figure 5. Shared responsibility support model

Source: cisco.com

Tuesday, 23 August 2022

Cisco Project, “An-API-For-An-API,” Wins Security Award

Enterprise software developers are increasingly using a variety of APIs in their day-to-day work. With this increase in use, however, it is becoming more difficult for organizations to have a full understanding of those APIs. Are the APIs secure? Do they adhere to the organization’s policies and standards? It would be incredibly helpful to have a suite of solutions that provides insights into these questions and more. Fortunately, Cisco has introduced our An-API-For-An-API project to address these concerns.

Introducing

An-API-For-An-API (AAFAA) is a project that controls the end-to-end cycle for enterprise API services and helps developers from code creation to deployment into a cloud, provisioning of API gateways, and live tracking of API use while the application is in production. Leveraging APIx Manager, an open-source project from Cisco, it combines CI/CD pipelines where API interfaces are tested against enterprise (security) policies, automatic deployment of applications behind an API gateway in a cloud system, and dynamic assessment of the API service through APIClarity.

Figure 1 provides an overview of how the various pieces of the AAFAA solution fit and work together. Let’s look at the pieces and the insights each one provides to the developer.

Figure 1. AAFAA Suite

APIx Manager

The central piece of the AAFAA solution suite is an open-source solution, APIx Manager, which provides API insights to developers in the day-to-day developer workflow. APIx Manager creates a browser-based view that can be shared with the DevSecOps team for a single source of truth on the quality and consistency of the APIs – bridging a critical communication gap. All these features help to manage the API life cycle to provide a better understanding of changes to the APIs we use every day. These can be viewed either through the browser or through an IDE Extension for VS Code. APIx Manager can also optionally integrate with and leverage the power of APIClarity, which brings Cloud Native visibility for APIs.

By creating dashboards and reports that integrate with the CI/CD pipeline and bring insights into APIs, developers and operations teams can have a single view of APIs. This allows them to have a common frame of reference when discussing issues such as security, API completeness, REST guideline compliance, and even inclusive language.

APIClarity

APIClarity adds another level of insight to the AAFAA solution suite by providing a view into API traffic in Kubernetes clusters. By using a Service Mesh framework, APIClarity adds the ability to compare the runtime behavior of your API to its OpenAPI specification. For applications that don’t yet have a published specification, developers can reconstruct the spec from observed traffic and then compare it against OpenAPI or company specifications.

Tracking the usage of Zombie or Shadow APIs in your applications is another critical security step. By implementing APIClarity with APIx Manager, Zombie and Shadow API usage can be seen within the IDE extension for VS Code. Seeing when APIs drift out of sync with their OpenAPI specifications or start to hit Zombie and Shadow APIs at runtime, especially in a Cloud Native application, is vital for improving the security posture of your application.
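To make the idea concrete, here is a minimal, library-free sketch of the comparison that APIClarity automates: diff the operations observed at runtime against the published OpenAPI document to flag Shadow (undocumented) usage and Zombie (deprecated but still called) usage. The data structures are invented for illustration and are not APIClarity’s actual interface.

# Observed at runtime (e.g., from service-mesh traffic capture) --
# invented sample data for illustration only.
observed_calls = {"GET /v2/orders", "GET /v1/orders", "DELETE /internal/debug"}

# From the published OpenAPI document: documented operations and the
# subset marked "deprecated: true".
documented = {"GET /v2/orders", "POST /v2/orders", "GET /v1/orders"}
deprecated = {"GET /v1/orders"}

# Shadow APIs: traffic to operations the spec does not document.
shadow = observed_calls - documented

# Zombie APIs: deprecated operations that still receive traffic.
zombie = observed_calls & deprecated

print("Shadow API usage:", sorted(shadow) or "none")
print("Zombie API usage:", sorted(zombie) or "none")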

Panoptica

Adding Panoptica to your AAFAA tool kit brings even more insight into your API usage and security posture. Panoptica provides visibility into possible threats, vulnerabilities, and policy enforcement points for your Cloud Native applications. Panoptica also serves as an important bridge between development and operations teams, bringing security into the CI/CD cycle earlier in the process.

Let’s think about what this means from a practical, day-to-day standpoint.

AAFAA in Practice


As enterprise application developers, we are tasked with building and deploying secure applications. Many companies today have defined rules for applications, especially Cloud Native ones. These rules include things like using quality components, e.g., third-party APIs, and not deploying applications with known vulnerabilities. These vulnerabilities can come from a wide variety of areas: the cloud security posture, application build images, application configuration, the application itself, or the way APIs are implemented.

There isn’t anything new about this. How we achieve the goal of building and deploying secure applications has changed dramatically in the past several years, with the possibility of vulnerabilities ever increasing. This is where AAFAA comes into service.

AAFAA utilizes three main components in providing insights from the very beginning all the way until the end of an application development lifecycle:

- APIx Manager
- CI/CD pipelines & automatic deployment of applications, and
- dynamic assessments of the API service through APIClarity.

APIx Manager

With its built-in integration into development tools, such as VS Code, APIx Manager is the start of the journey into AAFAA for the developer. It allows developers to gain API security and compliance insights when they are needed the most: at the beginning of the development cycle. Bringing these topics to the attention of developers earlier in the development lifecycle, shifting them left, makes them a priority in the application design and coding process. There are many advantages to implementing a Shift-Left Security design practice for the development team. It is also a tremendous benefit for Ops teams, as they can now see, through APIx Manager’s comparison functionality, when issues were addressed, whether they were a developer, Ops, or joint problem, and whether anything still needs attention. From the beginning of the software development cycle to the end, APIx Manager is a key component of AAFAA.

CI/CD Pipeline & Automatic Deployment

With the speed at which applications are being produced and updates being rolled out as part of the Agile development cycle, CI/CD pipelines are how developers are used to working. When we thought about our API solutions, we wanted to bring insights into the workflow that developers already use and are comfortable with. Introducing another app that developers must check wasn’t a realistic option. By incorporating APIx Manager, for example, into the CI/CD pipeline, we allow developers to gain insights into API security, completeness, standard compliance, and language inclusivity in their already established work stream.
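As a sketch of what such a pipeline gate might look like, the short script below loads an OpenAPI file and fails the build if any operation lacks a security requirement or an operationId. It is a simplified stand-in for the richer checks APIx Manager performs; the file name and the two policy rules are assumptions.

import sys
import yaml  # PyYAML

HTTP_METHODS = {"get", "put", "post", "delete", "patch", "options", "head"}

def check_spec(path):
    with open(path) as f:
        spec = yaml.safe_load(f)

    failures = []
    global_security = spec.get("security")

    for route, item in spec.get("paths", {}).items():
        for method, op in item.items():
            if method not in HTTP_METHODS:
                continue
            # Policy 1: every operation must be covered by a security scheme.
            if not op.get("security") and not global_security:
                failures.append(f"{method.upper()} {route}: no security requirement")
            # Policy 2: every operation must declare an operationId.
            if "operationId" not in op:
                failures.append(f"{method.upper()} {route}: missing operationId")
    return failures

if __name__ == "__main__":
    problems = check_spec("openapi.yaml")  # assumed file name
    for p in problems:
        print("POLICY VIOLATION:", p)
    sys.exit(1 if problems else 0)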

There continues to be tremendous growth in Cloud Native applications. Gartner estimates that by 2025, just a short three years away, more than 95% of new digital workloads will be deployed on cloud platforms. That’s an impressive number. However, as applications move to the cloud and away from platforms that are wholly controlled by internal teams, we lose a bit of insight and control over our applications. Don’t get me wrong, there are many great things about moving to the cloud, but as developers and operation professionals, we need to be vigilant about the applications and experiences we provide to our end users.

Dynamic Assessments

APIClarity is designed to provide observability into API traffic in Kubernetes clusters. As developers make the move to Cloud Native applications and rely more and more on APIs and clusters, the visibility of our application’s security posture becomes more obscured. Tools like APIClarity improve that visibility through a Service Mesh framework which captures and analyzes API traffic to identify potential risks.

When combined with APIx Manager, we bring the assessment level right to the developer’s workflow and into the CI/CD pipeline and the IDE, currently through a VS Code extension. By providing these insights in platforms developers are already using, we are helping to shift security to the left in the development process and provide visibility directly to developers. In addition to security matters, APIx Manager provides valuable insights into other areas such as API completeness and adherence to API standards, as well as flagging violations of company inclusive language policies.

As part of the An-API-For-An-API suite of tools, APIx Manager and APIClarity provide dynamic analysis and Cloud Native API environment visibility, respectively.

What Else?


Several teams here at Cisco have worked side-by-side to create AAFAA. It’s been great to see it all come together as a solution that will help developers and operations with visibility into the APIs they use. The AAFAA project has also been recognized with a prestigious CSO50 Award for “security projects or initiatives that demonstrate outstanding business value and thought leadership.” Please join me in congratulating the team for such a high honor for a job well done.

Source: cisco.com

Saturday, 20 August 2022

Optimize and secure transit fleet management with visibility to connected devices and secure remote access

Children have been singing “The wheels on the bus go round and round” since 1939. What’s new today is the tech that keeps those wheels rolling safely and on schedule.

Transit fleet operators work towards achieving on-time performance and vehicle reliability in order to attain safety, cost, and ridership goals. That requires deploying new technologies to improve operational efficiency and predictability. Who doesn’t like a bus service that’s on-time, reliable, safe to ride and has other perks such as free WiFi?

Some ways transit fleet operators are increasing operational efficiency include leveraging vehicle telematics, remotely connected devices in the vehicle, real-time vehicle location, and Internet of Things (IoT) sensors. Together these devices and information provide critical data to the operations center via the Cisco Catalyst IR1800 Rugged Series cellular and Wi-Fi router.

Some of the connected devices on buses today include:

➣ Computer-aided dispatch and automatic vehicle location (CAD/AVL). These transmit route and real-time location information so dispatchers can see if the bus is on time, ahead or behind schedule.

➣ Vehicle telematics to monitor engine temperature, oil pressure, emissions, fuel economy, etc. in support of predictive maintenance.

➣ Fare collection systems for plastic card or mobile payment.

➣ Passenger counting, which is useful for route capacity planning and complying with pandemic-related occupancy restrictions.

➣ IP security cameras that capture video triggered by events like doors opening and closing or the driver pressing a distress button in the event of a disturbance.

➣ Voice communications between the driver and dispatch center.

Operational efficiency takes a hit whenever one of these connected devices, IoT sensors or the vehicle telematics system stops working because buses are often simply taken out of service when issues like these are reported. If the CAD/AVL system goes offline, for example, the fleet operator can’t provide accurate ETAs to passengers on digital signs and online schedules. Loss of the fare collection system results in revenue loss for the transit agency as passengers ride for free. Loss of a video camera feed might prevent the counting of passengers or visibility of a potential safety threat as passengers enter and exit the bus. And an outage on a vehicle telematics system might result in a breakdown that could have been detected and prevented—inconveniencing passengers and requiring the operator to assign an on-call driver and replacement vehicle to take over the route. That’s costly and inconvenient. As fleet operators grow and the number of vehicles that need to be supported increases, these issues are further magnified.

Visibility and secure equipment access boost operational efficiency

Now, fleet operators can quickly detect, assess, and fix problems with connected equipment using the Cisco IoT Operations Dashboard. It’s a modular cloud service with a simple user interface to help operations teams view important data about the health and operational status of connected equipment and sensors, using the IR1800 cellular Wi-Fi router (see Figure 1).

Figure 1 – IoT Operations Dashboard

In the figure above, each dot represents a transit bus. A red dot indicates that one of the connected devices on the bus is malfunctioning. One click shows which system has the problem—such as an offline fare payment system, security camera or passenger counting system. With one click, the operator can learn about the status of connected devices on the bus as well as the router.


With another click the operator can learn more about the failing device and open a remote session to the device, using one of several industry standard protocols, to diagnose the problem or view the device details – providing a fast solution to many problems.


Secure equipment access protects sensitive data from intruders


IoT security is top of mind for critical infrastructure like transportation systems, and we’ve designed the IoT Operations Dashboard with Secure Equipment Access (SEA) to provide secure remote access to connected equipment on the bus. Using this SEA capability, transit operator employees or third-party service technicians log into the IoT Operations Dashboard with multi-factor authentication through their browser and use it for remote access to connected devices using common protocols such as SSH, RDP, VNC, HTTP, or serial terminal interfaces; they can even use a native desktop application. All communication is encrypted over the cellular and Wi-Fi router, preventing unauthorized access (see figure below). This is the essence and power of secure remote access. Lastly, the IoT Operations Dashboard enables operations teams to securely meet the scale demands of today’s fleet operators.

Figure 4 – Secure Equipment Access (SEA) schematic

To sum up, the payoff for being able to securely view, monitor, and troubleshoot all bus connected devices, and IoT sensors from one interface is increased operational efficiency and lower costs. It’s simpler than ever to make sure “the doors on the bus go open and shut, all around the town.” On time, and safely.

Source: cisco.com

Thursday, 18 August 2022

Networking Demystified: Why Wi-Fi 6E is Hot and Why You Should Care

Wi-Fi 6E is here and the worldwide Wi-Fi community is buzzing about it. But why is it a major change? What does it mean for people’s Wi-Fi experience and infrastructure vendors like Cisco? And why are Cisco engineers excited about the opportunities for innovation? Read on to learn about the details of 6E and how this technology transition can enhance your career too.

Wi-Fi 6E is More Than Just “A Bit More Spectrum”

At its heart, Wi-Fi 6E extends Wi-Fi to the 6 GHz band of the wireless spectrum. This may not sound very impressive if you know that Wi-Fi currently uses many other bands. Regulatory bodies, like the FCC in the US and ETSI in the European community, allocate to each radio technology the right to transmit in segments of the spectrum and specify the allowed transmission characteristics, such as maximum power or the shape and size of the signal. For example:

◉ In the 2.4 GHz band, Wi-Fi is allowed over a bit more than 80 MHz of spectrum, with typically up to 3 non-overlapping channels, each 20 MHz wide.

◉ In the 5 GHz band, Wi-Fi is allowed over up to 500 MHz of spectrum, which enables 25 20-MHz-wide channels. These channels can be configured to be larger, 40 or 80 MHz, at the cost of a lower count of possible non-overlapping channels—12 and 6 for 40 and 80 MHz respectively.

Larger channels are often preferred because they enable the concurrent transmission of more data—much like a larger water pipe carries more water by unit of time—resulting in higher capacity and a better experience for bandwidth-intensive applications like video and AR/VR.

However, even with these options, two neighboring Wi-Fi access points (APs) should not be on overlapping channels because their signals will collide unless one AP waits for the other to finish transmitting before commencing its own transmission. This issue reduces the performance of the overall system. In dense environments—like university lecture halls or enterprise conference rooms—there is always a difficult negotiation to be made between the need for more APs to accommodate more people and their devices by allocating them across many networking pipes, and the need to maximize the size of each AP channel which, in turn, limits the number of APs that can be in the range of each other.

In the US FCC domain, Wi-Fi 6E adds 1200 MHz of new spectrum, creating 59 20-MHz-wide channels, more than tripling the number of channels available. This is great news for any Wi-Fi-dense deployment.

Even in domains where the new allocation is narrower—for example, in Europe with the ETSI domain currently planning to allocate 500 MHz—the number of channels available to Wi-Fi doubles. This means that any place that had 40 MHz channels will soon be able to switch to 80 MHz channels, doubling the capacity and enabling a 1080P video to be upscaled to 4K while maintaining the same experience.
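The channel counts above follow directly from dividing the available spectrum by the channel width; the short sketch below reproduces the arithmetic (note that the defined 6 GHz channel plan yields 59 usable 20 MHz channels, slightly fewer than the raw division suggests).

def channel_count(spectrum_mhz, channel_width_mhz):
    """How many non-overlapping channels fit in a block of spectrum."""
    return spectrum_mhz // channel_width_mhz

# 5 GHz band: roughly 500 MHz of usable spectrum.
for width in (20, 40, 80):
    print(f"5 GHz, {width} MHz channels:", channel_count(500, width))
# -> 25, 12, and 6, matching the figures above.

# 6 GHz band (US/FCC): 1200 MHz of new spectrum.
print("6 GHz, 20 MHz channels:", channel_count(1200, 20))
# -> 60 by raw division; the defined channel plan provides 59.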

New Band, New Rules

The 6 GHz band was of course not waiting for someone to need it. The 6 GHz space is in fact composed of 4 sub-bands, defined as U-NII 5 to U-NII 8 in the US. All of them are already actively in use by fixed, outdoor devices such as ground-to-space satellite services and point-to-point microwave links. U-NII 6 and U-NII 8 are also used by mobile devices—think cable television field trucks sending video back to the main station. Wi-Fi will need to share these spectrum spaces and avoid disrupting the incumbents. For this reason, the rules for Wi-Fi devices depend on the sub-band where they operate.

Figure 1. 6 GHz allocation in the US (FCC domain)

In all 4 sub-bands, APs and clients can operate in a low-power mode when located inside buildings. Lower power means shorter transmission distances and thus smaller Wi-Fi cells, but also higher chances that one AP or Wi-Fi client will not hear another unit well enough, causing packet losses or retries.

In two of the 4 sub-bands, APs and clients can operate at higher power—called Standard Power, with a max power comparable to Wi-Fi in part of the 5 GHz band—only if the APs first make sure that they are not disrupting an incumbent transmitter. This verification is not possible in U-NII 6 and U-NII 8 because, for example, it is difficult to predict where TV trucks will be at any one time, so only indoor and low power are allowed in those cases. But in the U-NII 5 and U-NII 7 bands, for any outdoor operation and any operation at standard power, the AP must verify at boot time, and confirm every 24 hours, that it is not broadcasting on a frequency used by a fixed incumbent. The AP runs this verification by providing its geographical location to a central server—the Automated Frequency Coordinator, or AFC—that returns the 6 GHz frequencies allowed in the immediate area. The maximum power allowed for Low Power Indoor (LPI) APs is half the max power of Automated Frequency Coordination (AFC) APs. And since client devices must operate at half the power of the APs, this power puzzle creates interesting Wi-Fi cell design challenges.

Power Spectral Density You Say?


The 6 GHz rules bring another interesting twist. In 5 GHz and 2.4 GHz, the transmission rules are driven by the notion of maximum Effective Isotropic Radiated Power (EIRP), which is the maximum quantity of energy emitted by a client or an AP. As the max EIRP is fixed, a system that transmits over a 20-MHz channel transmits more energy per unit of frequency (per MHz) than a system that radiates the same total amount of energy, but over a wider channel, for example, 80 MHz.

The idea is the same as a water hose. If your hose delivers 1 liter per second, it will spray less water per unit of surface if you spread the jet as a flat 3-meter-wide mist than if you focus the water, power washer style, over just a half square centimeter target. A direct, and sometimes hidden consequence of this rule is that if you set your AP channel to a width of 80 MHz (instead of 20 MHz), your cell size is mechanically reduced because the amount of signal available over each MHz of the channel at a given distance is now lower. A common way to express this reduction is to say that the signal-to-noise ratio (SNR), over each MHz of frequency, reduces as the channel width increases.

The Wi-Fi community expressed this concern when the 6 GHz allocation was being discussed by worldwide regulatory bodies. The great news is that the community was heard, and the rules are different for 6 GHz band. In this new band, the max power is no longer a ‘total max’ EIRP but is defined as max Power Spectral Density (PSD) or the max power per MHz—in the hose analogy, that’s the water delivered per unit of surface. This limit is per MHz and does not change as the channel width changes. In practice, this means that a 6 GHz system can send the same amount of energy per MHz in an 80 MHz channel as it would in a 20 MHz channel, and therefore that the cell size stays the same, regardless of the channel width. It just sends more total energy as the channel size increases.
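The difference between the two regimes boils down to a simple dB calculation: with a fixed total EIRP, the per-MHz power drops by 10·log10 of the channel-width ratio as the channel widens (about 3 dB per doubling), while with a fixed PSD it is the total power that grows with the width. The sketch below illustrates this with example power limits that are illustrative rather than regulatory values.

import math

def eirp_limited_psd(max_eirp_dbm, width_mhz):
    # Fixed total power: per-MHz power falls as the channel widens.
    return max_eirp_dbm - 10 * math.log10(width_mhz)

def psd_limited_eirp(max_psd_dbm_per_mhz, width_mhz):
    # Fixed per-MHz power: total power grows as the channel widens.
    return max_psd_dbm_per_mhz + 10 * math.log10(width_mhz)

for width in (20, 40, 80, 160):
    print(f"{width:>3} MHz | EIRP-limited (24 dBm total): "
          f"{eirp_limited_psd(24, width):5.1f} dBm/MHz | "
          f"PSD-limited (5 dBm/MHz): "
          f"{psd_limited_eirp(5, width):5.1f} dBm total")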

Figure 2. Power rule comparison between 5 GHz (left) and 6 GHz (right)

A New Golden Age for Wireless Engineering


Another exciting property of the new 6 GHz band is that…well, it is new. This may sound like a repeat, but what it really means is that the industry does not have to design compatibility rules for older devices.

In the 5 GHz band, for example, you may want the benefits of all the goodness of Wi-Fi 6, including efficient scheduling, extremely high throughput, and multi-user simultaneous transmissions, but your network may see older Wi-Fi 5 devices around or even older Wi-Fi 2 devices from the early 2000s. These were probably already obsolete 15 years ago, but the mere fact that they may be there forces all later versions of Wi-Fi, including Wi-Fi 6, to send frames that can be partially understood by older devices so they will detect transmissions and refrain from transmitting at the same time.

This problem does not exist in the new band, so it can be optimized for maximum performance. The clients still have to discover it, which again brings many interesting challenges. For example, scanning 25 channels in 5 GHz, then 59 more in 6 GHz, does not sound like a great idea for fast roaming between APs. So, the discovery mechanism has to have built-in intelligence. Similarly, you may want to keep 6 GHz for efficient traffic, such as your Augmented Reality applications, and send the less urgent traffic, like your background photo sync to the cloud, to the other bands. But this requires a clever exchange mechanism between the client and the AP on resource availability, traffic type, etc.

As you can see, there are a lot of opportunities to innovate and design wireless clients that can benefit from new 6E opportunities.

Join Cisco to Design the Future of Wi-Fi


At Cisco, we have been at the forefront of Wi-Fi innovation for more than two decades. Building the future of Wi-Fi starts by designing great access points, and smart engines to optimize the experience that wireless clients can gain from optimized networks. Engineers working at Cisco take pride in designing the smartest AI-driven Radio Resource Management engine on the market to dynamically assign channels and power levels to neighboring APs. This creates smooth continuous Wi-Fi coverage from small branch networks to large venues like Mobile World Congress, where 1500 APs and 75K+ simultaneous radio communication professionals expect nothing less than a perfect Wi-Fi experience. Other Cisco innovations include OpenRoaming to automate onboarding, and Fastlane+ to optimize the experience of your Apple iPhone and iPad in a Cisco Wi-Fi 6 network. The full list of Cisco wireless innovations would take a book to enumerate. And with a brand-new 6E band available on our new access points, the opportunities to innovate are bounded only by your imagination and talent.

Source: cisco.com

Tuesday, 16 August 2022

Are Cisco 300-430 ENWLSI Practice Tests Useful?

Like all IT certification exams, Cisco 300-430 ENWLSI has special traits and particularities that anyone aspiring to take this exam needs to take notice of, no matter where in the world they are.

These peculiarities are not just essential for taking the final exam but equally important for achieving a high score on the first attempt and ultimately attaining the associated certification.