Sunday 11 April 2021

Cisco IOS XE – Past, Present, and Future


From OS to Industry-leading Software Stack 

Cisco Internetwork Operating System (IOS) was developed in the 1980s for the company’s first routers, which had only 256 KB of memory and little CPU processing power. But what a difference a few decades make. Today IOS XE runs our entire enterprise portfolio: 80 different Cisco platforms for access, distribution, core, wireless, and WAN, with a myriad of combinations of hardware, software, and forwarding options, and both physical and virtual form factors.

Many people still call Cisco IOS XE an operating system. But it’s more appropriately described as an enterprise networking software stack. At 190 million lines of code from Cisco—and more than 300 million lines of code when you include vendor software development kits (SDKs) and open-source libraries—IOS XE is comparable to stacks from Microsoft or Apple.  

During the transition of IOS XE to encompass the entire enterprise networking portfolio, our global development team of more than 3,000 software engineers introduced an average of four new products in every four-month release cycle. IOS XE now supports more than 20 different ASIC families developed by Cisco and other vendors, and we develop over 700 new features per year. Getting this done systematically is a huge undertaking. It requires the right development environment and software engineering practices that scale the team to the amount of code our product portfolio demands.

Here is a look back at how the IOS XE software stack was conceived and the continuous evolution of its capabilities, based on the work of the Polaris team. The team is tasked with providing the right development environment for the current portfolio and the evolving needs of the emerging new class of products. 

IOS Origins 

The early releases of IOS consisted of a single embedded development environment that included all the functionality required to build a product. Our success comes from managing the growth of functionality and systematically scaling configuration models, performance, and hardware support, albeit in an embedded-systems-centric manner.

In 2004, Cisco developers built IOS XE for the Cisco ASR 1000 Series Aggregation Services Routers. IOS XE combined a Linux kernel with independent processes to separate the control plane from the data plane. With the new code and development model we introduced, we began the journey toward a database-centric programming model. Since the first shipment of the ASR 1000, every state update to the data path has gone into and out of the in-memory database.
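
The database-centric model can be sketched in a few lines of Python (illustrative only; names like `InMemoryTable` are hypothetical, not IOS XE internals): control-plane code commits state to an in-memory table, and the data path consumes it only through that table.

```python
class InMemoryTable:
    """Toy stand-in (hypothetical API) for an in-memory database table."""
    def __init__(self):
        self._rows = {}
        self._subscribers = []

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def upsert(self, key, value):
        self._rows[key] = value
        for notify in self._subscribers:   # every update flows through the DB
            notify(key, value)

routes = InMemoryTable()                   # control-plane state
fib = {}                                   # data-path forwarding table

# the data path learns of changes only via the database, never directly
routes.subscribe(lambda prefix, nexthop: fib.__setitem__(prefix, nexthop))

routes.upsert("10.0.0.0/8", "GigabitEthernet1")
print(fib)    # {'10.0.0.0/8': 'GigabitEthernet1'}
```

Because the table is the single source of truth, a restarted data-path process can rebuild its state by re-reading the database rather than replaying messages.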

In 2014, the IOS XE development team was assembled to drive the software strategy for Enterprise Networking. The entire switching portfolio moved to IOS XE with the industry-leading Catalyst 9000 family of products. The pivot to evolving IOS XE into a distributed scale-out infrastructure relied on our deep experience with in-memory databases, database replication, and a full, remotely accessible graph database. The elastic Catalyst 9800 wireless controller represents the successful introduction of these new capabilities.

When the IOS XE development team was formed, there was a common misconception that small, low-end systems with tiny footprints couldn’t share the same software with very large-scale systems. We have successfully disproved that. IOS XE now runs on everything from tiny IoT routers to large modular systems. It is proving to be a significant strength as we move forward since the ability to fit on small systems means improved efficiency that translates to better outcomes on larger systems. What started as a challenge is now a transformational strength.

Why is a Stack Important? 

An OS is only a tiny part of the full functionality of a complete software development environment. The IOS XE enterprise networking software stack features deep integration of all layers with conceptual and semantic integrity.

IOS XE software layers include the application, software development language, middleware, managed runtime, graph database, transactional store, system libraries, drivers, and the Linux kernel. Our managed runtime enables common functionality to be rapidly and seamlessly deployed across a large amount of existing code. The goal of the development environment is to facilitate a cloud-native, single point of control and monitoring that operates at enterprise scale, with fine-grained multi-tenancy everywhere.

The great value in having the same software everywhere is that all developers follow the same software development model. This represents the internal SDK for Cisco Enterprise Networking software engineers. All of our standards-based APIs are a single, often automated, translation away. The ability to get total system visibility and control is vital in the days ahead to reach a networking system that does not look like a set of independent point solutions.

What is IOS XE? 

There are many types of systems that can be built by different competent teams attempting to solve the same problem. The guiding themes behind IOS XE include: 

◉ Asynchronous end-to-end, because synchronous calls can be emulated if necessary, but the reverse is not true. On low-footprint systems this is key to optimizing performance.

◉ Cooperative scheduled run-to-completion is how all IOS XE code functions. It draws on our experience developing IOS to provide the most CPU-efficient choice and the best model for strongly IO-bound workloads.

◉ It’s a deterministic system, which makes the root cause of issues easier to fix and makes stateful process restart support easier to design.

◉ A lossless system, IOS XE depends on end-to-end backpressure rather than any loss of information in processing layers. Reasoning about how a system functions in the presence of loss is impossible.  

◉ Its transactional nature produces a deep level of correctness across process restarts by reverting deterministically to a known stability point from before the current in-flight transaction started. This helps prevent fate sharing and crashes in other cooperating processes that work off the same database.

◉ Formal domain specific languages provide specifications that permit build-time and runtime checking.  

◉ Closed-loop behavior provides resiliency by imposing feedback control on developed systems instead of depending on “fire and forget” open-loop behavior.
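
Two of the themes above, asynchronous end-to-end and lossless backpressure, can be sketched with a small asyncio example (hypothetical Python, not IOS XE code): a synchronous-looking call is emulated on top of async messaging, and a bounded queue makes senders wait instead of dropping work.

```python
import asyncio

async def server(requests: asyncio.Queue):
    # cooperative, run-to-completion handler: pull one message,
    # process it fully, then yield back to the event loop
    while True:
        payload, reply = await requests.get()
        reply.set_result(payload * 2)

async def sync_call(requests: asyncio.Queue, payload):
    # emulate a synchronous call on top of async messaging:
    # send the request with a future, then await the future
    reply = asyncio.get_running_loop().create_future()
    await requests.put((payload, reply))  # bounded put = backpressure, no drops
    return await reply

async def main():
    requests = asyncio.Queue(maxsize=8)   # lossless: senders wait, nothing is lost
    asyncio.create_task(server(requests))
    return await sync_call(requests, 21)

print(asyncio.run(main()))                # 42
```

The reverse direction does not work: a system built on blocking synchronous calls cannot recover asynchronous behavior, which is why the async-first choice matters.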

During the last seven years of development, the IOS XE team via the Polaris project has focused on the following areas. 

Developing Our Own Managed Runtime Environment

The team has developed a managed runtime that essentially allows processes to run heap-less, with state stored in the in-memory database. The Crimson toolchain provides language-integrated support for the internal data modeling domain-specific language (DSL), known as The Definition Language (TDL). The use of type inferencing facilitates a succinct human interface to generated code for messaging and database access. The toolchain’s integration with language extensions also enables the rapid addition of new capabilities to migrate existing code to meet new expectations. Deep support for a systematic move to increasing multi-tenancy requirements is part of this development environment.

Graph Query/Update Language

The Graph Execution Engine (GREEN) gives remote query and update capabilities to the graph database. It’s a formal way to interact natively using system software development kits (SDKs). All state transfer internally is fully automatic. Changes to state are efficiently tracked to allow incremental updates on persistent long-lived queries. 
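
The idea of persistent queries with incremental updates can be illustrated with a toy sketch (hypothetical Python; GREEN’s real interfaces are not public): a watcher registers a predicate once and thereafter receives only deltas, never full re-evaluations.

```python
class GraphStore:
    """Toy node store with long-lived queries that receive incremental deltas."""
    def __init__(self):
        self.nodes = {}
        self.watchers = []                         # (predicate, callback) pairs

    def watch(self, predicate, callback):
        self.watchers.append((predicate, callback))
        for node_id, attrs in self.nodes.items():  # initial full result
            if predicate(attrs):
                callback("add", node_id)

    def update(self, node_id, attrs):
        old = self.nodes.get(node_id)
        self.nodes[node_id] = attrs
        for predicate, callback in self.watchers:
            was = old is not None and predicate(old)
            now = predicate(attrs)
            if now and not was:
                callback("add", node_id)           # push only the delta
            elif was and not now:
                callback("remove", node_id)

g = GraphStore()
events = []
g.watch(lambda a: a.get("state") == "up",
        lambda op, node_id: events.append((op, node_id)))
g.update("Gi1", {"state": "up"})
g.update("Gi1", {"state": "down"})
print(events)    # [('add', 'Gi1'), ('remove', 'Gi1')]
```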

Integrated Telemetry

The Polaris team has deeply integrated telemetry into the toolchain and managed runtime to avoid error-prone ad hoc telemetry. The separation of concerns between developers writing code and the automation of telemetry is vital to operate at Cisco scale. Standards-based telemetry is a one-level translation. Native telemetry runs at packet scale. 

Graph State Distribution Framework

The Graph State Distribution Framework allows location independence to processing by separating the data from the processing software. It’s a big step towards moving IOS XE from a message-passing system to a distributed database system. 

Compiler-integrated Patching

Compiler-integrated patching provides safe hot patching via the managed runtime, with script-generated Sev1/PSIRT patches. This level of automation makes hot patching available to every developer, and applying patches at runtime does not require a restart.
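
The general mechanism behind restart-free patching can be sketched with a runtime dispatch table (a simplification in Python; Cisco’s compiler-integrated implementation is far more involved): when every call resolves through a level of indirection, a patch can swap a function while the process keeps running.

```python
dispatch = {}

def register(name):
    # decorator that installs a function into the dispatch table
    def wrap(fn):
        dispatch[name] = fn
        return fn
    return wrap

def call(name, *args):
    return dispatch[name](*args)          # every call resolves at runtime

@register("parse_header")
def parse_header_v1(raw):
    return raw.split(",")                 # buggy: wrong delimiter

assert call("parse_header", "a;b") == ["a;b"]

# hot patch: replace the entry while the "process" keeps running
dispatch["parse_header"] = lambda raw: raw.split(";")

assert call("parse_header", "a;b") == ["a", "b"]
```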

With a software stack like the newest generation of IOS XE, developers can add functionality to separate application logic from infrastructure components. The distributed database provides location independence to our software. The completeness and fidelity of the entire software stack allows for a deeply integrated and efficient developer experience.

Source: cisco.com

Saturday 10 April 2021

Embrace the Future with Open Optical Networking


Until recently, optical systems have been closed and proprietary, coming as a package that includes optics, transponders, a line system, and a management system. In the traditional optical architecture, these components were provided by a single vendor, and the interfaces between them were closed. While the concept of disaggregated or open optical components is not new, these components can now be optimized and sold separately, enabling providers to assemble a system themselves in the manner they choose.

There are several reasons why an operator would move in this direction. In most cases, it’s to enable a multi-vendor solution where you can mix and match devices from different vendors with the expectation that you have access to the latest and greatest innovation that the broad industry provides. This certainly aligns with the disaggregation trends we’ve seen in networks with software and white boxes and provides the benefits of access to the latest innovative technology for best-of-breed platforms.

By contrast, an open Dense Wavelength Division Multiplexing (DWDM) architecture is essentially a disaggregated system, ranging from functional disaggregation, through hardware and software disaggregation, to full system disaggregation. In this open model, all the components can potentially be managed (e.g., configured, monitored, and even automated) through a common software layer using standard APIs and data models.

When looking at open architectures, an open line system from a network design point of view must support an “alien wavelength.” An alien wavelength is one that is transported transparently over a third-party line system or infrastructure. Alien waves make it possible to add capacity to meet increased bandwidth needs with no disruption to the current network in place. And the most important benefit of alien waves is the freedom they give network operators to source their transponders from any vendor based on their business or technical criteria.

This is particularly important when you consider that transponders represent the majority of the cost of a DWDM system and are a key component in determining the overall efficiency of the network. This provides the operator with increased flexibility to deploy the next wavelength from any vendor that’s best-in-class.

Whether a provider continues with a fully closed system or a disaggregated approach depends on their network today and where they have a vision to go in the future.

When is a closed optical system beneficial?

◉ When network operators are looking for a turnkey solution. It’s pre-integrated, and the responsibility for fixing problems is very clear.

◉ When operators are willing to trade first cost (Optical Line System) for transponder cost, resulting in a pay-as-you-grow solution, but with a higher total cost of ownership.

When is an open (multi-vendor) optical system beneficial?

◉ When operators want to choose from all the industry has to offer. Best-in-breed is based on the operator’s definition – best OSNR performance, highest spectral efficiency, lowest power, least amount of space, lowest cost per bit, pluggability for router/switch integration, or standardization.

◉ By opening the architecture, competition and innovation are stimulated. This provides the operator with more choice.

◉ When the ability to leverage standardized APIs is available to create a consistent operational model across vendors.

Use cases for open networking

◉ The subsea market pushed for “open cables,” which enabled any vendor’s transponder to operate over a third-party line system already in place. This helped many operators increase their capacity on the subsea cable by moving to the latest transponder in the market.

◉ The long-haul market has already implemented open line systems, enabling multi-vendor leverage over a common infrastructure. In some cases, this has resulted in more than three vendors being deployed.

◉ Metro use cases, like Open ROADM, take standardization a step further with the ability to have multiple line system vendors working with coherent interface vendors on different ends of the same fiber and wavelength.

What about optics?

Datacenter interconnect, metro, and regional markets will be transformed by 400G OpenZR+ Digital Coherent Optics (DCO), because they have been standardized to plug into any optical, router, or switch platform. This plug-and-play option has never existed before and opens the optical networking market for DCO optics to be deployed ubiquitously based on the standards. Several options are listed in the diagram below, including the 400G QSFP-DD, which supports either the Optical Internetworking Forum (OIF) 400G ZR or OpenZR+ (which supports Open Reconfigurable Optical Add-Drop Multiplexer (ROADM) on the line side), and the Open ROADM option, which is a CFP2 format.


Standardization


There are several industry initiatives that will accelerate the adoption of open networking for optical systems. Open ROADM is a Multi-Source Agreement (MSA), which is an agreement between vendors to follow a common set of specifications. It’s supported by a group of 28 companies, including system and component vendors, as well as major operators across the globe.

There’s also the Telecom Infra Project (TIP), another MSA that focuses on specifications for point-to-point open line systems. TIP also started an initiative to define a common algorithm that can be used for optical network design and path computation, something impossible to do with closed and proprietary systems. A group within TIP is also working on GNPy, which stands for Gaussian Noise modeling in Python and provides algorithms for route feasibility analysis in optical networks. It performs Optical Signal-to-Noise Ratio (OSNR) calculations to validate whether an optical channel is feasible through a given path in the network. This is a very promising initiative, and large carriers worldwide are using it to model real-life networks.
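
The core OSNR combination behind such a feasibility check can be sketched as follows (greatly simplified: real models like GNPy also account for nonlinear impairments, amplifier gain profiles, and more, and the span values here are hypothetical). Noise contributions add linearly, so per-span OSNR values are combined by inverting out of dB, summing, and converting back.

```python
import math

def total_osnr_db(per_span_osnr_db):
    # convert each span's OSNR out of dB, sum the noise contributions,
    # and convert back (a standard link-budget identity)
    noise = sum(10 ** (-osnr / 10) for osnr in per_span_osnr_db)
    return -10 * math.log10(noise)

def feasible(per_span_osnr_db, required_osnr_db):
    # a channel is viable on this path if the end-to-end OSNR clears
    # the receiver's requirement
    return total_osnr_db(per_span_osnr_db) >= required_osnr_db

spans = [28.0, 27.5, 29.0]                    # hypothetical per-span OSNR, dB
print(round(total_osnr_db(spans), 1))         # 23.4 (dB, end to end)
print(feasible(spans, required_osnr_db=20))   # True
```

Note how the end-to-end OSNR is always worse than the worst single span, which is why adding spans eventually makes a path infeasible.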

Next is OpenConfig, an industry working group that focuses on producing common data models based on the Yet Another Next Generation (YANG) language for device management and configuration. It’s widely used by webscale companies and covers multiple technologies: routing, switching, and optical.
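
As a rough illustration of what an OpenConfig-style data model looks like to a consumer, here is a simplified fragment loosely shaped like the openconfig-terminal-device model (hypothetical: real YANG paths and leaf names vary by model version, so treat these keys as assumptions):

```python
# Illustrative only: a simplified optical-channel config fragment
optical_channel = {
    "index": 0,
    "config": {
        "frequency": 193_100_000,        # MHz: 193.1 THz, an ITU-T grid center
        "target-output-power": -1.0,     # dBm
        "operational-mode": 1,           # vendor-published mode ID
    },
}

def validate(channel):
    # minimal sanity check a controller might run before pushing config:
    # the frequency should fall roughly within the C-band
    freq_mhz = channel["config"]["frequency"]
    return 191_000_000 <= freq_mhz <= 196_000_000

print(validate(optical_channel))         # True
```

The value of a common model is that this same structure can be pushed to devices from different vendors, which is what enables the consistent multi-vendor operational layer discussed above.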

Other relevant standards bodies include the ITU Telecommunication Standardization Sector (ITU-T), which defines the DWDM grid, interface specifications, Forward Error Correction (FEC), and digital wrappers, and the OIF, which defines specifications for DWDM interfaces.

Finally, the most important proof point for any industry initiative is network operator adoption. We already see strong interest and deployment of open optical systems, broad support for the industry initiatives mentioned above, and rapid adoption of the industry specifications that they are producing.

Source: cisco.com

Friday 9 April 2021

See Why Developers and Security Can Now See Eye-to-Eye


Meet Alice. She is a developer at a fast-growing company that creates a face filter app. What is Alice’s worst fear? Seeing her competitor launch the newest filter first. Her company’s security team lead, Bob, would probably have hoped that her worst fear would be writing vulnerable code. More often than not, however, this is not top of mind for developers like Alice. So, as you’d expect, Alice and Bob sometimes have difficulty communicating with each other, due to different goals and drivers. Think speed vs. risk aversion.

In this blog we will walk through some awesome new features within AppDynamics with Cisco Secure Application. We will simulate a Remote Code Execution (RCE) attack and show how Bob can respond to help Alice launch her application quickly and with security top of mind.

What is a Remote Code Execution attack? What is the impact?


An RCE vulnerability lets an attacker remotely execute arbitrary commands or code on a target machine or in a target process. Such a vulnerability is an obvious security flaw in applications: somewhat bad news for Alice, but much worse news for Bob (who is responsible for the security around this app). A program designed to exploit such a vulnerability is called an arbitrary RCE exploit. Developers use many libraries when building their apps, and many of those libraries contain vulnerabilities.

Now, what can be the impact of such an attack? Imagine that a malicious actor, Eve, can execute arbitrary commands inside your application without being physically present. Imagine Eve being able to read from and write to your database, or take your application offline. You might have thought you were safe because you migrated your apps to the public cloud: how could anyone get in there? Well, with application-level attacks like RCE, this is unfortunately still possible. So how can we have the comfort of the public cloud, but also visibility and control like never before?
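
For illustration, here is the general shape of this flaw class in Python (a hypothetical example, not tied to any specific CVE): attacker-controlled input reaching a shell, and a safer alternative.

```python
import subprocess

# Vulnerable shape: with shell=True, shell metacharacters in `filename`
# are interpreted, so attacker-supplied data becomes code.
def vulnerable_preview(filename):
    return subprocess.run("head " + filename, shell=True,
                          capture_output=True, text=True).stdout

# Eve's input: ';' ends the `head` command and starts a second one.
# vulnerable_preview("x.txt; cat /etc/passwd")   # would dump /etc/passwd

# Safer shape: no shell, and the filename is passed as a plain argument,
# so the whole string is treated as data.
def safer_preview(filename):
    return subprocess.run(["head", "--", filename],
                          capture_output=True, text=True).stdout
```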

AppDynamics with Cisco Secure Application


Cisco Secure Application protects applications at runtime, detects and blocks attacks in real time, and simplifies the lifecycle of security incidents by providing application and business context. This creates a shared “language” across app and security teams and makes it easier for them to communicate. It is natively built into the AppDynamics Java agent (more languages to follow) and embeds security into the application runtime without adding performance overhead. Let’s look at our remote code execution attack through the eyes of Bob, our AppSec expert, who is testing out Cisco Secure Application!

Below you can see the Vulnerabilities tab in the dashboard. Important here is that you can see each CVE with its associated severity, as well as its status: has it been fixed or not. This is especially valuable information when triaging and prioritizing work. We can focus first on what still needs to be fixed, and then check the others for potential compromises.


Secure App goes even further than this: notice the two exclamation mark symbols, the first indicating that an exploit is possible for this CVE, and the second that someone actually tried to exploit this vulnerability! Has Eve been able to do bad things in our application? We will need to act even faster on this vulnerability!


When we click on this line, we are shown more detailed information about this vulnerability: as we can see, CVE-2017-5638 is a flaw in Apache Struts with incorrect exception handling, which allows remote attackers to execute arbitrary commands via HTTP headers. Recognize this type of attack? Yes, it is indeed the worst nightmare of our AppSec manager Bob, and it has actually been carried out as well!


To find out more about this compromise, we can click on the attack to drill down further. What we see now is truly impressive compared to classical security tools: not only can Bob see the affected app, the affected service, and the vulnerable library, he can also see the actual misused Java method and the stack trace!


When Bob checks out the stack trace, he can scroll through the node’s entire stack trace and associated errors. This can be essential when investigating what happened and whether certain database calls were made.


When Bob checks out the details of this page, he can see the command that the attacker tried to execute, the method name, and the working directory. As you can see, Eve tried to show the contents of the /etc/passwd file! Was she able to see into this precious file?


In Cisco Secure Application, you can set policy in either Detect or Block mode. Luckily, we can see that this action was blocked by the policy in use (the lowest policy in the list). That was good thinking by Bob! Using all of the gathered information, Bob can now show exactly what needs to be changed in Alice’s code. Secure App provides a common tool that both parties understand. Alice and Bob worked happily ever after.


Source: cisco.com

Thursday 8 April 2021

Designing Fault Tolerant Data Centers of the Future


System crashes. Outages. Downtime.

These words send chills down the spines of network administrators. When business apps go down, business leaders are not happy. And the cost can be significant.

Recent IDC survey data shows that enterprises experience two cloud service outages per year, and IDC research conservatively puts the average cost of downtime for enterprises at $250,000 per hour. That means just four hours of downtime can cost an enterprise $1 million.
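
The figures above reduce to simple arithmetic:

```python
# Quick sanity check of the downtime figures quoted above
cost_per_hour = 250_000              # IDC's conservative average, in USD
outage_hours = 4
total = cost_per_hour * outage_hours
print(f"${total:,}")                 # $1,000,000
```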


To respond to failures as quickly as possible, network administrators need a highly scalable, fault tolerant architecture that is simple to manage and troubleshoot.

What’s Required for the Always On Enterprise

Let’s examine some of the key technical capabilities required to meet the “always-on” demand that today’s businesses face. There is a need for:

1. Granular change control mechanisms that facilitate flexible and localized changes, driven by availability models, so that the blast radius of a change is contained by design and intent.

2. Always-on availability to help enable seamless handling and disaster recovery, with failover of infrastructure from one data center to another, or from one data center to a cloud environment.

3. Operational simplicity at scale for connectivity, segmentation, and visibility from a single pane of glass, delivered in a cloud operational model, across distributed environments—including data center, edge, and cloud.

4. Compliance and governance that correlate visibility and control across different domains and provide consistent end-to-end assurance.

5. Policy– driven automation that improves network administrators’ agility and provides control to manage a large-scale environment through a programmable infrastructure.

Typical Network Architecture Design: The Horizontal Approach

With businesses required to be “always on” and closer to users for performance reasons, applications need to be deployed in a very distributed fashion. To accomplish this, network architects create distributed mechanisms across multiple data centers, on-premises and in the cloud, and across geographic regions, which helps mitigate the impact of potential failures. This horizontal approach works well by delivering physical layer redundancy built on autonomous systems that rely on a do-it-yourself approach for different layers of the architecture.

However, this design inherently imposes an over-provisioning of the infrastructure, along with an inability to express intent and a lack of coordinated visibility through a single pane of glass.

Some on-premises providers also have marginal fault isolation capabilities and limited-to-no capabilities or solutions for effectively managing multiple data centers.

For example, consider what happens when one data center, or part of one, goes down under this horizontal design approach. It is typical to fix this kind of issue in place, which increases the time it takes to restore application availability or redundancy.

This is not an ideal situation in today’s fast-paced, work-from-anywhere world that demands resiliency and zero downtime.

The Hierarchical Approach: A Better Way to Scale and Isolate

Today’s enterprises rely on software-defined networking and flexible paradigms that support business agility and resiliency. But we live in an imperfect world full of unpredictable events. Is the public cloud down? Do you have a switch failure? Spine switch failure? Or even worse, a whole cluster failure?

Now, imagine a fault-tolerant data center that automatically restores systems after a failure. This may sound like fiction, but with the right architecture it can be your reality today.

A fault-tolerant data center architecture can survive and provide redundancy across your data center landscapes. In other words, it provides the ultimate in business resiliency, making sure applications are always on, regardless of failure.

The architecture is designed with a multi-level, hierarchical controller cluster that delivers scalability, meets the availability needs of each fault domain, and creates intent-driven policies. This architecture involves several key components:

1. A multi-site orchestrator that pushes high-level policy to the local data center controller—also referred to as a domain controller—and delivers the separation of fault domain and the scale businesses require for global governance with resiliency and federation of data center network.

2. A data center controller/domain controller that operates both on-premises and in the cloud and creates intent-based policies, optimized for local domain requirements.

3. Physical switches with leaf-spine topology for deterministic performance and built-in availability.

4. SmartNIC and Virtual Switches that extend network connectivity and segmentation to the servers, further delivering an intent-driven, high-performing architecture that is closer to the workload.

Nexus Dashboard Orchestrator


Designing Hierarchical Clusters

Using a design composed of multiple data centers, network operations teams can provision and test policy and validate its impact on one data center prior to propagating it across their data centers. This helps to mitigate propagation of failures and unnecessary impact on business applications. Or, as we like to say, “keep the blast zone aligned with your application design.”

Using hierarchical clusters provides data center level redundancy. Cisco Application Centric Infrastructure (ACI) and the Cisco Nexus Dashboard Orchestrator enable IT to scale up to hundreds of data centers that are located on-premises or deployed across public clouds.

To support greater scale and resilience, most modern controllers use a concept known as data sharding for data stored in the controller. The basic theory behind sharding is that the data repository is split into several database units known as shards. Data stored in a shard is replicated three or more times, with each replica assigned to a separate compute instance.
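
A minimal sketch of this placement scheme (illustrative Python only; the actual placement logic of these controllers is not public): keys hash deterministically to shards, and each shard's replicas land on distinct cluster nodes.

```python
import hashlib

REPLICAS = 3   # the "three or more" copies per shard described above

def shard_of(key, num_shards):
    # stable shard assignment via hashing, a common approach
    digest = hashlib.sha256(key.encode()).hexdigest()
    return int(digest, 16) % num_shards

def replica_nodes(shard, nodes):
    # place each shard's replicas on distinct compute instances
    start = shard % len(nodes)
    return [nodes[(start + i) % len(nodes)] for i in range(REPLICAS)]

nodes = ["node-1", "node-2", "node-3", "node-4", "node-5"]
shard = shard_of("tenant-42/policy", num_shards=32)
print(replica_nodes(shard, nodes))   # three distinct nodes hold this shard
```

With three replicas spread across nodes, any single node failure leaves at least two copies of every shard, which is what lets the cluster survive node loss without losing controller state.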

Typically, network teams tend to focus on hardware redundancy to prevent:

1. Interface failures: Covered using redundant switches and dual attach of servers;

2. Spine switch failure: Covered using ECMP and/or multiple spines;

3. Supervisor, power supply, fan failures: Every component in the system has redundancy built into most of the systems; and

4. Controller cluster failure: Sharded and replicated, thereby covering multiple cluster node failure.

Network operations teams are used to designing multiple redundancies into the hardware infrastructure. But with software-defined everything, we need to make sure that policy and configuration objects are also designed in redundant ways.


BGP Policy

The right way to define intent is to split the network policy—either via Orchestrator or API—in a way that ensures changes are localized to a fault domain as shown by option A (POD level fault domain) or option B (Node level fault domain). Cisco’s Nexus Dashboard Orchestrator enables pre-change validation to show the impact of the change to the network operator before any change is committed.

In case of failure due to configuration changes, the Cisco Nexus Dashboard Orchestrator can roll back the changes and quickly restore the state of the data center to the previously known good state. Designing redundancy at every hardware and software layer enables NetOps to manage failures in a timely manner.

Source: cisco.com

Wednesday 7 April 2021

FlashStack Data Protection with Veeam: A New Cisco Validated Design


Delivering an optimal user experience for business-critical applications is a non-negotiable element for successful businesses. Architecting infrastructure that meets application and SLA requirements is vital to delivering the superior performance on which great user experiences rest. Today, this infrastructure is often built with the latest compute technology, high-performance flash storage arrays, and enterprise networking. Combining modern data protection and infrastructure is also key to availability, because pairing data protection with the right backup infrastructure can help an organization respond to its unique demands.

Unfortunately, deploying on-premises infrastructure can be complex, time consuming and costly. This is where converged infrastructure from FlashStack comes in. Built in partnership by Cisco and Pure Storage, FlashStack offers everything a modern infrastructure platform needs—more simplicity, more flexibility, and more speed. FlashStack delivers cloudlike experiences and economics to your data center through easy adoption, unified management, and fewer siloes.

FlashStack Data Protection with Veeam

Now, Cisco and Pure Storage have partnered with Veeam—a consistent Leader in the Gartner Magic Quadrant for Data Center Backup and Recovery Solutions—to build a Cisco Validated Design (CVD) that provides a complete set of data protection options for FlashStack. These options use Pure FlashArray//C, Cisco UCS C240 AFF Rack Server or UCS S3260 Storage Server, and Veeam software. FlashStack with Veeam Data Protection provides an end-to-end solution that includes backup and archive to on-premises and public clouds.

These new CVDs offer three target architectures, depending on restore requirements, backup throughput, storage efficiency and capacity, as shown in Figure 1.
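To make the trade-offs concrete, here is a toy Python helper that maps coarse backup requirements to one of the three target architectures. The decision criteria are a simplified reading of the options described in this post, not thresholds from the CVD.

```python
# Illustrative only: a toy helper mapping coarse backup requirements to one
# of the three CVD target architectures. The decision logic is a hypothetical
# simplification, not guidance from the CVD itself.
def pick_backup_target(need_dedupe: bool, need_dense_capacity: bool) -> str:
    """Return the FlashStack backup target matching coarse requirements."""
    if need_dedupe:
        # FlashArray//C adds inline dedupe/compression for storage efficiency
        return "FlashArray//C"
    if need_dense_capacity:
        # S3260 favors dense, scale-up capacity with high backup throughput
        return "UCS S3260"
    # C240 all-flash favors fast restores and high backup throughput
    return "UCS C240 AFF"
```

In practice the CVD weighs restore speed, throughput, efficiency, and capacity together; this sketch only shows that the three targets occupy distinct points in that space.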

FlashStack with Veeam data protection combines two solutions required to deliver optimal user experiences:

Converged infrastructure meets modern data protection


Figure 1. FlashStack backup environment with three potential backup targets

FlashStack provides pre-integrated, pre-validated converged infrastructure that combines compute, network and storage—as I mentioned earlier—into a platform designed for business-critical applications and a wide variety of workloads. This platform delivers maximum performance, increased flexibility, and rapid scalability. And it enables rapid, confident deployment as well as reducing the management overhead consumed by patches and updates.

Modern infrastructure also needs modern data protection, and Veeam’s data protection platform integrates backup and replication with advanced monitoring, analytics, and intelligent automation. This platform works with FlashStack to deliver the performance and features that help ensure your data and applications are available while also unlocking the power of backup data.

Depending on your requirements, you can choose from several infrastructure platforms on which to run your data protection environment:

FlashArray//C: fast restores with storage efficiency (dedupe and compression)

Veeam, with FlashArray//C from Pure Storage and Cisco UCS C220 M5 servers, delivers maximum flash-based performance that can handle multiple workloads, paired with Pure Storage data-efficiency features. This solution offers storage capacity without compromise, along with flash-based performance at close to disk economics. It targets multiple workloads and large-scale deployments such as:

◉ All-QLC flash storage for cost-effective, capacity-oriented workloads

◉ Advanced data services and technologies for guaranteed data efficiency

◉ Scale-up, scale-out architecture to meet the capacity expansion requirements of data-intensive workloads

◉ Non-disruptive, Evergreen architecture that eliminates risky, complex, and costly upgrades

C240 AFF: fast restores and high backup throughput

Veeam, with Cisco UCS C240 M5 all-flash storage servers, delivers the performance and flexibility needed to run and support virtually any workload, while meeting the requirements of a sophisticated data protection environment. It features:

◉ Architectural and compute flexibility

◉ Multiple workload capacity

◉ Best-in-class backup and restore performance

◉ Scale-out capability

S3260: Dense platform with optimal restores and high backup throughput

Veeam, with Cisco UCS S3260 M5 storage servers, delivers superior performance with massive scale-up capability and disk economics. This solution includes Cisco Intersight or UCS Manager to reduce cost of ownership, simplify management, and deliver consistent policy-based deployments and scalability.

This dense storage platform, combined with FlashStack and Veeam, offers massive storage capacity and high backup throughput for multiple workloads. You can run Veeam components such as the Proxy, Console, and Repository on a single compute and storage platform, with the ability to scale both compute and storage through Veeam scale-out backup repositories.

You can deploy a scale-out backup storage platform on a cluster of Cisco UCS S3260 storage servers, providing an S3 archive target for the Veeam cloud tier. This tier features a scale-out backup repository architecture, which makes it possible to move older backup files to more cost-effective cloud or on-premises object storage. Archiving backups in the cloud tier can result in up to 10X savings on long-term data retention costs and help you align with compliance requirements by storing data as long as needed.
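The cloud-tier idea above can be sketched as a simple age-based split: backups newer than the operational window stay on the performance tier, while older ones move to object storage. The function and the 30-day window below are our illustrative assumptions, not Veeam's actual policy engine.

```python
from datetime import date, timedelta

# Hypothetical sketch of cloud-tier placement: backups older than a move
# window leave the performance tier for object storage. The names and the
# default window are ours, not Veeam's API.
def split_by_tier(backups, today, move_after_days=30):
    """Partition (name, created) backups into performance vs. capacity tier."""
    cutoff = today - timedelta(days=move_after_days)
    perf = [b for b in backups if b[1] >= cutoff]       # stays on fast storage
    archive = [b for b in backups if b[1] < cutoff]     # moves to object store
    return perf, archive
```

The real scale-out backup repository applies such policies automatically; the point here is only that tier placement is a function of backup age against a configured window.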

Power and data protect your business-critical applications

Organizations are upgrading their infrastructure to accelerate innovation, increase agility, and reduce complexity while enabling rapid scalability. FlashStack brings the latest in compute, network, and storage components together in a single, pre-validated architecture that speeds time to deployment, reduces overall IT costs and deployment risk, and is tailored for specific workloads. If you’re an existing FlashStack customer or use other backup solutions, check out the following links to learn more about how this Cisco Validated Design can power and protect your applications and help you consistently deliver optimal user experiences for the applications that contribute to your success.

Tuesday 6 April 2021

Unlock the potential of Application Hosting on Catalyst Access Points – A use case overview

Application Hosting Overview

With the 17.3.1 IOS XE release, Cisco introduced the Application Hosting on Access Points feature. In conjunction with Cisco DNA Center support starting from version 2.1.1, Application Hosting on Cisco Catalyst 9100 Series Access Points enables developers to create and host applications as Docker-style container apps. The Cisco Catalyst Series Access Points, through their modular capability and IOx framework, provide the flexibility for third-party software and hardware integration.

Application Hosting Topology

Let’s look at how this feature solves common use cases and brings value to customers.

Solution Components


The key components of the solution are the Cisco DNA Center, the Catalyst 9800 Series WLC, and the Catalyst 9100 Series Access Points. Each component plays a specific role in the solution.


Use Cases


Application Hosting on Access Points enables you to develop your own IoT applications while leveraging the IoT capability offered by the Cisco Catalyst Series Access Points. Take a look at some of the common IoT use cases that can leverage Application Hosting:

◉ Retail Store using Electronic Shelf Labels (ESL) for dynamic price automation

◉ Asset monitoring and tracking in healthcare and manufacturing verticals

◉ Smart office monitoring for desk and room occupancy, temperature, humidity and air quality monitoring, window and door monitoring

◉ Building management system (BMS) automation to connect and manage door locks, smart thermostats, lights, and other IoT devices

The above use cases are just a microcosm of the different verticals where Application Hosting comes into play.

Retail Store IoT Use Case

For the purpose of this article, we will focus on one of the most common use cases for the Application Hosting feature on Access Points: Retail Store IoT using Electronic Shelf Labels (ESL).

What is Electronic Shelf Labeling?

An Electronic Shelf Label is a device that shows a product’s data and price information on its display. Unlike printed labels, the information is updated automatically when data such as the price or product details change. Besides the increased flexibility in price design, ESL helps simplify processes for store personnel and eliminates the need for manual price changes.
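The price-automation loop behind ESL can be sketched in a few lines: compare the product catalog against what each label currently shows and update only the stale ones. This is a toy model of the idea; real ESL systems such as SES-Imagotag's use their own management APIs.

```python
# Toy model of ESL price automation: when the product database changes,
# only labels whose displayed price differs receive an update.
# Purely illustrative; real ESL platforms use vendor-specific APIs.
def esl_updates(catalog_prices, label_state):
    """Return {label_id: new_price} for labels that are out of date.

    catalog_prices: {sku: current_price}
    label_state:    {label_id: (sku, price_currently_shown)}
    """
    return {
        label_id: catalog_prices[sku]
        for label_id, (sku, shown) in label_state.items()
        if sku in catalog_prices and catalog_prices[sku] != shown
    }
```

Computing the delta first keeps radio traffic to the battery-powered tags minimal, which is the same efficiency concern real ESL systems optimize for.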


This retail ESL use case is addressed by our full-stack Application Hosting on Access Points solution.


What is an ESL IoT Application?

◉ A third-party IOx application, developed by a Cisco partner and hosted on the Access Point, that communicates with ESL tags through an ESL-capable USB connector device.

How do we accomplish the ESL solution?

◉ ESL tags deployed throughout the store communicate with compatible ESL IoT applications deployed on the Cisco Access Points. The Cisco DNA Center manages the deployed ESL application on the Cisco Access Point and provides an organized end-to-end solution.

◉ The ESL tags are managed by a partner-developed ESL management system (on premises or in the cloud). For example, our partner SES-Imagotag manages its ESL tags via either an on-premises server or a cloud-based solution.

Next, let’s look at an overview of how an IOx application targeting the Retail IoT ESL use case gets deployed on the Access Point. For the sake of simplicity, we assume here that the customer already has the Cisco DNA Center appliance available and that the Access Points have already been discovered in the Cisco DNA Center. We work with the partner to help them develop their custom ESL IOx app, typically via the detailed instructions available on Cisco DevNet.

Solution Overview – How does it work?

The Application Hosting workflow runs on the Cisco DNA Center, which supports this solution starting with release 2.1.1.


Cisco users can follow the detailed steps in the following deployment guide to install and deploy a third-party IOx app that will be hosted on the Cisco Access Points. Once deployed, the application can be managed through the Cisco DNA Center.
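As a rough illustration of what such a deployment request might look like when driven through an API instead of the GUI, the sketch below builds a hypothetical app-hosting payload. The endpoint path and field names are assumptions for illustration only; consult the deployment guide for the actual Cisco DNA Center Intent API contract.

```python
# Hypothetical sketch: shaping an app-deployment request for Cisco DNA
# Center's app-hosting workflow. The URL and field names below are invented
# for illustration and do NOT reflect the real Intent API schema.
def build_deploy_request(app_name, app_version, ap_device_ids):
    """Assemble a deployment request targeting a set of Access Points."""
    return {
        "url": "/dna/intent/api/v1/app-hosting/deploy",  # hypothetical path
        "body": {
            "appName": app_name,
            "appVersion": app_version,
            "devices": [{"deviceId": d} for d in ap_device_ids],
        },
    }
```

The shape mirrors the GUI workflow: pick an app and version, then a set of discovered Access Points to host it on.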


Additional Use Cases

Another common use case that leverages the Application Hosting capabilities is the Building Management System (BMS). BMS can be used to connect and manage critical building infrastructure such as door locks, smart thermostats, lights, sensors and other IoT devices.


Using the Application Hosting framework, Cisco’s partners can create custom applications tailored to their BMS use cases. The custom device management software residing inside the application container on Catalyst Access Points communicates with the building management devices and allows facilities to be managed by a BMS application server. The process is inherently automatable, thus providing operational cost savings.
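The container-side logic can be pictured as a simple poll-and-forward loop: read each building device through the dongle, then relay the readings to the BMS server. The transport functions here are stubs we invented for illustration; a real app would use the vendor's dongle SDK and server API.

```python
# Illustrative skeleton of a containerized BMS app's polling cycle.
# read_device / send_to_bms are stand-ins for a vendor dongle SDK and a
# BMS server client, both hypothetical.
def poll_once(read_device, send_to_bms, device_ids):
    """Read each device and forward its reading; return what was sent."""
    sent = []
    for dev in device_ids:
        reading = read_device(dev)        # e.g. temperature, lock state
        if reading is not None:           # skip unreachable devices
            send_to_bms(dev, reading)
            sent.append((dev, reading))
    return sent
```

Because the loop is pure plumbing, it is easy to schedule, retry, and monitor, which is what makes the process inherently automatable.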

In a sample topology for the BMS IoT use case, a custom USB dongle attached to the Access Point communicates with the building management devices, which are managed by an external BMS IoT management system.


These examples clearly show that Application Hosting on Catalyst Access Points can enable many different use cases and bring tremendous value to our customers.

Customer Feedback


The Application Hosting on Access Points feature has been warmly received by Cisco’s partner ecosystem. Cisco initially partnered with SES-Imagotag and conducted an Early Field Trial (EFT) with REWE International, which is rolling out a containerized version of the SES-Imagotag ESL solution using the Application Hosting feature. The containerized application will enable REWE International to eliminate the need for an IoT overlay network, simplifying deployments, streamlining management, and saving time and money. The EFT was a resounding success, as evinced by the glowing feedback from Hans Vasters, Senior Network Architect, REWE International:

◉ “App hosting capabilities on the Cisco access points reduces deployment times by nearly 90 percent by eliminating the need to install additional hardware and bring in IT folks and electricians to set it all up.”

◉ “With App Hosting, we run everything through one system, and Cisco DNA Center enables us to push out the application, make changes and updates, and manage the application across all our stores seamlessly. Our technicians don’t have to invest time onsite to maintain a separate infrastructure. It can all be done remotely.”

◉ “Installation is very easy, it’s just a few clicks. Cisco DNA Center also lets me see when the app is up and running, gives me the status of all access points, lets me know if the application was distributed successfully, and if the container is up and running. That’s a huge advantage because if I think about the effort to distribute software to the stores, Application Hosting makes it quite easy.”

Application Hosting on the Cisco Catalyst 9100 Access Points enables Cisco to extend capabilities of the platform and provide convergence of Wi-Fi and IoT on a single network. Multiple partners have signed up and are on their way to developing their own custom IOx Apps. We are only getting started with the Application Hosting on the Access Point journey!

Monday 5 April 2021

Intersight Kubernetes Service (IKS) Now Available!


We announced the Tech Preview of Intersight Kubernetes Service (IKS), which received tremendous interest. Over 50 internal sales teams, partners, and customers participated and provided valuable recommendations and great validation for our offering and strategic direction. Today we are pleased to announce the general availability of IKS!

Read More: SaaS-based Kubernetes lifecycle management: an introduction to Intersight Kubernetes Service

Intersight Kubernetes Service’s goal is to accelerate our customers’ container initiatives by simplifying the management effort for Kubernetes clusters across the full infrastructure stack and expanding the application operations toolkit. IKS provides flexibility and choice of infrastructure (on-prem, multi-hypervisor, bare metal, public cloud) so that our customers can focus on running and monetizing business-critical applications in production, without having to worry about the challenges of open source or the mechanics of managing, operating, and correlating each layer of the infrastructure stack.

With Cisco Intersight it can be easy

For IT admins and infrastructure operators IKS means an easy – almost hands-off – secure deployment and comprehensive lifecycle management of 100% open source Kubernetes (K8s) clusters and add-ons, with full-stack visibility from the on-prem server firmware and management up to the K8s application. Initially, ESXi targets will be supported, with bare metal and public cloud integrations coming soon, along with many other features, such as adopted clusters, multi-cluster and vGPU support.

For DevOps teams, IKS is much more than just a target to deploy K8s-based applications. As a native service of the Intersight platform, DevOps engineers can now benefit from the recently announced HashiCorp partnership and the brand-new Intersight Service for HashiCorp Terraform, deploying their applications using Infrastructure as Code (IaC) and Terraform. They can also benefit from the native Intersight Workload Optimizer functionality, which provides complete mapping of interdependencies between K8s apps and infrastructure, plus AIOps-powered right-sizing (based on historical utilization of resources) and auto-scaling.

Let’s take a look at what IKS enables in a bit more detail:

A common platform for full-stack infrastructure and K8s management


The modern challenge for IT admins and infrastructure teams is navigating a hyper-distributed, extremely diverse IT landscape: hybrid cloud infrastructure with on-premises locations (data centers, edge, co-lo) and multiple clouds, heterogeneous stacks and workload requirements (bare metal, virtual machines, containers, serverless), and the need for speed to cater to internal customers (DevOps, SecOps, other IT and LoB users) and, ultimately, end users!

The only way to address this complexity is to simplify with a unified, consistent cloud operating model and real-time automation to balance risk, cost, and control. This is where Cisco Intersight comes in. Cisco Intersight is a common platform for intelligent visualization, optimization, and orchestration for applications and infrastructure (including K8s clusters and apps). It enables teams to automate and simplify operations, use full-stack observability to continuously optimize their environment, and work better and faster with DevOps teams for cloud-native service delivery.

Intersight – The world’s simplest hybrid cloud platform

With IKS and other Intersight services, IT admins can easily build an entire K8s environment from server firmware management, to the hyperconverged layer, to deploying clusters in a few clicks via the GUI or directly using the APIs – and now with Terraform code! In addition, Intersight provides common identity management (SSO, API security), RBAC (two new roles for K8s admins and K8s operators) and multi-tenancy (server/hyperconverged/K8s layers) to support customers looking for a secure, isolated, managed and multi-tenant K8s platform.

Regular IKS releases ensure that IT admins can effortlessly keep K8s versions, add-on versions, and security fixes up to date on their clusters. We curate, harden for security, and manage essential and optional add-ons (CNI, CSI, L4 and L7 load balancers, the K8s dashboard, Kubeflow, monitoring, etc.) to provide production-grade tools to our customers. These IKS features allow customers to deploy and consume secure, consistent, and reliable open-source K8s integrations without becoming CNCF landscape experts, while maintaining the flexibility to port any other open-source components.

Continuous Delivery for Kubernetes clusters and apps


IKS supports multiple options for integrating Kubernetes resources into customers’ continuous delivery pipelines, saving precious time and effort in configuration and development. Users can use the OpenAPI, the Python SDK, or the Intersight Terraform provider. This makes it easy to integrate IKS with customers’ existing Infrastructure as Code (IaC) strategies.
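As a hedged sketch of the SDK/OpenAPI route, the snippet below shapes a minimal cluster-profile document the way an IaC pipeline might before submitting it to Intersight. The field names are illustrative assumptions, not the real Intersight schema.

```python
# Hypothetical sketch: assembling a Kubernetes cluster profile as plain data
# before handing it to the Intersight OpenAPI/SDK. Field names are invented
# for illustration; consult the Intersight API reference for the real schema.
def cluster_profile(name, k8s_version, master_count, worker_count):
    """Build an illustrative cluster-profile document."""
    return {
        "Name": name,
        "KubernetesVersion": k8s_version,
        "NodeGroups": [
            {"Role": "master", "Count": master_count},
            {"Role": "worker", "Count": worker_count},
        ],
    }
```

Keeping the profile as declarative data is what makes the same definition usable from a Python script, an OpenAPI call, or a Terraform plan.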

In addition, the Cisco Intersight Service for HashiCorp Terraform (IST) now makes it even simpler for customers to securely integrate their on-prem environments and resources with their IaC plans, a result of our partnership with HashiCorp.

For many, however, the preferred way is to continuously deploy application Helm charts to the clusters. To address this requirement, we will soon be adding a Continuous Delivery toolkit for Helm charts, equipping customers with yet another mechanism to deploy and manage their applications on their K8s platform.

Full-stack app visualization, AIOps rightsizing and intelligent top-down auto-scaling


Another important Intersight native service that IKS benefits from is Intersight Workload Optimizer (IWO). By installing the IWO agent helm chart on IKS tenant clusters, customers benefit from a comprehensive observability and automation toolkit for their K8s platforms, freeing them to focus on what matters: onboarding application teams and increasing K8s adoption.

Today, IWO with IKS works in three ways:

◉ First, with IWO, customers can gain insights with interdependency mapping between K8s apps across virtual machines, servers, storage and networks, for simplified, automated troubleshooting and monitoring.

◉ Second, IWO allows DevOps teams to right-size K8s applications without the labor of manually poring over real-time traffic data patterns against configured limits, requests, or namespace quota constraints to identify the optimal CPU and memory thresholds for the horizontal and vertical pod auto-scalers. Instead, IWO automatically detects thresholds based on user-configured policies.

◉ Finally, IWO enables intelligent, top-down auto-scaling – from the K8s app, to the cluster, to the infrastructure layer. Typically, DevOps teams use the Kubernetes default scheduler to handle fluctuating demand for their applications. While this works for initial pod placement, it doesn’t help during the lifecycle of the pod, when actions might need to be taken due to node congestion or low traffic demand. IWO automatically and continuously redistributes IKS workloads and pods to mitigate node congestion or optimize underutilized infrastructure. This results in better scaling decisions.
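The right-sizing idea in the second bullet can be illustrated with a percentile-plus-headroom policy: recommend a resource request from historical utilization samples rather than hand-tuned limits. The 95th-percentile and 20% headroom figures below are our assumptions, not IWO's actual algorithm.

```python
# Illustrative right-sizing in the spirit of IWO: derive a CPU/memory
# request from a high percentile of historical utilization plus headroom.
# The percentile/headroom policy is a hypothetical example, not IWO's.
def rightsize(samples, percentile=0.95, headroom=1.2):
    """Recommend a resource request from historical utilization samples."""
    ordered = sorted(samples)
    # index of the chosen percentile, clamped to the last sample
    idx = min(len(ordered) - 1, int(percentile * len(ordered)))
    return round(ordered[idx] * headroom, 2)
```

A policy like this replaces manual threshold tuning: rare spikes above the chosen percentile are absorbed by the headroom instead of inflating the steady-state request.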

Source: cisco.com