Thursday, 17 February 2022

Cisco MDS 64G SAN Analytics: Architecture evolution


Cisco recently announced software availability of NX-OS 9.2(2) with support for SAN Analytics on the Cisco MDS 9700 Series switches with 64G Modules. This software release begins the next phase in the architecture evolution of SAN Analytics.

In this blog, we compare the SAN Analytics architecture of the Cisco MDS 32G and 64G platforms at a high level and look at some of the new innovations in Cisco MDS 64G SAN Analytics.

But first, let’s cover methodologies used for performance monitoring. Utilization, Saturation and Errors (USE) is a generic methodology for effective performance monitoring of any system. The USE metrics identify performance bottlenecks of a system. In the context of a storage system, we can add Latency as an additional element to the USE methodology to create LUSE. Full visibility into the LUSE metrics of a storage infrastructure is critical for performance monitoring and troubleshooting.
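
To make the LUSE framework concrete, below is a minimal illustrative sketch (not Cisco code) of how per-port or per-flow counters could be bucketed into Latency, Utilization, Saturation, and Error categories; the metric names and sample values are assumptions chosen only for illustration.

```python
# Illustrative sketch only: bucket storage I/O metrics into the LUSE categories.
# The metric names and sample values are hypothetical, not actual SAN Analytics fields.

LUSE_CATEGORIES = {
    "latency":     ["read_io_completion_time_us", "write_io_completion_time_us"],
    "utilization": ["read_throughput_mbps", "write_throughput_mbps", "iops"],
    "saturation":  ["outstanding_io_count", "queue_full_events"],
    "errors":      ["aborts", "timeouts", "crc_errors"],
}

def classify(metrics: dict) -> dict:
    """Group a flat metric dictionary into the four LUSE buckets."""
    buckets = {category: {} for category in LUSE_CATEGORIES}
    for category, names in LUSE_CATEGORIES.items():
        for name in names:
            if name in metrics:
                buckets[category][name] = metrics[name]
    return buckets

sample = {"read_io_completion_time_us": 850, "iops": 12000, "aborts": 2}
print(classify(sample))
```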

SAN Analytics and SAN Insights have been advanced features of the Cisco MDS 32G switches since NX-OS 8.3(2):

◉ SAN Analytics is an advanced feature of Cisco MDS switches that collects storage I/O metrics from the switches, independent of host and storage systems. Over 70 metrics are collected per port and per flow (ITL/ITN) and streamed out. These metrics can be classified into one of the ‘LUSE’ categories.

◉ SAN Insights is a capability of Cisco Nexus Dashboard Fabric Controller (formerly DCNM) SAN that receives the metrics stream from SAN Analytics. It provides visualization and analysis of fabric-wide I/O metrics using the ‘LUSE’ framework.

Cisco MDS 32G SAN Analytics

Access Control Lists (ACLs) enforce access control on every frame switched by the ASIC. The ACLs are matched by extracting certain fields from the frame header; on a match, the action corresponding to the entry is taken. On an F-port, FC Hard Zoning entries are programmed as ACLs in the ingress direction, based on the zoning configuration, to match on the frame SID and DID with an action to “forward” the frame to its destination.

On Cisco MDS 32G switches, the I/O metrics are computed by capturing FC frame headers in the data path using an ACL-based ‘Tap’ programmed in the ASIC in the ingress and egress directions of analytics-enabled ports. These Tap ACLs match on the frames of interest for analytics, namely CMD_IU, the first DATA_IU, XRDY_IU, RSP_IU, and ABTS. A copy of any frame matching a Tap ACL is forwarded to an on-board NPU connected to the 32G ASIC.
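
As a rough illustration of what the Tap selects (not the ASIC implementation), the sketch below decides whether a copy of a frame would be sent to the NPU; the frame representation and field names are assumptions.

```python
# Simplified sketch of the analytics Tap decision: only the frame types listed
# above (command, first data, transfer-ready, response, abort) are copied to the
# NPU for metric computation. The frame fields are hypothetical, for illustration.

FRAMES_OF_INTEREST = {"CMD_IU", "XRDY_IU", "RSP_IU", "ABTS"}

def tap_should_copy(frame: dict) -> bool:
    iu_type = frame["iu_type"]
    if iu_type in FRAMES_OF_INTEREST:
        return True
    # Only the first DATA_IU of a sequence is needed to timestamp first data.
    if iu_type == "DATA_IU" and frame.get("relative_offset", 0) == 0:
        return True
    return False

print(tap_should_copy({"iu_type": "CMD_IU"}))                            # True
print(tap_should_copy({"iu_type": "DATA_IU", "relative_offset": 0}))     # True
print(tap_should_copy({"iu_type": "DATA_IU", "relative_offset": 2048}))  # False
```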

When SAN analytics is enabled on a port, the ACLs are programmed depending on the port type and direction as shown in Figure 1 below:

◉ F_Port Ingress: Analytics Tap ACLs + Zoning ACLs

◉ F_Port Egress, E_Port Ingress, E_Port Egress: Analytics Tap ACLs only

Figure 1: Port Analytics Tap and Zoning
 
The Cisco MDS 32G NPU software Analytics Engine can be modified to accommodate custom metrics (e.g., NVMe Flush command metrics) or future storage command sets (e.g., NVMe-KV) once the required ACL Taps are in place.

Cisco MDS 64G SAN Analytics


The Analytics Engine moves into the ASIC on Cisco MDS 64G switches, giving it hardware acceleration. The Cisco MDS 64G Module has two 64G ASICs, and each ASIC has six hardware Analytics Engines (one for every four ports). These Analytics Engines can compute I/O metrics at line rate on all ports simultaneously, with the capacity to analyze upwards of 1 billion IOPS per Module. The hardware Analytics Engines have built-in Taps and do not need ACL-based Taps to be programmed.

The metrics computed by the hardware Analytics Engines are stored in a database inside the ASIC and periodically flushed to the NPU. The NPU runs a lightweight software process on top of DPDK (an open-source, highly efficient and fast packet-processing framework) that collects and accumulates the metrics pushed periodically from the hardware Analytics Engine. Even though the NPU does not run an Analytics Engine, it maintains the persistent per-flow metrics database and remains a critical element of the solution. The shipping of metrics from the NPU database to the Supervisor is identical to the Cisco MDS 32G architecture. The Cisco MDS 64G hardware Analytics Engine does not preclude an NPU software Analytics Engine from being enabled in a future software release for flexibility and programmability benefits.

A comparison of the Cisco MDS 32G and MDS 64G architectures is shown in Figure 2 below:

Figure 2: Cisco MDS 32G and MDS 64G SAN Architectures

The Cisco MDS 64G hardware Analytics Engine computes some additional metrics for deeper I/O visibility:

◉ Multi-sequence write I/Os are large writes involving multiple XRDY sequences. The write exchange completion time for these writes includes delays introduced by the host (Rx XRDYn to Tx first DATAn+1) and the storage (Rx last DATAn-1 to Tx XRDYn). These metrics provide better analysis and more accurate pinpointing of large-write performance issues (see the worked sketch after this list). The Analytics Engine separately tracks:
    ◉ Avg/Min/Max host write delay
    ◉ Avg/Min/Max storage write delay
◉ The total busy time metric tracks the total time there was at least one outstanding I/O per flow. This metric helps characterize the ‘busyness’ of a flow relative to other flows.
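
Here is a worked, hypothetical sketch of how the host and storage write delays and the exchange completion time could be derived from tapped frame timestamps. The event names and microsecond values are made up for illustration and do not reflect the ASIC’s internal implementation.

```python
# Illustrative only: derive host and storage write delays for a multi-sequence
# write from per-frame timestamps. Timestamps (microseconds) are hypothetical.

events = [
    (0,   "CMD_IU"),      # write command from the host
    (40,  "XFER_RDY"),    # storage grants the first burst
    (55,  "FIRST_DATA"),  # host starts sending data  -> host delay = 55 - 40
    (300, "LAST_DATA"),   # host finishes the burst
    (380, "XFER_RDY"),    # storage grants the next burst -> storage delay = 380 - 300
    (395, "FIRST_DATA"),  # host delay = 395 - 380
    (650, "LAST_DATA"),
    (700, "RSP_IU"),      # status from storage; exchange complete
]

host_delays, storage_delays = [], []
last_xfer_rdy = last_data = None
for ts, event in events:
    if event == "XFER_RDY":
        if last_data is not None:                   # Rx last DATA(n-1) to Tx XRDY(n)
            storage_delays.append(ts - last_data)
        last_xfer_rdy = ts
    elif event == "FIRST_DATA" and last_xfer_rdy is not None:
        host_delays.append(ts - last_xfer_rdy)      # Rx XRDY(n) to Tx first DATA(n+1)
        last_xfer_rdy = None
    elif event == "LAST_DATA":
        last_data = ts

print("host write delays (us):", host_delays)        # [15, 15]
print("storage write delays (us):", storage_delays)  # [80]
print("exchange completion time (us):", events[-1][0] - events[0][0])
```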

The hardware Analytics Engine tracks SCSI and NVMe I/O metrics at ITL/ITN granularity by default. However, it can also be programmed to track metrics at other flow granularities: IT, ITL-VMID, ITN-NVMeConnectionID, or ITN-NVMeConnectionID-VMID. This gives flexibility in choosing the granularity of metrics and I/O visibility.

The 1GbE analytics port on the Cisco MDS 64G Module can stream the per-flow metrics directly (without involvement of the Supervisor) in an ASIC-native or standard gPB/gRPC format. This can serve future use cases that need visibility into micro telemetry events and therefore require high-frequency telemetry streaming.
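
For a sense of what a high-frequency collector on the receiving end of such a stream might look like, here is a hedged Python sketch. The protobuf modules and RPC names (analytics_pb2, analytics_pb2_grpc, FlowMetricsStub, Subscribe) are hypothetical placeholders, not a published Cisco schema.

```python
# Hedged sketch of a collector for a gRPC/protobuf metrics stream.
# analytics_pb2 / analytics_pb2_grpc and the RPC names below are hypothetical
# placeholders for whatever schema the actual stream uses.
import grpc
import analytics_pb2
import analytics_pb2_grpc

def collect(endpoint: str = "192.0.2.10:50051") -> None:
    with grpc.insecure_channel(endpoint) as channel:
        stub = analytics_pb2_grpc.FlowMetricsStub(channel)
        # Server-streaming RPC: one message per flow per export interval.
        for update in stub.Subscribe(analytics_pb2.SubscribeRequest()):
            print(update.flow_id, update.read_iops, update.write_iops)

if __name__ == "__main__":
    collect()
```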

Source: cisco.com

Tuesday, 15 February 2022

The SASE story: How SASE came to be, and why it has quickly become the default architecture


Secure Access Service Edge (SASE) has quickly become one of the hottest topics related to cloud, networking, and security architectures. As Cisco engineers, we have seen hesitation and confusion among some customers on what SASE really means. We hope to answer most of those questions here.

What is SASE, and how is it related to the Cloud Edge, Zero Trust, and SD-WAN? SASE has positively impacted how we run our IT organization, and how we envision Enterprise IT customers will run theirs. To accurately explain what SASE is, and why SASE came to be, we must look at the evolution of how data is stored and transported within an enterprise.

Our journey started inside the data center

A decade ago, many of us lived in a data center-centric world, and security was simpler to implement. Here at Cisco, we were moving data inside the four walls of our data centers, and we assumed complete trust. The corporate office, the MPLS circuits between sites, and the Cisco data centers were all within a trusted environment, which enabled us to meet our security and compliance requirements.


Move to hybrid cloud and hybrid work


However, while many enterprises still focus on data center-centric applications for their core business needs, the world is shifting towards cloud-based application development. This enables faster and more efficient deployment of software and services to meet ever-changing business needs.

IT organizations have also shifted from a model of only managed devices (PC or laptop) for use within the trusted corporate network to allowing users to work on multiple devices from just about anywhere. The emergence of BYOD (Bring Your Own Device) as well as remote work had already been gaining traction in the industry over the past few years, and this trend significantly accelerated with the onset of the COVID-19 pandemic. Now, employees are expected to be able to work from anywhere, on any device. Combined with the distribution of resources across on-prem networks and the cloud, Hybrid Work presents a significant security problem, as business users and application providers are no longer fully controlled by the IT organization.

To address security concerns in the interim, network architects designed a model where all user/cloud interactions were routed back, or backhauled, through a data center — i.e. the trusted entity — prior to being redirected to the cloud application. While meeting the security needs, this model has performance and cost challenges.

Arriving at SASE


To improve security and efficiency, a SASE-like architecture was developed internally by Cisco IT. The model we used for the architecture provides every user with a security profile tailored to their access privileges and uses a Zero-Trust approach to identify and authenticate users and devices before allowing a direct connection between the cloud and the access edge.

Ultimately, SASE is the convergence of networking and security functions in the cloud to deliver reliable, secure access to applications, anywhere users work. The Cisco SASE model works by combining SD-WAN for network, with cloud-based security capabilities such as Secure Web Gateway, Firewall as a Service, Cloud Access Security Broker, and Zero Trust Network Access into one, single, integrated cloud service.

CloudPort and the evolution of SASE at Cisco


Cisco’s SASE journey started with CloudPort, a hardware-based, on-prem, self-managed Cloud Edge platform delivered at colocation data centers around the world. While CloudPort provided a single platform that delivered networking and security, it also brought cost challenges, relied on traditional perimeter security, and demanded both the agility to scale up and down and specialized skill sets.

To address these challenges, we first modernized the on-prem CloudPort solution and put in motion a plan to move from on-prem to as-a-service (hosted) SASE capabilities. The Customer Zero team, which deploys emerging technology in real-life environments to provide critical feedback to the business unit early in the product lifecycle, created a strategy to move to SASE, testing both do-it-yourself and as-a-service models. The findings from the Customer Zero internal testing have guided our external offering strategy.

During this testing period, Cisco IT has moved from a ‘do-it-yourself’ model to a Cisco hosted/managed solution.

Source: cisco.com

Sunday, 13 February 2022

Moving Towards a Culture of Systemic Software Quality at Cisco


When software development involves many developers and components, the tools and techniques that are used to maintain software quality need to evolve beyond simply code and test. With bugs still making it into releases, we clearly do not have a foolproof process. So, what will it take to enhance software quality from development to release?

Here are key considerations that go into maintaining software quality.

Beyond Unit Testing

Bugs, or software defects, are a regular part of software engineering. For smaller projects, it is enough to write the code, put it through some tests, fix any bugs resulting from the tests, and then declare it done. If you are a fan of Test-Driven Development (TDD), you can do the reverse, where you write the tests first and then write the code to pass the tests.

Both approaches are unit test approaches and can be used to validate that the unit under test performs the function that it was designed to do. Furthermore, if you archive the tests, you have the beginning of a set of regression tests that will allow the developer to validate that any changes made to the unit still allow the unit to function as originally designed.
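
As a toy illustration of that test-first and regression flow (using pytest, and not taken from any Cisco code base), the tests below could be written before parse_vlan_id() exists and then kept as a regression suite afterwards:

```python
# Toy test-first example (pytest): write the tests, then write the code that
# makes them pass, and keep the tests around as a regression suite.
import pytest

def parse_vlan_id(text: str) -> int:
    """Parse a VLAN ID and enforce the valid 1-4094 range."""
    vlan = int(text)
    if not 1 <= vlan <= 4094:
        raise ValueError(f"VLAN {vlan} is out of range")
    return vlan

def test_parse_vlan_id_accepts_valid_values():
    assert parse_vlan_id("100") == 100

def test_parse_vlan_id_rejects_out_of_range_values():
    with pytest.raises(ValueError):
        parse_vlan_id("5000")
```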

The development of a strong unit-testing framework is one of the foundations of software quality but this, alone, is not enough to ensure software quality. This type of testing assumes that if the units are working fine, then the sum of the units is working fine. The other issue is that as the number of software units grows, maintaining and running the increased number of tests—that can grow to thousands—becomes an onerous chore.

Tests of Tests

Taking testing to the next level, unit tests give way to feature and solution tests. These tests start with a functioning system and then exercise its interfaces from the perspective of an end operator. Configuration changes, different packets, different connecting systems, topologies, and other elements are exercised using automated tests that try to ensure that the software works as intended. These tests do a good job of ensuring that what has been tested works, but the runtime and the resources involved can be staggering. It is not uncommon to have to book test runs six months in advance, and a run can take a week or two to complete.

Code Analysis

Another aspect of software quality is the software itself. From the bottom up, the code needs to be well written to reduce software defects. Beginning with the assumption that the developer knows what they are doing, the code is inspected both by other developers in code reviews and by automated tools via static analysis. Both are important, but they often suffer from a lack of context. Static analysis tools can only identify objective problems with the code. They raise the bar by eliminating language and coding errors, but semantic and contextual details are required to ensure quality.

Code reviews by other developers are invaluable and catch lots of issues. But of all the quality review techniques in use, they vary the most in effectiveness. A good reviewer can dig through issues, interactions, and problems that automated tools and testing don’t find. But a reviewer who is unfamiliar with the code can do little more than check the style guidelines.

Designing for Quality Software

Creating quality code is not always just about translating functional ideas into code. Some quality defects, though avoidable in perfectly written code, are common enough to be a recognized fact of life in certain environments. For example, when writing in C, there is no automatic memory management, so memory leaks are prevalent in the code. Other programming languages have automatic garbage collection, where leaks that would show up as memory exhaustion are not an issue.

There are two general approaches to designing quality into software.

The first approach is the more traditional route where explicit software constructs are introduced, and the software is migrated to use them. Introducing standard libraries for common functionality is an obvious approach, but this can be very extensive with entire frameworks being developed to corral the application code to only focus on what is core to its functionality. Another twist on this is using code rewrite tools that will migrate existing applications to new infrastructure.

The second approach is something that the Cisco IOS XE development team has been experimenting with for the past five years: inserting structural changes underneath the application code without any changes to the code itself. This means instrumenting the common point that all the code must pass through, the compiler, to add the infrastructure changes across the entire code base. The benefit is that a large amount of code can be moved to a different runtime. The downside is that the application code often has no awareness of the runtime underneath it, which can lead to some surprising behaviors. Since these are compiler-instrumented changes, the surprises generally involve the assembler code not matching the C code.

Quality Framework

All these different quality measures amount to a process that is somewhat like the Swiss cheese model of quality (Figure 1). Only when all layers have failed does an issue get through to the field.

Figure 1. The Swiss Cheese Model of Software Problem Visibility

The process has accidentally evolved into this, and there are continual improvements to be made to the system. Additional layers need to be added to ensure quality from different perspectives. Efficiency between the test layers also needs to be improved so that the same tests are not being run in multiple layers. Finally, engineers need to be aware of the interplay between the layers so that they can accurately diagnose and fix issues.

The process by which quality software is delivered to the market continues to evolve. By structuring the process to cover a diverse range of activities — from unit, feature, and solution testing to code reviews by humans, static analysis tools, and quality design frameworks — Cisco IOS XE developers can deliver software that can reliably run enterprise networks around the world.

Source: cisco.com

Saturday, 12 February 2022

300-810 CLICA: Pass Cisco CCNP Collaboration Exam in First Attempt

 

Cisco CLICA Exam Description:

The Implementing Cisco Collaboration Applications v1.0 (CLICA 300-810) exam is a 90-minute exam associated with the CCNP Collaboration and Cisco Certified Specialist - Collaboration Applications Implementation certifications. This exam tests a candidate's knowledge of collaboration applications, including single sign-on, Cisco Unified IM and Presence, Cisco Unity Connection, Cisco Unity Express, and application clients. The course, Implementing Cisco Collaboration Applications, helps candidates to prepare for this exam.

Cisco 300-810 Exam Overview:

Related Articles:

  1. Cisco 300-810 CLICA Practice Tests- A Smart Way of Preparation
  2. Grab Chance to Boost Your Career with Cisco 300-810 CLICA Exam with Practice Test

“Powering Hybrid Work” in Financial Services


The question that I get asked most often by financial services CXOs is: “How do we move beyond just ‘supporting’ Hybrid Work to ‘powering’ Hybrid Work with the right technology stack, so that we can address the challenges of attracting and engaging an evolving workforce and keep the organization moving forward in an agile and sustainable way?”

Throughout the pandemic, financial services firms have prioritized the health and safety of their employees by implementing hybrid work whilst abiding by guidelines and regulations. However, not everyone has had success with their “hybrid work” deployments. Those that have got it right, to some extent, are realizing the benefits.

A large number of financial services firms have struggled to implement “an optimum workable hybrid work model”. The challenge is that they have tried to retrofit “remote work implementations” with technology upgrades and add-ons as guided by their many different technology partners.

Hybrid Work, in the context of financial services, can be defined as an employee-centric, business-transformative approach that designs the work experience around and for the employee, wherever they are. It empowers employees to work onsite or offsite and move between locations, with uniform access to all the business tools and resources in a highly secure, compliant, and efficient manner, thus promoting inclusiveness, engagement, and well-being for all employees while driving employee performance, business productivity, and talent retention.

While a future-proofed technology stack is a critical pillar of the hybrid work model, getting Hybrid Work to work also requires reimagining current and emerging operating models and optimizing them so that employee engagement, experience, and well-being are enhanced while financial services delivery keeps getting better, with more delighted customers.

Financial services firms that reimagine and transform their operating models to support the hybrid work model have the first-mover advantage of becoming fully resilient businesses, ready to weather any storm.

A “Hybrid Work Powered” operating model for financial services firms should, at the least, have the following five characteristics:

1. INCLUSIVE – offering equal experiences for everyone. Enables firms to provide a work environment where every employee can participate fully and be seen and heard equally.

2. FLEXIBLE – adapting to any work style, role, and environment. Enables employees spread across different office locations, workplace types (home, etc.), time zones, and even countries, working at different hours, to have access to flexible tools that can address their different needs while adapting to their work styles, roles, and devices.

3. SUPPORTIVE – focusing on safety, empathy, and well-being. Enables firms to promote a supportive mindset throughout every level of the organization, ensuring that employees are comfortable with the ways of working and feel safe, secure, supported, included, and cared for.

4. SECURE – being secure by design, private by default.  Enables employees to have worry-free access to reliable and secure connectivity and secure app experiences thus ensuring all team members can work and collaborate with confidence anywhere they choose to work and have consistent, uninterrupted access to the required applications.

5. MANAGED – delivering modern infrastructure, frictionless administration. Enables IT teams to operate and manage the complex and dynamic hybrid work environment, using an approach known as full-stack observability which delivers optimized user experiences and enhanced enterprise technology management.

To get “hybrid work to work”, financial services firms need to reimagine/transform their operating models to deliver the key characteristics mentioned earlier and not just depend on “retrofitting” their existing IT stacks with hybrid work enabled “siloed” products.

Investing in a “future-proofed hybrid work technology stack” such as Cisco’s “secure-by-design*” Hybrid Work Solution Technology Stack enables financial services firms to reimagine and transform their operating model, moving past “supporting” Hybrid Work to “powering” it in a highly secure and compliant manner. It empowers workers to work from anywhere, at home or in the office, while also providing a positive outcome for every business sponsor and stakeholder (HR, Facilities, IT, etc.) involved in defining and implementing the firm’s hybrid work strategy.

Source: cisco.com

Thursday, 10 February 2022

Continuous value delivered with new Cisco SD-WAN innovations

IT teams need agile delivery to keep pace with business demands. Today, enterprises are in the process of transitioning to a hybrid workforce, another rapid pivot that requires agile delivery. It’s essential to adapt to the new paradigm and, in doing so, seek to minimize costs while still improving productivity, security, and the user experience. Cisco software platforms, like Cisco SD-WAN, provide continuous value with new capabilities enabled in software.

Our latest Enterprise Networking release helps with this transition to hybrid work and delivers value through innovations that provide greater integration, which can reduce OpEx and CapEx spending and simplify operations. See the details below on the new features in this release that help your IT team increase business agility and deliver more value for your organization.

First Cloud OnRamp for SaaS to optimize Webex experience

To improve and enhance the user experience for organizations, in our latest release (17.7) we are announcing Cisco SD-WAN Cloud OnRamp for SaaS integration with Webex. Cisco SD-WAN is the first solution to provide this level of integration and automation.

Cisco enables users to optimize Webex connectivity and performance when using Cisco SD-WAN. It does this by continuously monitoring all possible paths to Webex, and intelligently routing cloud application traffic to the best performing path, providing a fast, secure, and reliable end-user experience – and without human intervention.

The ultimate value for the users is that Cloud OnRamp for SaaS delivers path optimization and policy automation for Webex, so enterprises will be able to deliver a better application experience for their customers and employees.

Simplify CUBE functionality embedded in routers with Cisco SD-WAN

The new release enables native Cisco Unified Border Element (CUBE) support on Cisco enterprise routing platforms. CUBE is an enterprise-class Session Border Controller (SBC) performing critical voice routing, security, interworking, and session management functions. Supported platforms include the ISR 4000, ISR 1100, and Catalyst 8200, as well as ASR models.

The integration of this functionality into Cisco SD-WAN empowers customers to leverage the edge platforms to route collaboration application traffic between SD-WAN enabled nodes either within an enterprise (for on-prem deployments) or private / public cloud-based solutions. Customers can enable SBC functionality on their existing SD-WAN platforms allowing them to consolidate capabilities into a single platform, eliminating the need for an additional appliance. This integration reduces the number of platforms to purchase, license, power and manage; simplifies network architecture; and lowers costs and complexity.

Ease operations with vManage Enhanced UX for Network Monitoring

Cisco is introducing enhanced vManage UX capabilities that enable IT managers and network operators to centrally automate the entire SD-WAN fabric, all in a highly visualized and intuitive user experience.


vManage is the single centralized dashboard for Cisco SD-WAN, addressing traditional challenges associated with device configuration, network management, and network monitoring with automation. It offers a highly visualized and intuitive user interface that simplifies and expedites network management and monitoring of SaaS, IaaS, and security for network operators.

vManage offers the following advantages:

◉ Intuitive user interface for easy consumption.

◉ Highly visualized network monitoring.

◉ Pre-configured templates automate and expedite the deployment of most common use cases.

◉ Guided step-by-step configuration designed to intelligently expedite onboarding of new devices.

◉ Expedited Cisco ThousandEyes agent deployment for enhanced visibility into internet, cloud, and SaaS.

◉ Migration path to a SASE architecture with Cisco Umbrella.

Greater reliability and resiliency with Cisco Integrated Services Router 1131 


Cisco Integrated Services Router 1131 with WiFi-6 and 5G pluggable interface module

Cisco is introducing the next iteration of the Cisco Integrated Services Router (ISR), optimized for cloud connectivity with built-in Wi-Fi 6 and pluggable 5G support for enhanced connectivity.

Built-in Wi-Fi 6 adds additional flexibility and scalability to existing networks, and pluggable 5G technology can provide greater reliability and resiliency. There is also support for full-stack security, including application-aware firewall, IPS, URL filtering, AMP, and Threat Grid.

SD-WAN has evolved beyond simply connecting users at the campus to applications in the datacenter. Network connectivity is the lifeblood of any enterprise today. The ability to connect users reliably and securely across multicloud, branch, datacenters, and a hybrid workforce has become a critical success factor for any organization.

Source: cisco.com

Tuesday, 8 February 2022

What DevSecOps Means for Your CI/CD Pipeline

The CI/CD (Continuous Integration/Continuous Deployment) pipeline is a major ingredient of the DevOps recipe. As a DevSecOps practitioner, you need to consider the security implications for this pipeline. In this article, we will examine key items to think about when it comes to DevSecOps and CI/CD.

The type of CI/CD pipeline you choose—whether it’s managed, open source, or a bespoke solution that you build in-house—will impact whether certain security features are available to you out of the box, or require focused attention to implement.

Let’s dive in

Secret management for your CI/CD pipeline

Your CI/CD pipeline has the keys to the kingdom: it can provision infrastructure and deploy workloads across your system. From a security perspective, the CI/CD pipeline should be the only way to perform these actions. To manage your infrastructure, the CI/CD pipeline needs the credentials to access cloud service APIs, databases, service accounts, and more—and these credentials need to be secure.


Managed or hosted CI/CD pipelines provide a secure way to store these secrets. If you build your CI/CD solution, then you’re in charge of ensuring secrets are stored securely. CI/CD secrets should be encrypted at rest and only decrypted in memory, when the CI/CD pipeline needs to use them.
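
A minimal sketch of the “encrypted at rest, decrypted only in memory” idea, using the third-party cryptography library; the file path and environment variable name are placeholders, not a specific CI/CD product’s convention.

```python
# Sketch: a CI/CD secret kept encrypted on disk and decrypted only in memory,
# right before use. The path and environment variable name are placeholders.
import os
from cryptography.fernet import Fernet

def load_secret(encrypted_path: str = "secrets/db_password.enc") -> str:
    key = os.environ["CI_SECRETS_KEY"]      # injected by the CI system, never committed
    with open(encrypted_path, "rb") as f:
        ciphertext = f.read()
    return Fernet(key.encode()).decrypt(ciphertext).decode()

db_password = load_secret()                 # plaintext exists only in process memory
```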

You should tightly lock down access to the configuration of your CI/CD pipeline. If every engineer can access these secrets, then the potential for leaks is huge. Avoid the temptation to let engineers debug and troubleshoot issues by using CI/CD credentials.

Some secrets (for example, access tokens) need to be refreshed periodically. CI/CD pipelines often use static secrets—which have much longer lifetimes, and so don’t need regular refreshing—to avoid the complexities of refreshing tokens.

Injecting secrets into workloads


Cloud workloads themselves also use secrets and credentials to access other resources and services that their functionality depends on. These secrets can be provided in several ways. If you deploy your system as packages using VM images or containers, then you can bake the secrets directly into the image, making them available in a file when the workload runs.

Another approach is to encrypt the secrets and store them in source control. Then, inject the decryption key into the workload, which can subsequently fetch, decrypt, and use the secrets.

Kubernetes allows for secrets that are managed outside of the workload image but exposed as an environment variable or a file. One benefit of secrets as files is that secret rotation doesn’t require re-deploying the workload.
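
A minimal sketch of consuming a secret mounted as a file, re-reading it at call time so a rotated secret is picked up without redeploying the workload; the mount path is an assumed example set by the Pod spec.

```python
# Sketch: read a Kubernetes secret mounted as a file on every use, so a rotated
# secret is picked up without redeploying. The mount path is an assumed example.
from pathlib import Path

SECRET_PATH = Path("/var/run/secrets/app/db-password")

def current_db_password() -> str:
    # Kubernetes refreshes the mounted file when the Secret object changes,
    # so reading at call time returns the latest value.
    return SECRET_PATH.read_text().strip()
```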

Infrastructure as code: a security perspective


Infrastructure as code is not only an operational best practice; it is also a security best practice. 

software systems = infrastructure + workloads

When ad hoc changes are made to infrastructure configurations, this drift can introduce security risks. When resources are provisioned without any auditing or governance, it becomes difficult to maintain proper security measures across all resources.

Manage your infrastructure just like you manage your code. Use declarative configurations (like those of Terraform, AWS CloudFormation, or Kubernetes CRDs). Review and audit every change.
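
One way to enforce “review and audit every change” in a pipeline is a drift gate. The hedged sketch below relies on Terraform’s -detailed-exitcode flag (exit 0 for no changes, 2 when changes are pending); how you wire the step into your CI is an assumption.

```python
# Sketch of a CI drift gate: fail the job if the live infrastructure no longer
# matches the declarative configuration. `terraform plan -detailed-exitcode`
# exits 0 when there are no changes and 2 when changes are pending.
import subprocess
import sys

result = subprocess.run(
    ["terraform", "plan", "-detailed-exitcode", "-input=false"],
    capture_output=True, text=True,
)
if result.returncode == 2:
    print("Drift or unreviewed changes detected:\n", result.stdout)
    sys.exit(1)
elif result.returncode != 0:
    print("terraform plan failed:\n", result.stderr)
    sys.exit(result.returncode)
print("Infrastructure matches the declared configuration.")
```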

Bring your own security tools


CI/CD pipelines are flexible. Generally speaking, they let you execute a sequence of steps and manage artifacts. The steps themselves are up to you. As a security engineer, you should take advantage of the security tools that already exist in your environment (especially in the cloud). For example, GitHub and GitLab both scan your commits for the presence of secrets or credentials. Some managed CI/CD solutions build in API scanning or application security scans. However, you may also prefer to add tools and checks into the mix.

You could also add static code analysis (like SonarQube) to ensure that code adheres to conventions and best practices. As another example, you may incorporate vulnerability scanning (like Trivy or Grype) into your CI/CD pipeline, checking container images or third-party dependencies for security flaws.
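
As an example of wiring such a scanner into a pipeline step, here is a hedged sketch that invokes Trivy and fails the build on high-severity findings; the image name is a placeholder, and the exact flags may vary by Trivy version.

```python
# Sketch of a CI step that scans a freshly built image with Trivy and blocks
# the pipeline on HIGH/CRITICAL findings. The image name is a placeholder.
import subprocess
import sys

IMAGE = "registry.example.com/payments-api:latest"

result = subprocess.run(
    ["trivy", "image", "--severity", "HIGH,CRITICAL", "--exit-code", "1", IMAGE]
)
sys.exit(result.returncode)   # a non-zero exit code stops the deployment stage
```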


Comprehensive detection and response


Application observability, monitoring, and alerting are fundamental DevOps Day 2 concerns. Although your CI/CD pipeline is not directly involved in these activities, you should use your CI/CD pipeline to deploy the security tools you use for these purposes. From the point of view of the CI/CD pipeline, these are just additional workloads to be deployed and configured.

Your CI/CD pipeline should include early detection of security issues that trigger on every change that affects workloads or infrastructure. Once changes are deployed, you need to run periodic checks and respond to events that happen post-deployment.

In case of faulty CI/CD, break glass


The CI/CD pipeline is a critical part of your system. If your CI/CD is broken or compromised, your application may continue to run, but you lose the ability to make safe changes. Large scale applications require constant updates and changes. If a security breach occurs, you need to be able to shut down and isolate parts of your application safely.

To do so, your CI/CD pipeline must be highly available and deployed securely. Whenever you need to update, rollback, or redeploy your application, you depend on your CI/CD pipeline.

What should you do if your CI/CD pipeline is broken? Prepare in advance for such a case, determining how your team and system will keep operating (at reduced capacity most likely) until you can fix your CI/CD pipeline. For complicated systems, you should have runbooks. Test how you will operate when the CI/CD is down or compromised.

Source: cisco.com

Sunday, 6 February 2022

Cisco Industrial Ethernet, speaking the language

In a previous blog, I detailed the robust hardware design of our Industrial Ethernet switches that enables them to withstand harsh environments. In this blog, I will focus on their software features – particularly the support of industrial communications protocols – further cementing the “purpose” in these purpose-built products.

Cisco’s IE (Industrial Ethernet) switches are designed to leverage as much of Cisco’s technology as possible, including hardware and software features. Customers expect our software features to behave consistently across product families, including the IE switching products. Cisco IE switches also run IOS or IOS XE. There are differences, though. One difference between Enterprise and Industrial switching is support for industrial protocols.

What’s a protocol? Protocols define the set of rules by which devices communicate with each other. The internet runs on a protocol referred to as IP. Industrial communications have been using protocols since before IP became as popular as it is today. Every industry seems to have its own set of protocols. Cisco IE switches support the vast majority of these protocols, enabling them to be part of any industrial networking solution.

Protocol support is one of many reasons that makes Cisco IE Switching popular

Our Industrial Ethernet (IE) switches are the global market leader in Industrial Ethernet Switching for several reasons:

1. Offer a portfolio of DIN rail and rack-mount industrialized switching products to fit multiple use cases

2. Have a high-quality, purpose-built, ruggedized hardware design for reliability in industrial deployments

3. Leverage Cisco’s network management and security technologies

4. Support protocols that enable industrial customers to easily incorporate Cisco’s networking products into their deployments and solutions

Why support industrial protocols?

In short, because these protocols are vital for the functionality of any modern industrial operation.

Cisco builds network devices to be deployed in a wide variety of industrial networks and solutions. No two industrial networks are alike. There’s a wide variety of requirements and use cases. And there is at least one protocol used in every industrial solution. The networking infrastructure must support all requirements, use cases, and protocols, no matter what they are.

We like to think we don’t have a technology bias. While focusing on our key competency, networking, we will build what customers want to deploy. This applies as well to support of industrial protocols. We are not promoting or supporting one industrial protocol over another. We are not bound to any one technology, solution set, or protocol.

End users, system integrators, and anyone putting together a solution want to use the tools and applications they know and trust and at the same time take advantage of the state-of-the-art networking technology. To enable the tools and applications used in industrial deployments, our IE switches support industrial protocols used to build solutions based on Ethernet networks. Failure to support an industrial protocol often eliminates a networking product as a viable option.

What does support for industrial protocols mean?

Just as industries and protocols vary, support for any one protocol means different things. Protocols differ widely, so support does too. If you must have a single definition, support for an industrial protocol can be equated to ‘speaking the language’. Our IE switches support the communication of industrial protocols, enabling end devices to communicate effectively and efficiently.

Figure 1: communication flow through the Cisco IE switch

PROFINET and Ethernet/IP CIP are two protocols commonly used by industrial automation and control systems (IACS). Our IE switches are certified compliant with these two protocols and include the software stacks for them. It’s the same software stack as in the IACS components. For PROFINET and Ethernet/IP, the Cisco IE switches really do speak the language. Applications using PROFINET or Ethernet/IP can discover and automatically set up Cisco IE switches as part of the solution, thus avoiding manual procedures.

For other protocols, support may mean recognition. GOOSE is a good example of such a protocol. Our IE switches do not need to support the GOOSE software stack. Protocols such as GOOSE use Layer 2 Ethernet or the Layer 3 Internet protocol. Users can build quality-of-service policies to prioritize the communication of these protocols. The Cisco IE switch can recognize and prioritize industrial protocols running on standard Ethernet or Internet protocol-based messages in the network, ensuring end-to-end quality of service. Regardless of the level of interaction and support for industrial protocols, Cisco IE switches provide fast, reliable, and secure transport.

What about safety protocols?

Especially safety protocols, such as PROFIsafe and CIP Safety.

Support for any protocol implies support for the safety portion of that protocol. Industrial automation (e.g., manufacturing) solutions prioritize support for safety protocols. If a protocol has a safety component, then our IE switches support the safety protocol. Most of the time this means recognizing the protocol, or the safety messages within the protocol, and building a quality-of-service policy to prioritize that communication end to end.

What about Cyber Vision?

Cisco Cyber Vision is an application that runs on Cisco IE switches and uses deep packet inspection to analyze all traffic passing through the switch and identify industrial protocols in use.

Cyber Vision does more than ‘speak the language’. Using its knowledge of industrial protocols, Cyber Vision can identify industrial assets and determine if the payload in these protocols is within operating bounds. It also provides security posture assessments of IACS components.

The application running on our IE switches reports a summarized version of its findings to the Cyber Vision center, where end users get a real-time visual representation of all the communications on their operational network.

The figure below is an example of how Cyber Vision enables users to visualize communication between devices. It recognizes which device is speaking which protocol amongst other things.

Figure 2: Cyber Vision visualizes device communication flows

Cyber Vision is a security tool to increase visibility into operational networks. You can’t secure what you can’t see.

Closing

Ultimately, it’s about giving you, the customer, what you want and what you need. You want the latest and greatest technology because you’re investing for the long term. You want quality, which is why you’ve chosen IE switches from Cisco. You want ease of use. You want to build systems and solutions with the tools you know, trust, and have already invested in.

Failure to provide any of the above means the customer must compromise. Nobody wants that. With Cisco IE Switching, you don’t have to.

Appendix

Brief Description of select industrial protocols (with examples)

If you’re new to industrial networking, you can find a brief overview of the main industrial protocols below.

Why so many protocols? Different industries have developed different protocols over the years to meet their needs. Most industrial protocols leverage the Internet Protocol (IP) for communication. But not always.

Table summarizing a few industrial protocols (not exhaustive)


Source: cisco.com

Thursday, 3 February 2022

What does neuroscience have to do with the Internet?

It’s time for a new approach to the Internet

The Internet has never been so dynamic in its more than 40 years of existence. The massive adoption of cloud has paved the way for scores of SaaS applications being hosted “on the Internet.” At the same time, organizations are making hybrid work a part of their strategy moving forward. As a result, employees working from home expect the same level of security and application experience as they have at the enterprise campus. In turn, IT organizations are routing much more of their corporate data across the Internet and extending into multiple clouds. And of course, connectivity continues to evolve, with traditional link types but also with the rollout of 5G and new satellite links.

In such a highly distributed and dynamic environment, it becomes extremely hard to keep up using only traditional, reactive approaches. But just as the complexity of our networking environments is increasing, so too can we take advantage of recent innovations in cloud, compute, and data aggregation capabilities to improve the Internet.

Reactive measures don’t go far enough

For its entire existence, as a networking industry we’ve pretty much applied the same reactive approach each time an Internet failure occurs. By reactive I mean that we wait until the Internet breaks (path failure) and then reroute traffic along an alternate path (using IGP, MPLS/IP Fast Reroute, etc.). This approach of protection and restoration relies heavily on fast detection of failure followed by rerouting traffic. While a reactive approach is effective and necessary, it’s far less than ideal. The problem is that our processes never learn from any of the previous failures. So, in effect, the same issues could repeat themselves over and over, requiring the same fixes. But consider the possibilities when we tap the power of AI/machine learning and statistical modeling and apply predictive analytics to the Internet to avoid incidents before they occur.


Enabling the Internet with learning capabilities


Surprisingly, we have never enabled the Internet with learning capabilities! A plethora of technologies have been designed and deployed, capable of fast reaction, adapting to changing conditions but without any learning capability (except for quick adaptations after detecting issues such as with TCP windowing, routing convergence, and route dampening to mention a few).

So, what would a learning Internet look like?

How many times have you heard people comparing the brain to a computer? And that AI engineers are trying to mimic the human brain? There is too much to cover in a blog but let me share some thoughts. First, the brain is a network of networks and in this way, the Internet shares similarities with it. [Sidenote: stay tuned for a white paper on that subject I will publish with a famous neuroscientist in 2022—Adeel Razi.] Of course, we know that part of what makes us human is cognition and consciousness along with our ability to learn.

Second, we learn as our brain builds a model of the world (there are multiple theories on the learning models, including some poorly understood capabilities such as one-shot learning), makes use of sensing (vision, audition, touch, etc.) to adapt and learn, and also performs higher-order planning functions in the prefrontal cortex (PFC). What if we enabled the Internet with learning “the same way” our brains learn? The Internet can use models (statistical/machine learning), sensing (telemetry), and self-healing (planning). By enabling the Internet with the ability to learn and predict, we can take preventative actions alongside traditional reactive measures for a more comprehensive approach.
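
To make “models, sensing, and planning” slightly more concrete, here is a deliberately simple, hypothetical sketch (nothing like the actual Cisco engine): forecast a path’s latency from recent telemetry samples and flag the path for proactive rerouting before it breaches an assumed SLA.

```python
# Deliberately simple illustration (not the actual predictive engine): forecast
# per-path latency with an exponentially weighted moving average over recent
# telemetry and flag paths for proactive rerouting before they breach an
# assumed 100 ms SLA.

def ewma_forecast(samples_ms, alpha=0.5):
    forecast = samples_ms[0]
    for sample in samples_ms[1:]:
        forecast = alpha * sample + (1 - alpha) * forecast
    return forecast

paths = {
    "path-A": [22, 24, 23, 25, 24, 26],   # stable
    "path-B": [30, 38, 55, 70, 88, 97],   # degrading trend
}

SLA_MS = 100
for name, samples in paths.items():
    predicted = ewma_forecast(samples)
    # Act early if the trend is heading toward the SLA ceiling.
    action = "reroute proactively" if predicted > 0.8 * SLA_MS else "keep"
    print(f"{name}: predicted {predicted:.1f} ms -> {action}")
```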

What a predictive Internet would look like


Science fiction? Not at all. Although being able to replicate the brain’s ability to predict is far from being possible with today’s AI technologies, enabling the Internet with the ability to learn is already here.

At Cisco we have been working on the Predictive Internet for over two years, starting with a deep analysis of millions of paths, looking for signals that could be used by an ML/AI engine to learn and predict. And no, there is no magic “algorithm” but rather a plethora of technologies for telemetry processing and model training to then learn and predict. Our predictive engine is now capable of predicting short- and long-term events, thus avoiding issues before they happen. There is no magic there, simply the ability to learn, applying more than a decade of ML/AI product developments that perform with very high accuracy, at scale.

Could a predictive engine predict all issues? Not at all... but the engine has been tuned to predict as many events as possible with extremely high accuracy. More soon...

Source: cisco.com

Tuesday, 1 February 2022

Application-centric Security Management for Nexus Dashboard Orchestrator (NDO)


Nexus Dashboard Orchestrator (NDO) users can achieve policy-driven Application-centric Security Management (ASM) with AlgoSec

AlgoSec ASM A32 is AlgoSec’s latest release to feature a major technology integration, built upon a well-established collaboration with Cisco, bringing this partnership to the front of the Cisco innovation cycle with support for Cisco Nexus Dashboard Orchestrator (NDO). NDO allows Cisco ACI – and legacy-style data center network management – to operate at scale in a global context, across data center and cloud regions. The AlgoSec solution with NDO brings the power of intelligent automation and software-defined security features for ACI, including planning, change management, and micro-segmentation, to a global scope. There are multiple use cases, enabling application-centric operation and micro-segmentation, and delivering integrated security operations workflows. AlgoSec now adds support for EPGs and Inter-Site Contracts with NDO, boosting its existing ACI integration.

Let’s Change the World by Intent

Since its 2014 introduction, Cisco ACI has changed the landscape of data center networking by introducing an intent-based approach, over earlier configuration-centric architecture models. This opened the way for accelerated movement by enterprise data centers to meet their requirements for internal cloud deployments, new DevOps and serverless application models, and the extension of these to public clouds for hybrid operation – all within a single networking technology that uses familiar switching elements. Two new, software-defined artifacts make this possible in ACI: End-Point Groups (EPG) and Contracts – individual rules that define characteristics and behavior for an allowed network connection.

ACI Is Great, NDO Is Global

That’s really where NDO comes into the picture. By now, we have an ACI-driven data center networking infrastructure, with management redundancy for the availability of applications and for preserving their intent characteristics. Using an infrastructure built on EPGs and contracts, we can reach from the mobile device and desktop to the data center and the cloud. This means our next barrier is the sharing of intent-based objects and management operations beyond the confines of a single data center. We want to do this without clustering schemes that depend on the availability of individual controllers and run into other limits for availability and oversight.

Instead of labor-intensive and error-prone duplication of data center networks and security in different regions, and for different zones of cloud operation, NDO introduces “stretched” EPGs and inter-site contracts for application-centric, intent-based, secure traffic that is agnostic to global topologies – wherever your users and applications need to be.

With NDO capability added to the formidable shared platform of AlgoSec and Cisco ACI, region-wide and global policy operations can be executed with confidence and intelligent automation. AlgoSec makes it possible to plan for operations across the Cisco NDO scope of connected fabrics in an application-centric way and enables unlocking the ACI super-powers for micro-segmentation. This enables a shared model between networking and security teams for zero trust and defense in depth, with accelerated, global-scope, secure application changes at the speed of business demand — within minutes, rather than days or weeks.

Key Use Cases

Change management — For security policy change management this means that workloads may be securely re-located from on-premises to public cloud, under a single and uniform network model and change-management framework — ensuring consistency across multiple clouds and hybrid environments.

Visibility — With an NDO-enabled ACI networking infrastructure and AlgoSec’s ASM, all connectivity can be visualized at multiple levels of detail, across an entire multi-vendor, multi-cloud network. This means that individual security risks can be directly correlated to the assets that are impacted, together with a full understanding of the impact of security controls on an application’s availability.

Risk and Compliance — Across all the NDO-connected fabrics, it’s possible to identify risk on-premises and throughout the connected ACI cloud networks, including additional cloud-provider security controls. The AlgoSec solution makes this a self-documenting system for NDO, with detailed reporting and an audit trail of network security changes, related to the original business and application requests. This means that you can generate automated compliance reports supporting a wide range of global regulations, as well as your own, self-tailored policies.

The Road Ahead

Cisco NDO is a major technology innovation, and AlgoSec and Cisco are delighted and enthusiastic about our early-adoption customers. Based on early reports with our Cisco partners, needs will arise for more automation, which would include a “zero-touch” push for policy changes – committing EPG and Inter-Site Contract changes to the orchestrator, as we currently do for ACI and APIC. Feedback will also shape the need for automation playbooks and workflows that are most useful in the NDO context, and that we can realize with a fully committable policy by the ASM Firewall Analyzer.

Source: cisco.com