Thursday, 14 May 2020

How do you gauge software quality before deployment?

Business leaders often question software development processes: are they effective, is release quality maintained across all products and features, and will customer deployments go smoothly? Even with data from multiple perspectives in hand, I hear teams struggling to respond in a meaningful way. In particular, it’s hard for them to be succinct without drowning in nitty-gritty details and dependencies. Is there a way we can objectively measure release quality and ensure an expected level of quality? Absolutely!

With more than two decades in the industry, I understand the software development life cycle thoroughly, its processes, metrics, and measurement. As a programmer, I have designed and developed numerous complex systems, led the software development strategy for CI/CD pipelines, modernized processes, and automated execution. I have answered the call for stellar customer journey analytics for varied software releases, allowing our business to grow to scale. Given my background, I would like to share my thoughts on our software development process, product and feature release quality, and strategies to prepare for successful product deployments. I look forward to sharing my opinions and collaboratively working with you on building customer confidence through high-quality software deployments.

But, before I begin, here are some terms the way I think of them:

◉ Product Software Release— Mix of new and enhanced features, internally or customer-found defect fixes, and may contain operational elements related to installations or upgrades.

◉ Software Release Quality— Elements like content classification, development and test milestones, quality of the code and test suites, and regressions or collateral to track release readiness prior to deployment.

◉ Release Content— Classified list of features and enhancements with effort estimations for development. For example, we may use T-shirt size classification for development efforts (including coding, unit testing, unit-test-found bug fixes, and unit test automation): Small (less than 4 weeks), Medium (4-8 weeks), Large (8-12 weeks), Extra-Large (12-16 weeks). Feature testing can be categorized similarly.

◉ Release Quality and Health— Criteria for pre-customer deployment quality, with emphasis on code and feature development processes, corresponding tests, and overall release readiness.
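To make the T-shirt classification concrete, here is a minimal sketch in Python; the thresholds mirror the week ranges above, and the function name is my own:

```python
def tshirt_size(effort_weeks: float) -> str:
    """Classify development effort (coding, unit testing, bug fixes,
    and unit test automation) into a T-shirt size."""
    if effort_weeks < 4:
        return "Small"
    elif effort_weeks <= 8:
        return "Medium"
    elif effort_weeks <= 12:
        return "Large"
    elif effort_weeks <= 16:
        return "Extra-Large"
    raise ValueError("Effort exceeds a single release cycle; split the feature")

print(tshirt_size(3))   # Small
print(tshirt_size(10))  # Large
```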

Through this lens, let’s view the journey of our Polaris release. Before we do, let me emphasize that quality can never be an afterthought; it has to be integral to the entire process from the very beginning. Every aspect of software development and release logistics requires you to adopt a quality-conscious culture. I believe there are four distinct phases, or checkpoints, to achieve this goal:

◉ Release content, execution planning and approvals— During this phase we must get our act straight. Good planning will yield great results. Preempting issues and executing on a mitigation plan is critical. Adopt laser-sharp focus on planning for features that will be developed and tested. To be effective, we must allocate a 70:20:10 ratio for complex and large features. Seventy percent of the challenging features will have to be developed first and tested early in the release cycle, twenty percent of them will be addressed in the next cycle, and ten percent at the end of the cycle. Small, medium and test-only features should be distributed throughout the development cycle depending on resource availability. In this way, the majority of the completed code can be tested early in the process and in parallel! This will help us drive shift left best practices and make them integral to the culture of our organization.
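The 70:20:10 allocation for complex and large features can be sketched as a simple split; a toy illustration under the ratio described above, not a planning tool:

```python
def schedule_complex_features(features):
    """Split complex/large features across the release cycle using the
    70:20:10 ratio: 70% developed and tested early, 20% mid-cycle,
    10% at the end. Returns (early, mid, late) lists."""
    n = len(features)
    early_cut = round(n * 0.7)
    mid_cut = early_cut + round(n * 0.2)
    return features[:early_cut], features[early_cut:mid_cut], features[mid_cut:]

early, mid, late = schedule_complex_features([f"feature-{i}" for i in range(10)])
print(len(early), len(mid), len(late))  # 7 2 1
```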

◉ Phase containment, schedules, and quality tracking— This phase represents the core of execution. We need to build a framework for success to guarantee quality. The key is to develop fast, tackle the complex stuff early, and allow ample time for soak testing. Phase containment is essential for success. During this phase, focus on development and design issues, automation, code coverage, code review, static analysis, code complexity, and code-churn data analytics to help build quality. Build the metrics and measurement around these elements and adhere to development principles. If any features do not meet the schedule or quality checkpoints, we must be prepared to defer them and remove them from the release train. The quality metrics should include the number of features that have met their development schedule, undergone feature/functional tests with 100% execution, and can claim a 95% pass rate! If we follow an agile development model, each development and validation task must be tracked per sprint. We must document the unit-test-found defects at the end of the sprint cycle, especially if they move from one sprint cycle to the next. Daily defect tracking and weekly reviews with executives will bring the required attention and visibility as well.
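The schedule and quality checkpoints described above (100% test execution, a 95% pass rate) could be evaluated per feature with something like the following sketch; the field names are illustrative:

```python
def meets_quality_gate(feature):
    """Check a feature against the release-train checkpoints: development
    on schedule, 100% functional-test execution, and a 95% pass rate."""
    executed = feature["tests_run"] == feature["tests_planned"]
    pass_rate = feature["tests_passed"] / feature["tests_run"]
    return feature["on_schedule"] and executed and pass_rate >= 0.95

f = {"on_schedule": True, "tests_planned": 200, "tests_run": 200, "tests_passed": 192}
print(meets_quality_gate(f))  # True (96% pass rate)
```

A feature failing this gate is a candidate for deferral from the release train.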

The following image illustrates one such scenario:

◉ Testing and defect management convergence— This phase can make or break a release. With development complete and a baseline level of quality achieved (though not quite release-ready), it entails more rigorous testing. Tests such as system integration testing, solution and use-case-driven testing, and performance and scale testing provide greater insight into the quality of the release. Use time effectively in this phase to track test completion percentages, pass-rate percentages, and your metrics surrounding defect management. Defect escape analysis will highlight development gaps while making for a good learning opportunity. Another invaluable metric is the trend of incoming defects: if things are working as they should, you will notice a steady decline each week. The incoming defect rate must decline toward the end of the testing cycle!
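The declining incoming-defect trend can be checked mechanically; a minimal sketch over weekly counts:

```python
def defect_trend_declining(weekly_incoming):
    """Return True if the incoming defect count declines (or holds steady)
    week over week -- the trend we want to see toward the end of the
    testing cycle before declaring the release ready."""
    return all(later <= earlier
               for earlier, later in zip(weekly_incoming, weekly_incoming[1:]))

print(defect_trend_declining([42, 30, 21, 12, 5]))  # True
print(defect_trend_declining([42, 30, 35, 12, 5]))  # False: defects spiked mid-cycle
```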

◉ Ready, set, go— In this phase, embark on a stringent review of readiness indicators, carry out checks and balances, address operational issues, finalize documentation content, and prepare for final testing. Testing may entail alpha, canary, early field trials in a real customer environment, or tests in a production environment to uncover residual issues. This phase will provide an accurate insight into the quality of the release.

As you can see, there are many ways we can equip ourselves to measure the quality and health of a release. Building a process around developing quality code, and discipline in managing phase containment, are the key ingredients. It is important to build a culture that tracks the progress of shift-left initiatives and focuses on code quality and schedule discipline. Best of all, data-driven analytics and metrics will empower us to answer executives' questions with confidence!

Wednesday, 13 May 2020

Real Users Speak: Cisco and the Elements of Robust Email Security

What does it take to implement robust email security? According to users of Cisco Email Security (ESA) on IT Central Station, it takes a combination of distinctive elements in an email security solution to attain this goal. These include sophisticated filtering, built-in intelligence and policy definition and enforcement capabilities. The system should also be easy to use.

Real users share their unbiased opinions on what makes Cisco Email Security the #1 ranked product in IT Central Station’s Messaging Security category.

Filtering out spam and phishing messages


Companies worry about employees clicking on malicious links in phishing emails or getting deluged with bogus spam messages. Indeed, the attack chain for a great many data breaches and ransomware attacks starts with an email to an unsuspecting person.  Effective email filtering is thus a compelling feature for an email security solution. IT Central Station members expressed this opinion.

For example, Michael L., a Network Security Engineer at Konga Online Shopping Ltd., a retailer with over 1,000 employees, acknowledged that Cisco Email Security Appliance (ESA) “helped with mail filtering and load balancing between Exchange servers.” In particular, he singled out Cisco Email Security because, as he said, “Cisco Email Security enabled us to blockade domains that send these emails. Cisco Email Security gave us fantastic service. The filtering is something I found very valuable.”

“Initially, the most valuable feature for us was the SenderBase Reputation,” said a Regional ICT Security Officer at an energy/utilities company with over 10,000 employees. He added that it “reduced the number of emails that were even considered by the system by a huge number, before we ended up processing them to get through the spam, the marketing, and the virus-attached emails. Since then, customized filtering has been very effective and useful for us.”

A Security Engineer at an energy/utilities company similarly remarked, “We have seen ROI. Only 70 percent of phishing and bad emails are getting through. There are very few solutions that boast this percentage of filtering. This level of filtering helps our company. The most valuable features are Advanced Malware Protection, URL filtering, and of course Reputation Filtering.”

Built-in intelligence


The volume and variety of email translates into a need for security that’s augmented by machine intelligence. Cisco ESA users spoke to this ability, with John A., a Network Security Engineer at a small tech services company, noting that, “Cisco was scanning our emails with their own intelligence. I liked that.” An Information Security Analyst at a healthcare company also commented on Cisco ESA’s Intelligent Multi-Scan (IMS) engine, saying “it does a good job, right out-of-the-box, of blocking the vast majority of things that should be blocked.”

For the energy company Regional ICT Security Officer, built-in email security intelligence came in the form of Talos. As he put it, “Instead of just specifically stopping known spam sources and using that to stop virus-infected emails, the Talos solution which they’re now providing has a lot of attraction because it helps to prevent phishing emails.”

Policy enforcement


IT Central Station members addressed the issue of security policy definition and enforcement as an element of strong email security. As Keith K., a Senior Email Engineer at a legal firm with over 1,000 employees explained, “We use it [Cisco ESA] for different policies or as another scanning engine, e.g., on the desktop or for data coming through another email gateway.” He added, “The most valuable feature is the policies or rules that you can put on it. This definitely helps with routing specific things to different destinations within our organization, or even potentially blocking when something is coming in and out.”

Setu S., a System Administrator at a financial services firm with over 1,000 employees echoed this sentiment, sharing that his team uses Cisco Email Security for “customized policies based on our security measures using this tool to scan the emails in our inboxes.” He noted, “We also check all incoming emails. Because we can customize policies with it, we have good documentation.”

Ease of use


Email security is challenging enough that security professionals prefer solutions that are easy to use. In this context, Mir A., a Network Engineer at a hospitality company with over 10,000 employees, observed that Cisco ESA “was really easy to implement.” As he said, “Even a newcomer joining the company could easily implement it.” John A found that “anybody could use it. You don’t have to be familiar with IT to be able to handle navigating it. The deployment was quite easy.”

This user also noted, “GUI is self-explanatory: If you want to block emails, you want to erase emails, you do the IP address configuration and what your DNS is.” The healthcare Information Security Analyst said, “Black-listing and white-listing are highly intuitive and easy to do.”

Tuesday, 12 May 2020

Running Cisco Catalyst C9800-CL Wireless Controller in Google Cloud Platform

When I heard that the Cisco Catalyst 9800 Wireless Controller for Cloud was supported as an IaaS solution on Google Cloud with Cisco IOS-XE version 16.12.1, I wanted to give it a try.

Built from the ground-up for Intent-based networking and Cisco DNA, Cisco Catalyst 9800 Series Wireless Controllers are Cisco IOS® XE based, integrate the RF excellence of Cisco Aironet® access points, and are built on the three pillars of network excellence: Always on, Secure and Deployed anywhere (on-premises, private or public Cloud).

I had a Cisco Catalyst 9300 Series switch and a Wi-Fi 6 Cisco Catalyst 9117 access point with me. I had internet connectivity of course, and that should be enough to reach the Cloud, right?

I was basically about to build the best-in-class wireless test possible, with the best Wi-Fi 6 Access Point in the market (AP9117AX), connected to the best LAN switching technology (Catalyst 9300 Series switch with mGig/802.3bz and UPOE/802.3bt), controlled by the best Wireless LAN Controller (C9800-CL) running the best Operating System (Cisco IOS-XE), and deployed in what I consider the best public Cloud platform (GCP).

Let me show you how simple and great it was!

(NOTE: Please refer to the Deployment Guide and Release Notes for further details. This blog is not intended to be a guide, but rather to share my experience, show how to quickly test it, and highlight some of the aspects of the process that excited me the most.)

The only supported deployment mode is with a managed VPN between your premises and Google Cloud (as shown in the previous picture). For simplification and testing purposes, I just used the public IP address of the cloud instance to build my setup.

Virtual Private Cloud or VPC

GCP creates a ‘default’ VPC that we could have used for simplicity, but I preferred (and it is recommended) to create a specific VPC (mywlan-network1) for this lab under this specific project (C9800-iosxe-gcp).

I also selected the region closest to me (europe-west1) and a specific IP address range, 192.168.50.0/24, from which GCP automatically selects an internal IP address for my Wireless LAN Controller (WLC) and a default gateway in that subnet (custom-subnet-eu-w1).
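As a quick sanity check of the addressing, Python's ipaddress module can confirm that an internal address falls inside the chosen range (the WLC address shown is a hypothetical example of what GCP might assign):

```python
import ipaddress

# The subnet range chosen for custom-subnet-eu-w1 in this lab
subnet = ipaddress.ip_network("192.168.50.0/24")
wlc_ip = ipaddress.ip_address("192.168.50.2")  # hypothetical WLC internal address

print(wlc_ip in subnet)      # True: the address belongs to the subnet
print(subnet.num_addresses)  # 256 addresses in the /24
```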

A very interesting feature in GCP is that network routing is built in; you don’t have to provision or manage a router, because GCP does it for you. As you can see, for mywlan-network1, a route for 192.168.50.0/24 is configured (named default-route-c091ac9a979376ce for internal DNS resolution) along with a default route to the internet (0.0.0.0/0). Every region in GCP has a default subnet assigned, making this whole process even more simple and automated if needed.

Firewall Rules 

Another thing you don’t have to provision and that GCP manages for you: a firewall. VPCs give you a globally distributed firewall you can control to restrict both incoming and outgoing traffic to instances. By default, all ingress (incoming) traffic is blocked. To connect to the C9800-CL instance once it is up and running, we need to allow SSH and HTTP/HTTPS communication by adding ingress firewall rules. We will also allow ICMP, which is very useful for quick IP reachability checks.

We will also allow CAPWAP traffic (UDP 5246-5247) so the AP can join the WLC.
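Taken together, the ingress rules can be sketched as a default-deny match function; this illustrates the rule logic only and is not GCP's API:

```python
# GCP VPC firewalls are default-deny for ingress; these entries mirror
# the rules added above. (protocol, port) pairs; port is None for ICMP.
ALLOWED_INGRESS = {
    ("tcp", 22),      # SSH
    ("tcp", 80),      # HTTP
    ("tcp", 443),     # HTTPS
    ("icmp", None),   # quick reachability checks
    ("udp", 5246),    # CAPWAP control
    ("udp", 5247),    # CAPWAP data
}

def ingress_allowed(protocol: str, port=None) -> bool:
    """Return True only for traffic explicitly permitted by a rule."""
    return (protocol, port) in ALLOWED_INGRESS

print(ingress_allowed("udp", 5246))  # True: the AP can join the WLC
print(ingress_allowed("tcp", 23))    # False: telnet stays blocked by default
```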

You can define firewall rules in terms of metadata tags on Compute Engine instances, which is really convenient.

As you can see, these are ACLs based on targets with specific tags, meaning that I don’t need to base my access lists on complex IP addresses but rather on tags that identify both sources and destinations. In this case, I permit http, https, icmp, or CAPWAP to all targets or just to specific targets, very similar to what we do with Cisco TrustSec and SGTs. In my case, the C9800-CL belongs to all those tags, so I’m basically allowing all the protocols needed.

Launching the Cisco Catalyst C9800-CL image on Google Cloud 

Launching a Cisco Catalyst 9800 occurs directly from the Google Cloud Platform Marketplace. Cisco Catalyst 9800 will be deployed on a Google Compute Engine (GCE) Instance (VM).

You then prepare for the deployment through a wizard that will ask you for parameters like hostname, credentials, zone to deploy, scale of your instance, networking parameters, etc. Really easy and intuitive.

And GCP will magically deploy the system for you!

(The external IP is ephemeral, so it's not a big deal; it will only be used during this test while the instance is running.)

MJIMENA-M-M0KF:~ mjimena$ ping 35.189.203.140

PING 35.189.203.140 (35.189.203.140): 56 data bytes

64 bytes from 35.189.203.140: icmp_seq=0 ttl=247 time=33.608 ms

64 bytes from 35.189.203.140: icmp_seq=1 ttl=247 time=31.220 ms

I have IP reachability. Let me try to open a web browser and… I’m in!

After some initial GUI setup parameters, my C9800-CL is ready. With a WLAN (SSID) configured but with no Access Point registered yet.

I access C9800 CLI with SSH (remember the firewall rule we configured in GCP).

MJIMENA-M-M0KF:~ mjimena$ ssh admin@35.189.203.140

The authenticity of host '35.189.203.140 (35.189.203.140)' can't be established.

RSA key fingerprint is SHA256:HI10434rnGdfQyHjxBA92ywdkib6nBYG6jykNRTddXg.

Are you sure you want to continue connecting (yes/no)? yes

Warning: Permanently added '35.189.203.140' (RSA) to the list of known hosts.

Password:

c9800-cl#

Let’s double check the version we are running:

c9800-cl#show ver | sec Version

Cisco IOS XE Software, Version 16.12.01

Cisco IOS Software [Gibraltar], C9800-CL Software (C9800-CL-K9_IOSXE), Version 16.12.1, RELEASE SOFTWARE (fc4)

Any neighbor there in the Cloud?

c9800-cl#show cdp neighbors

Capability Codes: R - Router, T - Trans Bridge, B - Source Route Bridge
                  S - Switch, H - Host, I - IGMP, r - Repeater, P - Phone,
                  D - Remote, C - CVTA, M - Two-port Mac Relay

Device ID        Local Intrfce     Holdtme    Capability  Platform  Port ID

Total cdp entries displayed : 0

Ok, makes sense…

The C9800 has a public IP address associated with its internal IP address. We need to configure the controller to reply to AP join requests with the public IP, not the private one. For that, enter the following global configuration command, all on one line:

c9800-cl(config)#wireless management interface GigabitEthernet1 nat public-ip 35.189.203.140

We can verify the configuration:

c9800-cl#sh run | i public

wireless management interface GigabitEthernet1 nat public-ip 35.189.203.140

And indeed, no AP yet.

c9800-cl#show ap summary

Number of APs: 0

c9800-cl#

Let’s plug that Cisco AP9117AX!

I connect a brand new Cisco AP9117AX to an mGig/UPOE port on a Cisco Catalyst 9300 switch at 5 Gbps over copper.

I connect to console and type the following command to prime the AP to the GCP C9800-cl instance:

AP0CD0.F894.16BC#capwap ap primary-base c9800-cl 35.189.203.140

This CLI command resets the connection with the WLC to accelerate the join process.

AP0CD0.F894.16BC#capwap ap restart

I check reachability between AP at home and my WLC in GCP.

AP0CD0.F894.16BC#ping 35.189.203.140

Sending 5, 100-byte ICMP Echos to 35.189.203.140, timeout is 2 seconds

!!!!!

The Cisco AP9117AX is joining and downloading the IOS-XE image.

My GUI is now showing the AP downloading the right image before joining.

My setup is done!

Monday, 11 May 2020

Cisco goes SONiC on Cisco 8000

Since its introduction by Microsoft and OCP in 2016, SONiC has gained momentum as the open-source operating system of choice for cloud-scale data center networks. The Switch Abstraction Interface (SAI) has been instrumental in adapting SONiC to a variety of underlying hardware. SAI provides a consistent interface to the ASIC, allowing networking vendors to rapidly enable SONiC on their platforms while innovating in the areas of silicon and optics via vendor-specific extensions. This enables cloud-scale providers to have a common operational model while benefiting from innovations in the hardware. The following figure illustrates a high-level overview of the platform components that map SONiC to a switch.

SONiC has traditionally been supported on single-NPU systems, with one instance each of the BGP, SwSS (Switch State Service), and Syncd containers. It has recently been extended to support multiple NPUs in a system. This is accomplished by running multiple instances of BGP, Syncd, and other relevant containers, one per NPU instance.

SONiC on Cisco 8000


As part of Cisco’s continued collaboration with the OCP community, and following up on support for SONiC on Nexus platforms, Cisco now supports SONiC on fixed and modular Cisco 8000 Series routers. While the support for SONiC on fixed, single-NPU systems is an incremental step, bringing another Cisco ASIC and platform under SONiC/SAI, support for SONiC on a modular platform marks a significant milestone in adapting modular routing systems to support SONiC in a fully distributed way. In the rest of the blog, we will look at the details of the chassis-based router and how SONiC is implemented on Cisco 8000 modular systems.

Cisco 8000 modular system architecture


Let’s start by looking deeper into a Cisco 8000 modular system. A modular system has the following key components: 1) one or two Route Processors (RPs), 2) multiple line cards (LCs), 3) multiple fabric cards (FCs), and 4) chassis commons such as fans, power supply units, etc. The following figure illustrates the RP, LC, and FC components, along with their connectivity.

The NPUs on the line cards and the fabric cards within a chassis are connected in a CLOS network. The NPUs on each line card are managed by the CPU on the corresponding line card and the NPUs on all the fabric cards are managed by the CPU(s) on the RP cards. The line card and fabric NPUs are connected over the backplane. All the nodes (LC, RP) are connected to the external world via an Ethernet switch network within the chassis.

This structure logically represents a single layer leaf-spine network where each of the leaf and spine nodes are a multi-NPU system.

From a forwarding standpoint, the Cisco 8000 modular system works as a single forwarding element with the following functions split among the line card and fabric NPUs:

◉ Ingress line card NPU performs functions such as tunnel termination, packet forwarding lookups, multi-stage ECMP load balancing, and ingress features such as QoS, ACL, inbound mirroring, and so on. Packets are then forwarded towards the appropriate egress line card NPU using a virtual output queue (VOQ) that represents the outgoing interface, by encapsulating the packet in a fabric header and an NPU header. Packets are sprayed across the links towards the fabric to achieve a packet-by-packet load balancing.

◉ Fabric NPU processes the incoming fabric header and sends the packet over one of the links towards the egress line card NPU.

◉ Egress LC NPU processes the incoming packet from the fabric using the information in the NPU header to perform the egress functions on the packet such as packet encapsulation, priority markings, and egress features such as QoS, ACL and so on.

In a single-NPU fixed system, the ingress and egress functions described above are all performed in the same NPU, as the fabric NPU functionality doesn’t exist.
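The packet-by-packet spraying toward the fabric described above can be modeled as a simple round-robin over the available fabric links; a toy sketch, not the Silicon One implementation:

```python
import itertools

def spray_packets(packets, fabric_links):
    """Spray packets across fabric links packet by packet, approximating
    the even load balancing an ingress line-card NPU achieves toward
    the fabric. Returns {link: [packets sent on that link]}."""
    assignment = {link: [] for link in fabric_links}
    for packet, link in zip(packets, itertools.cycle(fabric_links)):
        assignment[link].append(packet)
    return assignment

links = ["fab0", "fab1", "fab2"]
out = spray_packets(list(range(9)), links)
print({link: len(pkts) for link, pkts in out.items()})  # {'fab0': 3, 'fab1': 3, 'fab2': 3}
```

Per-packet spraying yields near-perfect link utilization, which is exactly why standards-based monitoring of individual flows across the fabric is non-trivial in traditional cell-based designs.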

SONiC on Cisco 8000 modular systems


The internal CLOS enables the principles of leaf-spine SONiC design to be implemented in the Cisco 8000 modular system. The following figure shows a SONiC based leaf-spine network:

Each node in this leaf-spine network runs an independent instance of SONiC. The leaf and spine nodes are connected over standard Ethernet ports and support Ethernet/IP based forwarding within the network. Standard monitoring and troubleshooting techniques such as filters, mirroring, traps can also be employed in this network at leaf and spine layers. This is illustrated in the figure below.

Each line card runs an instance of SONiC on the line card CPU, managing the NPUs on that line card. One instance of SONiC runs on the RP CPU, managing all the NPUs on the fabric cards. The line card SONiC instances represent the leaf nodes and the RP SONiC instance represents the spine node in a leaf-spine topology.

The out-of-band Ethernet network within the chassis provides external connectivity to manage each of the SONiC instances.

Leaf-Spine Datapath Connectivity

This is where the key difference between a leaf-spine network and the leaf-spine connectivity within a chassis comes up. As discussed above, a leaf-spine network enables Ethernet/IP based packet forwarding between them. This allows for standard monitoring and troubleshooting tools to be used on the spine node as well as on the leaf-spine links.

Traditional forwarding within a chassis is based on fabric implementation using proprietary headers between line cards and fabric NPUs. In cell-based fabrics, the packet is further split into fixed or variable sized cells and sprayed across the available fabric links. While this model allows the most optimal link utilization, it doesn’t allow standards-based monitoring and troubleshooting tools to be used to manage the intra-chassis traffic.

Cisco Silicon One ASIC has a unique ability to enable Ethernet/IP based packet forwarding within the chassis as it can be configured in either network mode or fabric mode. As a result, we use the same ASIC on the line cards and fabric cards by configuring the interfaces between the line card and fabric in fabric mode while the network-facing interfaces on the line card are configured in network mode.

This ASIC capability is used to implement the leaf-spine topology within Cisco 8000 chassis by configuring the line card – fabric links in network mode, as illustrated below.

The SONiC instances on the line cards exchange routes using per-NPU BGP instances that peer with each other. SONiC on each line card thus runs one instance of BGP per NPU on the line card, which is typically a small number (low single digits). The RP SONiC instance, on the other hand, manages a larger number of fabric NPUs. To optimize the design, the fabric NPUs are instead configured in a point-to-point cross-connect mode, providing virtual pipe connectivity between every pair of line card NPUs. This cross-connect can be implemented using VLANs or other similar techniques.
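The point-to-point cross-connect could, as noted, be implemented with VLANs; here is a sketch of one possible allocation, one VLAN per line-card NPU pair (the numbering scheme is my own assumption):

```python
from itertools import combinations

def vlan_cross_connect(npu_ids, base_vlan=100):
    """Assign one VLAN per unordered pair of line-card NPUs, giving each
    pair a virtual pipe across the fabric. Sequential numbering from
    base_vlan is an illustrative choice, not a Cisco convention."""
    return {pair: base_vlan + i
            for i, pair in enumerate(combinations(sorted(npu_ids), 2))}

vlans = vlan_cross_connect(["lc0-npu0", "lc0-npu1", "lc1-npu0"])
print(vlans)  # three NPU pairs, three VLANs
```

Note the pair count grows quadratically with the number of line-card NPUs, which stays manageable given the low per-card NPU counts mentioned above.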

Packets across the fabric are still exchanged as Ethernet frames enabling monitoring tools such as mirroring, sFlow, etc., to be enabled on the fabric NPUs thus providing end-to-end visibility of network traffic, including the intra-chassis flows.

For the use cases that need fabric-based packet forwarding within the chassis, the line card – fabric links can be reconfigured to operate in fabric mode, allowing the same hardware to cater to a variety of use cases.

Sunday, 10 May 2020

The four-step journey to securing the industrial network

Just as the digitization and increasing connectivity of business processes has enlarged the attack surface of the IT environment, so too has the digitization and increasing connectivity of industrial processes broadened the attack surface for industrial control networks. Though they share this security risk profile, the operational technology (OT) environment is very different from that of IT. This post looks at the key differences and provides a four-step approach to securing the industrial network.

In industries like utilities, manufacturing, and transportation, the operations side of the business is revenue generating. As a result, uptime is critical. While uptime is important in IT, interdependencies in the OT environment make it challenging to maintain uptime while addressing security threats. For example, you can’t simply isolate an endpoint that’s sending anomalous traffic. Because of the interdependencies of that endpoint, isolating it can have a cascading effect that brings a critical business process to a grinding halt. Or, worse, human lives may be put at risk. It’s important to understand the context of security events so that they can be addressed while maintaining uptime.

With uptime requirements in mind, securing the industrial network can feel like an insurmountable challenge. Many industrial organizations don’t have visibility into all of the devices that are on their OT networks, let alone the dependencies among them. Devices have been added over time, often by third-party contractors, and an asset inventory is either non-existent or grossly outdated. Bottom line: organizations lack visibility into the operational technology environment.

To help industrial organizations address these challenges and effectively secure the OT environment, we’ve put together a four-step journey to securing the industrial network. It’s important to note that while we call it a journey, there is no defined beginning or end. It’s an iterative process that requires continual adjustments. The most important thing is to start wherever you happen to be today.

There are many places from which to begin, and what makes a logical first step for one organization will not necessarily be the same for another. One approach is to start with gaining visibility through asset discovery. By analyzing network traffic, deep packet inspection (DPI) can identify the industrial assets connected to your network. With this visibility, you can make an informed decision on the best way to segment the network to limit the spread of an attack.

In addition to identifying assets, DPI identifies which assets are communicating, with whom or what they are communicating, and what they are communicating. With this baseline established, you can detect anomalous behavior and potential threats to process integrity. This information can then be fed into a unified security operations center (SOC), giving the security team complete visibility.
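As a toy illustration of this baselining idea (the asset names, protocols, and flow format below are invented for illustration, not DPI output from any real product):

```python
# Hypothetical sketch: flag OT traffic that deviates from a DPI-derived baseline.
# Asset names and protocols are illustrative, not from any real capture.

def build_baseline(flows):
    """Record every (source, destination, protocol) triple seen while learning."""
    return {(f["src"], f["dst"], f["proto"]) for f in flows}

def find_anomalies(baseline, flows):
    """Return flows whose communication pattern was never seen in the baseline."""
    return [f for f in flows if (f["src"], f["dst"], f["proto"]) not in baseline]

learning = [
    {"src": "plc-1", "dst": "hmi-1", "proto": "modbus"},
    {"src": "hmi-1", "dst": "historian", "proto": "opc-ua"},
]
baseline = build_baseline(learning)

observed = [
    {"src": "plc-1", "dst": "hmi-1", "proto": "modbus"},        # known-good flow
    {"src": "plc-1", "dst": "internet-host", "proto": "http"},  # never seen before
]
alerts = find_anomalies(baseline, observed)
print(alerts)  # only the unexpected http flow is flagged
```

Real DPI engines work on packet payloads and protocol state rather than flow tuples, but the principle is the same: anomalies are defined relative to a learned baseline, and each alert carries the context needed to act without breaking process uptime.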

How you deploy DPI is important. Embedding a DPI-enabled sensor on switches saves hardware costs and physical space, which can be at a premium, depending on the industry. DPI-enabled sensors allow you to inspect traffic without encountering deployment, scalability, bandwidth, or maintenance hurdles. Because switches see all network traffic, embedded sensors can provide the visibility you need to segment the network and detect threats early on. The solution can also integrate with the IT SOC while providing analytical insights into every component of the industrial control system. With DPI-enabled network switches, industrial organizations can more easily move through the four-step journey to securing the industrial network.

Saturday, 9 May 2020

A Mindset Shift for Digitizing Software Development and Delivery


At Cisco, my teams—which are part of the Intent-Based Networking Group—focus on the core network layers that are used by enterprise, data center, and service provider network engineering. We develop tools and processes that digitize and automate the Cisco Software Development Lifecycle (CSDL). We have been travelling the digitization journey for over two years now and are seeing significant benefits. This post will explain why we are working diligently and creatively to digitize software development across the spectrum of Cisco solutions, some of our innovations, and where we are headed next.

Why Cisco Customers Should Care About Digitization of Software Development and Delivery


Cisco customers should consider what digitization of software development means to them. Because many of our customers are also software developers—whether they are creating applications to sell or for internal digital transformation projects—the same principles we are applying to Cisco development can be of use to a broader audience.

Digitization of development improves total customer experience by moving beyond just the technical aspects of development and thinking in terms of complete solutions that include accurate and timely documentation, implementation examples, and analytics that recommend which release is best for a particular organization’s network. Digitization of development:

◉ Leads to improvements in the quality, serviceability, and security of solutions in the field.

◉ Delivers predictive analytics to assist customers to understand, for example, the impact an upgrade, security patches, or new functionality will have on existing systems, with increased assurance about how the network will perform after changes are applied. 

◉ Automates the documentation of each handoff along the development lifecycle to improve traceability from concept and design to coding and testing.

These capabilities will be increasingly important as we continue to focus on developing solutions for software subscriptions, which shift the emphasis from long cycles creating feature-filled releases to shorter development cycles delivering new functionality and customer-requested innovations in accelerated timeframes.

Software Developers Thrive with Digital Development Workflows


For professionals who build software solutions, the digitization of software development focuses on improving productivity, consistency, and efficiency. It democratizes team-based development—that is, everyone is a developer: solution architects, designers, coders, and testers. Teams are configured to bring the appropriate expertise to every stage of solution development. Test developers, for example, should not only develop test plans and specific tests, but also provide functional specifications and code reviews, build test automation frameworks, and represent customer views for validating solutions at every stage of development. Case in point: when customer-specific use cases are incorporated early into the architecture and design phases, the functionality of the intended features is built into test suites as the code is being written.

A primary focus of digitization of development is creating new toolsets for measuring progress and eliminating friction points. Our home-grown Qualex (Quality Index) platform provides an automated method of measuring and interpreting quality metrics for digitized processes. The goal is to eliminate human bias by using data-driven techniques and self-learning mechanisms. In the past 2 years, Qualex has standardized most of our internal development practices and is saving the engineering organization a considerable amount of time and expense for software management.
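Qualex's internals aren't described here, but the general idea of a data-driven quality index can be sketched as a weighted combination of normalized metrics. The metric names and weights below are purely illustrative assumptions, not Qualex's actual model:

```python
# Hypothetical sketch of a weighted quality index; metric names and weights
# are invented for illustration and are not Qualex's actual formula.

def quality_index(metrics, weights):
    """Weighted average of metrics normalized to [0, 1]; higher is better."""
    total = sum(weights.values())
    return sum(metrics[name] * w for name, w in weights.items()) / total

metrics = {
    "test_pass_rate": 0.97,      # fraction of automated tests passing
    "code_coverage": 0.82,       # fraction of lines exercised by tests
    "defect_escape_rate": 0.95,  # 1 - (customer-found / total defects)
}
weights = {"test_pass_rate": 0.5, "code_coverage": 0.3, "defect_escape_rate": 0.2}

score = quality_index(metrics, weights)
print(round(score, 3))  # → 0.921
```

The point of such a standardized index is that every team is scored the same way, replacing subjective "release readiness" judgments with a number that can be tracked over time.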

Labs as a Service (LaaS) is another example of applying digitization to transform the development cycle while helping to efficiently manage CAPEX. Within Cisco, LaaS is a ready-to-use environment for sharing networking hardware, spinning up virtual routers, and providing on-demand testbed provisioning. Developers can quickly and cost-effectively design and set up hardware and software environments to simulate various customer use cases, including public and private cloud implementations.

Digitization Reduces Development Workflow Frictions


A major goal of the digitization of software development is to reduce the friction points during solution development. We are accomplishing this by applying AI and machine learning against extensive data lakes of code, documentation, customer requests, bug reports, and previous test cycle results. The resulting contextual analytics will be available via a dashboard at every stage of the development process, reducing the friction of multi-phase development processes. This will make it possible for every developer to have a scorecard that tracks technical debt, security holes, serviceability, and quality. The real-time feedback increases performance and augments skillsets, leading to greater developer satisfaction.


Workflow friction points inhibit both creativity and productivity. Using analytics to pinpoint aberrations in code as it is being developed reduces the back-and-forth cycles of isolating flaws and reproducing them for remediation. Imagine a developer writing new code for a solution that includes historical code. The developer is not initially familiar with the process or the tests that the inherited code went through. With contextual analytics presenting relevant historical data, the developer can quickly come up to speed and avoid previous mistakes in the coding process. We call this defect foreshadowing. The result is cleaner code produced in less time, reduced testing cycles, and better integration of new features with the existing code base.
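To make "defect foreshadowing" concrete, here is a deliberately simple sketch that flags new code matching patterns associated with past defects. The patterns and warning texts are invented for illustration and bear no relation to Cisco's actual analytics:

```python
import re

# Hypothetical sketch: warn when new code matches patterns that produced
# defects in the past. The patterns and notes below are illustrative only.
HISTORICAL_DEFECT_PATTERNS = [
    (re.compile(r"strcpy\s*\("), "unbounded copy caused a buffer overflow in a prior release"),
    (re.compile(r"==\s*NULL\s*\|\|"), "NULL-check ordering bug seen in earlier code reviews"),
]

def foreshadow(new_code):
    """Return (line_number, warning) pairs for lines matching past-defect patterns."""
    warnings = []
    for lineno, line in enumerate(new_code.splitlines(), start=1):
        for pattern, note in HISTORICAL_DEFECT_PATTERNS:
            if pattern.search(line):
                warnings.append((lineno, note))
    return warnings

snippet = 'strcpy(dst, src);\nif (ptr == NULL || ptr->len > 0) { }\n'
hits = foreshadow(snippet)
for lineno, note in hits:
    print(f"line {lineno}: {note}")
```

A production system would mine the defect patterns from data lakes of bug reports and fix commits rather than hand-writing them, but the feedback loop is the same: the warning reaches the developer while the code is still being written.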

Digitizing Development Influences Training and Hiring

Enabling a solution view of a project—rather than narrow silos of tasks—also expands creativity and enhances opportunities to learn and upskill, opening career paths. The cross-pollination of expertise makes everyone involved in solution development more knowledgeable and more responsive to changes in customer requirements. In turn everyone gains a more satisfying work experience and a chance to expand their career.

◈ Training becomes continuous learning by breaking down the silos of the development lifecycle so that individuals can work across phases and be exposed to all aspects of the development process.

◈ Automating tracking and analysis of development progress and mistakes enables teams to pinpoint areas in which people need retraining or upskilling.

◈ Hiring the right talent gets a boost from digitization: as data is continuously gathered and analyzed to pinpoint the skillsets that contribute most to the successful completion of projects, the search for talent becomes more focused.

Join Our Journey to Transform Software Development


At Cisco we have the responsibility of carrying the massive technical debt created since the Internet was born while continuously adding new functionality for distributed data centers, multi-cloud connectivity, software-defined WANs, ubiquitous wireless connectivity, and security. To manage this workload, we are fundamentally changing how Cisco builds and tests software to develop products at web-scale speeds. These tools, which shape our work as we shape them, enable newly trained and veteran engineers alike to consistently produce extraordinary results.

Cisco is transforming the solution journey from conception to development to consumption. We have made significant progress, but there is still much to accomplish. We invite you to join us on this exciting transformation. As a Cisco Network Engineer, you have the opportunity to create innovative solutions using transformative toolsets that make work exciting and rewarding as you help build the future of the internet. As a Cisco DevX Engineer, you can choose to focus on enhancing the evolving toolset with development analytics and hyper-efficient workflows that enable your co-developers to do their very best work. Whichever path you choose, you’ll be an integral member of an exclusive team dedicated to customer success.

Friday, 8 May 2020

Simplifying the DevOps and NetOps Journey using Cisco SD-WAN Cloud Hub with Google Cloud


Cisco and Google Cloud have partnered to bridge cloud applications and enterprise networks by creating the new Cisco SD-WAN Cloud Hub with Google Cloud. This solution, built around Cisco SD-WAN technology and Google Cloud, simplifies the NetOps and DevOps journey by automating the allocation of SD-WAN network resources to meet application requirements.

Modern enterprise applications are composed of multiple services deployed across on-premises and cloud environments. NetOps and DevOps teams maintain the infrastructure that hosts, connects, and delivers these services. The goal of these teams: optimize the application experience. But due to the complexity of the infrastructure and the dynamism of application flows, this can be challenging. Each time a new application is deployed, the NetOps team must collect the application requirements from the DevOps team and render them into the appropriate network policy.

In this post we look at how Cisco SD-WAN Cloud Hub with Google Cloud simplifies workflow for DevOps and NetOps, by automating the tasks needed to deliver a better application experience.

Here we focus on the automation benefits of the solution. Other benefits that we don’t cover in this article are also part of the solution, such as improved security and segmentation, enhanced multi-cloud operation, and the ability of Cisco SD-WAN Cloud Hub to steer traffic through the Google Cloud backbone network.

How DevOps Will Use Cisco SD-WAN Cloud Hub with Google Cloud


DevOps teams are interested in what the network can do for their application, rather than in how it is done. From their perspective, the network is a component meant to support specific application demands. To that end, DevOps will focus on properly classifying services according to certain traffic profiles, for instance Video Streaming or VoIP. These profiles are agreed on beforehand with NetOps, and allow DevOps to express the networking needs of the services. DevOps leaves it to NetOps to best configure the network to handle each profile.

In our new solution, the DevOps team uses Google Cloud Service Directory to publish the traffic profile that best represents the network traffic generated by a given application. They can use different traffic profiles for different services, as needed. The integration of Service Directory with Google Cloud Identity and Access Management (IAM) ensures that only those in the DevOps team with the appropriate permissions can modify the traffic profile for a service.

For example, DevOps and NetOps may agree that services can be classified according to four profiles: standard, data, streaming, and conferencing. The DevOps team then uses Service Directory to associate one of the following metadata entries with each service deployed: “traffic: standard”, “traffic: data”, “traffic: streaming”, or “traffic: conferencing”. (The metadata used here is an example to illustrate the flexibility of the solution; different teams may define different profiles.)

Let’s say that the DevOps team is deploying two services with different networking needs. They want to make sure that the traffic for each is properly handled in the SD-WAN. One service is a heavy-load database backup application, with high bandwidth requirements, while the other is a screen sharing service, not only sensitive to latency but also to packet loss. Following the metadata convention agreed with the NetOps team, the DevOps team marks these services as “traffic: data” and “traffic: conferencing”, respectively.
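In a real deployment this tagging would go through the Service Directory API or the gcloud CLI; the in-memory sketch below only models the naming convention the two teams agreed on, using the service names and profiles from the example above:

```python
# Local sketch of the DevOps tagging convention. A real deployment would use
# the Service Directory API instead of this in-memory registry.
AGREED_PROFILES = {"standard", "data", "streaming", "conferencing"}

registry = {}  # service name -> published metadata

def register_service(name, profile):
    """Publish a service with its agreed traffic profile, as DevOps would."""
    if profile not in AGREED_PROFILES:
        raise ValueError(f"profile {profile!r} was not agreed with NetOps")
    registry[name] = {"traffic": profile}

register_service("db-backup", "data")             # heavy-load, high bandwidth
register_service("screen-share", "conferencing")  # latency- and loss-sensitive

print(registry["db-backup"])     # {'traffic': 'data'}
print(registry["screen-share"])  # {'traffic': 'conferencing'}
```

Note that the validation step mirrors what IAM provides in the real solution: only agreed, authorized metadata ever reaches the network side.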


How NetOps Leverages the Solution


The NetOps team, for its part, has a deep knowledge of the network and can efficiently optimize it to meet application requirements. The NetOps team uses Cisco vManage (Cisco’s centralized SD-WAN management platform) to program detailed network policies that map how each traffic profile should be rendered over the SD-WAN.

The NetOps team may decide to configure policies that specify that traffic with the profile “standard” should go through a best effort tunnel, that “data” should go through a high bandwidth tunnel, and that “streaming” and “conferencing” should go through low latency tunnels. The NetOps team can further fine tune the policies to specify that traffic for “conferencing” services should go, when possible, through highly-reliable links over the Google Cloud backbone to minimize packet loss.
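That profile-to-path mapping can be pictured as a simple lookup. The tunnel names below are illustrative placeholders, not actual vManage policy syntax:

```python
# Hypothetical rendering of NetOps profile-to-tunnel policies; tunnel names
# are illustrative and do not reflect real vManage configuration.
POLICY = {
    "standard":     "best-effort-tunnel",
    "data":         "high-bandwidth-tunnel",
    "streaming":    "low-latency-tunnel",
    "conferencing": "low-latency-tunnel",
}

def render_policy(service_metadata):
    """Map a service's published traffic profile to its SD-WAN path."""
    profile = service_metadata.get("traffic", "standard")
    return POLICY.get(profile, POLICY["standard"])

path_data = render_policy({"traffic": "data"})
path_default = render_policy({})  # untagged services fall back to best effort

print(path_data)     # high-bandwidth-tunnel
print(path_default)  # best-effort-tunnel
```

The important design point is the separation of concerns: DevOps only ever publishes the profile key, while NetOps owns and can evolve the mapping without touching any application.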

Thanks to the integration with Service Directory, vManage can discover an application’s characteristics and its networking needs in real time. By constantly monitoring Service Directory, vManage acts whenever new relevant information becomes available. For instance, as soon as the database backup service is deployed, vManage automatically retrieves the associated metadata via Service Directory and dynamically renders the network policy defined by the NetOps team. In this example, the rendered network policies steer the database backup traffic through a high-bandwidth SD-WAN path. Likewise, a conferencing application, say the aforementioned screen-sharing service, would see its traffic steered to the Google Cloud backbone.

In this way the network automatically adapts, in real time, not only to new applications, but also to changes in application requirements or changes in existing traffic profiles. The most relevant and effective network policies are always enforced and specifically tailored for each service. This greatly simplifies operations for both NetOps and DevOps teams, which now only need to make sure that the intended application profiles and network policies are in place. Service Directory and vManage coordinate to dynamically render the most effective network optimizations.

The integration of Cisco vManage and Google Cloud Service Directory leads to an improved application experience and more efficient use of SD-WAN resources.

Automatic traffic steering is also part of the solution, thanks to the correlation of application data obtained via Service Directory with vManage’s detailed, real-time view of the network infrastructure. For instance, prior to this integration, minimal losses on a high-bandwidth link might not trigger special actions during regular SD-WAN operation. With this solution in place, vManage, knowing that traffic over this link belongs to a “conferencing” application, might automatically steer that traffic through an alternate, non-lossy link. Similarly, knowing that “standard” applications are not sensitive to small losses, vManage can take advantage of the bandwidth just made available and automatically allocate flows for “standard” applications over the lossy link.
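A minimal sketch of this loss-aware re-steering logic, with invented thresholds and link names (not actual vManage behavior):

```python
# Toy sketch of loss-aware re-steering: move loss-sensitive flows off a lossy
# link and leave loss-tolerant traffic in place. Thresholds and link names
# are invented for illustration.
LOSS_SENSITIVE = {"conferencing", "streaming"}
LOSS_THRESHOLD = 0.001  # 0.1% packet loss

def resteer(flows, link_loss, alternate_link):
    """Return {flow: link}, placing loss-sensitive flows on the clean link."""
    placement = {}
    for name, profile in flows.items():
        if link_loss > LOSS_THRESHOLD and profile in LOSS_SENSITIVE:
            placement[name] = alternate_link
        else:
            placement[name] = "high-bandwidth-link"
    return placement

flows = {"screen-share": "conferencing", "db-backup": "data"}
placement = resteer(flows, link_loss=0.004, alternate_link="gcp-backbone-link")
print(placement)
# screen-share moves to the clean backbone link; db-backup stays on the
# lossy high-bandwidth link, since "data" traffic tolerates small losses.
```

What makes this possible is the combination of the two data sources: link loss comes from the SD-WAN telemetry, while the sensitivity of each flow comes from the profile published in Service Directory.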

To conclude, Cisco SD-WAN Cloud Hub with Google Cloud leverages Cisco SD-WAN and Google Cloud Service Directory to simplify the journey of the NetOps and the DevOps teams, automating the allocation of SD-WAN network resources to match applications’ demands, optimizing the application experience. All without disrupting the continuous flow with which applications are developed, deployed and supported.