Sunday, 3 April 2022

New in SecureX: Device Insights

Since its release, Cisco SecureX has helped over 10,000 customers gain better visibility into their infrastructure. As the number of devices in many customer environments continues to increase, so does the number of products with information about those devices. Between mobile device managers (MDM), posture agents, and other security products, a wealth of data is being collected but is not necessarily being shared or, more importantly, correlated. With the new device insights feature in Cisco SecureX, now available for all SecureX customers, we’re changing that.

Introducing Device Insights

Device insights, which is now generally available, extends our open, platform approach to SecureX by allowing you to discover, normalize, and consolidate information about the devices in your environment. But this isn’t just another dashboard pulling data from multiple sources. Device insights fetches data from sources you might expect, like your mobile device manager, but also leverages the wealth of data available in your Cisco Secure products such as Cisco Secure Endpoint, Orbital, Duo, and Umbrella. Combining these sources of data allows you to discover devices that may be sneaking through gaps in your normal device management controls and gain a comprehensive view into each device’s security posture and management status. With device insights, you’ll be able to answer these all-important questions:

◉ What types of devices are connected in our environment?

◉ What users have been accessing those devices?

◉ Where are those devices located?

◉ What vulnerabilities are associated with each device?

◉ Which security agents are installed?

◉ Is the security software up to date?

◉ What context do we have from technologies beyond the endpoint?

Supported Data Sources

Now, you might ask: what types of data can I bring into device insights? When we created SecureX, we built a flexible architecture based on modules that anyone can create. Device insights extends this architecture by adding a new capability to our module framework. Here’s a look at what data sources will be supported at launch:


Bringing Everything Together


Once you’ve enabled your data sources, device insights will periodically retrieve data from each source and get to work. Some sources can also publish data in real time to device insights using webhooks. We normalize all of the data and then correlate it between sources so you have one view into each of your devices, not a mess of duplicate information. This results in a single, unified dashboard with easy filtering, a high level view into your environment, and a customizable table of devices (which you can export too!). To see more information about a device, just click on one and you’ll see everything device insights knows, including which source provided which data.
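
Conceptually, the normalization and correlation step behaves like a keyed merge of per-source device records. The short Python sketch below is purely illustrative and not SecureX code: the source names, fields, and the use of the MAC address as the correlation key are assumptions for the example.

```python
# Illustrative only: merge device records from two hypothetical sources,
# keyed on the MAC address, while remembering which source supplied each field.
from collections import defaultdict

mdm_records = [
    {"mac": "AA:BB:CC:00:11:22", "hostname": "LAPTOP-042", "os": "macOS 12.3"},
]
edr_records = [
    {"mac": "aa:bb:cc:00:11:22", "hostname": "laptop-042", "agent_version": "7.5.1"},
]

def normalize(record, source):
    """Lower-case the correlation keys and tag every field with its source."""
    rec = dict(record)
    rec["mac"] = rec["mac"].lower()
    if "hostname" in rec:
        rec["hostname"] = rec["hostname"].lower()
    return {field: (value, source) for field, value in rec.items()}

merged = defaultdict(dict)
for source, records in (("mdm", mdm_records), ("edr", edr_records)):
    for record in records:
        normalized = normalize(record, source)
        mac = normalized["mac"][0]      # correlation key
        merged[mac].update(normalized)  # later sources add or overwrite fields

for mac, fields in merged.items():
    print(mac, fields)
```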


Source: cisco.com

Saturday, 2 April 2022

Cisco SD-Access in Healthcare: A Comprehensive Secure Access Solution for a Changing Industry


The healthcare industry is undergoing unprecedented change. The pandemic has accelerated the process of digitization and the need for an always available and secure digital infrastructure. In particular, Healthcare IT (HIT) faces several significant challenges:

◉ Prevent security breaches across hospitals, clinics, and research centers

◉ Protect patient and research data through standards, integration, and governance

◉ Understand and support technological innovations in healthcare

◉ Provide simple, secure access to data and analytics to all key stakeholders

To address these challenges and support the connectivity and security needs of hospitals, branch clinics, and telehealth, HIT needs to build and maintain a resilient network architecture that is secure, automated, and provides a continuous feedback loop with rich analytics.

Cisco Software-Defined Access (SD-Access) is a network controller-based solution that helps organizations enable policy-based automation to address access control and segmentation. With its broad adoption in healthcare organizations worldwide, a set of use cases and best practices have emerged that demonstrate how HIT is using Cisco SD-Access to address changing network requirements and meet the needs of the healthcare workforce and patients.


Simplify Network Expansion


Healthcare networks in the modern all-digital world must provide service-level resiliency and be modifiable on demand. Cisco SD-Access provides ample support for site additions and site expansions and is flexible enough to spin up a new site in hours. It provides full lifecycle management of existing campus and branch environments in a simple and secure manner.

SD-Access starts by providing workflows for automating the physical network underlay using the LAN automation capabilities in Cisco DNA Center. LAN automation simplifies network operations and provides a zero-touch plug-and-play workflow. LAN automation can also quickly expand the network using extended nodes to spaces such as parking lots and warehouses.

HIT can build an automated network fabric and seamlessly connect external networks to the fabric borders. The network fabric also provides capabilities for HIT to connect their current networks to the fabric edges and extend security and segmentation benefits. SD-Access enables the creation of new branch and remote sites on-demand―from small fabric in a box for branches to extensive deployments with thousands of switches. It provides zero-touch network automation to bring up the routing underlay along with setting up the fabric and managing day-N operations on the network. All of this is possible through an intent-based network interface in Cisco DNA Center.

Built-in Network Fault Tolerance and Service Resiliency


The healthcare network is mission-critical, requiring minimal downtime. The services and the network must be highly resilient to support healthcare workers and patients. Cisco SD-Access is built on a highly fault-tolerant fabric architecture with redundant elements at all critical points. This includes fully redundant network peering points, control plane elements, StackWise Virtual Links (SVLs), and stacking on edge switches. Additionally, the services are always available through a Cisco DNA Center three-node clustered management system and fully distributed multi-node Cisco Identity Service Engine (ISE). The design of the network is flexible to accommodate even the most stringent needs of healthcare networks.

Secure Segmentation Based on Organizational Functions


Healthcare organizations have separate departments performing different and unique functions. HIT has found it highly useful to segment and secure communication among these different organizational entities.

Beyond communications, healthcare systems must safeguard the medical records and financial information of patients. In the U.S., hospitals and medical centers are required to have Health Insurance Portability and Accountability Act (HIPAA)-compliant wired and wireless networks that can provide complete and constant visibility into network traffic. These networks must protect sensitive data and medical devices such as electronic medical records (EMR) servers, vital sign monitors, and nurse workstations from malicious devices that seek to compromise the network. Prescription drug safes should be able to communicate with respective destinations even during a network impact, such as Cisco ISE being temporarily unavailable. Administrators can implement a critical VLAN for fabric edges, where devices like prescription safes reside, when access verification services are unreachable.

Close collaboration among healthcare staff and instantaneous access to a comprehensive view of health-related data, aggregated and collated from many disparate segments, are placing increasing demands on the network infrastructure. The Cisco SD-Access architecture provides automated network fabric configuration, identity-based policy and segmentation, AI-driven insights, and telemetry services.

Cisco SD-Access addresses the need for complete data and control plane isolation between patient and visitor devices and medical and research facility devices by using macro segmentation. By onboarding devices into different overlay virtual networks (VNs), healthcare facilities can achieve complete data isolation and provide security among different departments and users.

Provide Rich Network Services


One of the biggest demands on the healthcare IT network infrastructure is to handle guest and patient traffic separate from staff and sensitive patient data. Mobility and roaming across campus buildings are therefore key requirements for healthcare networks. Cisco SD-Access has a built-in Fabric Enabled Wireless (FEW) architecture that enables seamless mobility for endpoints and devices connected to the edge of the network.

In a healthcare facility, various medical devices are in different locations but should be managed in a unified manner for proper usage and availability. SD-Access allows IT to place these devices in a separate virtual network and route their traffic to a common border over a tunneled interface. This provides clean and secure segmentation of anchored traffic to a common exit point in the network.

Another important requirement of healthcare networks today is the ability to access medical records, security camera recordings across sites, staff records, and other sensitive data from a central server. In most cases, these data sets need to be accessible on-demand at a subset of branch sites. Cisco SD-Access helps in creating groups of sites that need to receive these types of records through its built-in multicast features.

Improve Network Visibility and Assurance


Network administrators should be able to efficiently manage and monitor their networks to quickly respond to the dynamic needs of healthcare systems. To improve the performance of a network, attached devices, and applications, the deployment should use telemetry to proactively predict performance and security risks.

Cisco DNA Center with Cisco SD-Access Assurance provides a comprehensive solution that addresses not just reactive network monitoring but also enables proactive monitoring with network health and issue dashboards. In addition to the network, client, and application health dashboards, the SD-Access Health Dashboard provides analytics and insights for both network underlay and fabric overlay by correlating actionable insights based on a wide variety of telemetry data ingested from sources throughout the network.

SD-Access provides visibility insights into the fabric, virtual network health, transit, and peer network connectivity health using a health score metric. The health of the fabric is quantified using Key Performance Indicators (KPIs) of the operational state of the fabric. These KPIs are also used to identify issues in the fabric. The operational data is collected from fabric devices using telemetry.
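
As an illustration of how a health score can be derived from KPIs, here is a minimal, hypothetical roll-up in Python. The KPI names, weights, and threshold are invented for the example; the actual SD-Access Assurance scoring is computed inside Cisco DNA Center.

```python
# Illustrative weighted roll-up of KPI scores into a single fabric health score.
# KPI names, weights, and the threshold are invented for this example; the real
# SD-Access Assurance scoring is computed inside Cisco DNA Center.
kpis = {
    "control_plane_reachability": 1.00,  # normalized 0.0-1.0, derived from telemetry
    "fabric_edge_uptime": 0.98,
    "border_handoff_health": 0.90,
}
weights = {
    "control_plane_reachability": 0.5,
    "fabric_edge_uptime": 0.3,
    "border_handoff_health": 0.2,
}

health_score = 100 * sum(kpis[name] * weights[name] for name in kpis)
print(f"Fabric health score: {health_score:.0f}/100")

# A simple policy: raise an issue when the score drops below a threshold.
if health_score < 90:
    print("Raise an issue: fabric health degraded")
```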

Source: cisco.com

Tuesday, 29 March 2022

Hyperconverged Infrastructure with Harvester: The start of the Journey


Deploying and running data center infrastructure management – compute, networking, and storage – has traditionally been manual, slow, and arduous. Data center staffers are accustomed to doing a lot of command line configuration and spending hours in front of data center terminals. Hyperconverged Infrastructure (HCI) is the way out: It solves the problem of running storage, networking, and compute in a straightforward way by combining the provisioning and management of these resources into one package, and it uses software defined data center technologies to drive automation of these resources. At least in theory.

Recently, a colleague and I have been experimenting with Harvester, an open source project to build a cloud native, Kubernetes-based Hyperconverged Infrastructure tool for running data center and edge compute workloads on bare metal servers.

Harvester brings a modern approach to legacy infrastructure by running all data center and edge compute infrastructure, virtual machines, networking, and storage, on top of Kubernetes. It is designed to run containers and virtual machine workloads side-by-side in a data center, and to lower the total cost of data center and edge infrastructure management.

Why we need hyperconverged infrastructure

Many IT professionals know about HCI concepts from using products from VMware, or by employing cloud infrastructure like AWS, Azure, and GCP to manage virtual machine applications, networking, and storage. The cloud providers have made HCI flexible by giving us APIs to manage these resources with less day-to-day effort, at least once the programming is done. And, of course, cloud providers handle all the hardware – we don’t need to stand up our own hardware in a physical location.

Figure: Multi-node Harvester cluster

However, most of the current products that support converged infrastructure tend to lock customers into using their own technology, and they also usually come with licensing fees. Now, there is nothing wrong with paying for a technology when it helps you solve your problem. But single-vendor solutions can wall you off from knowing exactly how these technologies work, limiting your flexibility to innovate or react to issues.

If you could use a technology that combines with other technologies you are already required to know today – like Kubernetes, Linux, containers, and cloud native – then you could theoretically eliminate some of the headaches of managing edge compute / data centers, while also lowering costs.

This is what the people building Harvester are attempting to do.

Adapting to the speed of change


Cloud providers have made it easier to deploy and manage the infrastructure surrounding applications. But this has come at the expense of control, and in some cases performance.

HCI, which the cloud providers support and provide, gets us some control back. However, the recent rise of application containers over virtual machines changed again how infrastructure is managed and even thought of, abstracting the layers of application packaging while making that packaging lighter weight than last-generation VM packaging. Containers also provide application environments that are faster to start up and easier to distribute because of the decreased image sizes. Kubernetes takes container technologies like Docker to the next level by adding in networking, storage, and resource management between containers, in an environment that connects everything together. Kubernetes allows us to integrate microservice applications with automation and speedy deployments.

Kubernetes offers an improvement on HCI technologies and methodologies. It provides a better way for developers to create cloud agnostic applications, and to spin up workloads in containers more quickly than traditional VM applications. Kubernetes did not aim to replace HCI, but it did make a lot of the goals of software deployment and delivery simpler, from an HCI perspective.

In a lot of environments, Kubernetes runs inside VMs. So you still need external HCI technology to manage the underlying infrastructure for the VMs that are running Kubernetes. The problem now is that if you want to run your application in Kubernetes containers on infrastructure you control, you have different layers of HCI to support. Even if you get better application management with Kubernetes, infrastructure management becomes more complex. You could try to use vanilla Kubernetes for every part of your edge-compute / data center stack and run it as your bare metal operating system instead of traditional HCI technologies, but you have to be OK with migrating all workloads to containers, and in some cases that is a high hurdle to clear, not to mention the HCI networking that you will need to migrate over to Kubernetes.

The good news is that there are IoT and edge compute projects that can help. The Rancher organization, for example, is creating a lightweight version of Kubernetes, k3s, for IoT compute resources like the Raspberry Pi and Intel NUC computers. It helps us push Kubernetes onto more bare metal infrastructure. Other projects, like KubeVirt, have created technologies to run virtual machines inside containers on top of Kubernetes, which speeds up VM deployment and lets us use Kubernetes for our virtual networking layers and all application workloads (containers and VMs). And other technology projects, like Rook and Longhorn, help with persistent storage for HCI through Kubernetes.
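
To make the KubeVirt idea concrete: because KubeVirt models each VM as a Kubernetes custom resource, you can list VMs with the same tooling you use for pods. The sketch below uses the official Kubernetes Python client and assumes a reachable cluster with KubeVirt installed and a local kubeconfig; it is illustrative, not part of Harvester itself.

```python
# Minimal sketch: KubeVirt models each VM as a Kubernetes custom resource, so the
# standard Kubernetes Python client can list VMs just like it lists pods.
# Assumes a reachable cluster with KubeVirt installed and a local kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster
api = client.CustomObjectsApi()

vms = api.list_cluster_custom_object(
    group="kubevirt.io", version="v1", plural="virtualmachines"
)
for vm in vms.get("items", []):
    meta = vm["metadata"]
    print(f'{meta["namespace"]}/{meta["name"]}')
```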

If only these could combine into one neat package, we would be in good shape.

Hyperconverged everything


Knowing where we have come from in the world of Hyperconverged Infrastructure for our data centers and our applications, we can now move on to what combines all these technologies together. Harvester packages up k3s (lightweight Kubernetes), KubeVirt (VMs in containers), and Longhorn (persistent storage) to provide Hyperconverged Infrastructure for bare metal compute using cloud native technologies, and wraps an API / Web GUI bow on it for convenience and automation.

Source: cisco.com

Saturday, 26 March 2022

Why Transition to BGP EVPN VXLAN in Enterprise Campus

Network Virtualization Convergence in Enterprise Campus

Campus networks are the backbone of enterprises, providing connectivity to critical services and applications. Over time, many of these networks were deployed with a variety of overlay technologies to accomplish the desired outcome. While these traditional overlays met the technical and business requirements, many of them lacked manageability and scalability, introducing complexity into the network. The industry-standard BGP EVPN VXLAN is a converged overlay solution providing unified, control-plane-based Layer 2 extension and Layer 3 segmentation over an IP underlay. Purpose-built for the enterprise campus and data center, it addresses the well-known challenges of classic networking protocols while providing L2/L3 network services with greater flexibility, mobility, and scalability.

Fig #1: BGP EVPN VXLAN converges Layer 2 and Layer 3

Legacy Layer 2 Overlay Networks Departure


Enterprise campus networks have historically been deployed with several types of Layer 2 overlay network extensions as products and technologies evolved. Classic data-plane-based Layer 2 extended networks built upon a flood-and-learn basis can be significantly simplified, scaled, and optimized when migrating to the next-generation BGP EVPN VXLAN solution:

◉ STP – Enterprise campus networks have operated the Spanning Tree Protocol (STP) since its inception. Several enhancements and alternatives have been developed to simplify and optimize STP, yet it remains challenging to operate. BGP EVPN VXLAN replaces STP with an L2 overlay, enabling new possibilities for IT, including controlling flood-domain size, suppressing redundant ARP/ND traffic, and providing seamless mobility while retaining the original IPv4/v6 address plan when transitioning from a distribution-switch or centralized firewall gateway running over an STP network.

◉ 802.1ad – IEEE 802.1ad (QinQ) is a common multi-tenant Layer 2 network solution. The stacked IEEE 802.1Q headers tunnel individual tenant VLANs over a limited set of managed core VLANs, reducing the bridging domain and accommodating overlapping tenant VLAN IDs across the core network. BGP EVPN VXLAN enables the opportunity to transform the Layer 2 backbone network with a simplified IP transport utilizing VXLAN while continuing to bridge single- or double-tagged IEEE 802.1Q VLANs across the fabric.

◉ L2TPv3 – Layer 2 Tunneling Protocol version 3 (L2TPv3) provides a simple point-to-point L2 overlay extension over an IP core between statically paired remote network devices. Such flood-and-learn-based Layer 2 overlay networks can be migrated to BGP EVPN VXLAN, providing far more advanced and flexible Layer 2 extension across an IP core network.

◉ VPWS/VPLS – Standards bodies ratified several Layer 2 network extensions as the industry evolved toward high-speed Metro Ethernet networking across the MAN/WAN. Enterprise networks quickly followed, adopting Ethernet over MPLS (EoMPLS) or Virtual Private LAN Service (VPLS) solutions operating over an IP/MPLS-based backbone. These networks can be simplified, optimized, and made more resilient with BGP EVPN VXLAN, which supports flexible Layer 2 overlay topologies with control-plane-based Layer 2 extensions that improve end-to-end network performance and user experience.

Traditional Layer 3 Overlays Convergence


Like Layer 2 extended networks, segmented Layer 3 networks can be deployed with various overlay technologies. Running parallel protocol sets, each supporting either routing or bridging, adds complexity as the network and its demands grow. Because BGP EVPN VXLAN converges routing and bridging capabilities, it reduces control-plane and operational overhead, resulting in simplicity, scale, and resiliency.

◉ Multi-VRF – A simple hop-by-hop Layer 3 virtualization approach that segments a Layer 3 physical interface into logical IEEE 802.1Q VLANs, one per virtual network, for small to mid-size network environments. As segmentation requirements increase, the IT operational challenges and control-plane overhead of managing Multi-VRF also increase. BGP EVPN leverages IP VRFs to dynamically build a segmented routed network, and with VXLAN the data-plane segmentation is handled at the network edge, enabling a simplified underlay IP core and a scalable Layer 3 overlay routed network.

◉ GRE – An ideal solution for building overlay networks across IP networks without implementing segmentation hop-by-hop in the underlay network. The GRE-based overlay, however, supports only limited point-to-point or point-to-multipoint topologies. Following similar principles, BGP EVPN VXLAN simplifies the network with a single control plane, dynamically builds VXLAN tunnels, and supports flexible overlay routing topologies. ECMP-based underlay and overlay networks provide best-in-class resiliency for mission-critical networks.

◉ MPLS VPN – MP-BGP capabilities have been widely adopted in large enterprises to address network segmentation across self-managed IP/MPLS networks. The well-proven and scalable MPLS VPN overcomes the challenges of several alternative technologies using a shim-layer label-switching solution. MPLS VPN-enabled enterprise networks can extend existing MP-BGP designs and transition from the VPNv4/VPNv6 to the new L2VPN EVPN address family, supporting seamless migration. The edge-to-edge VXLAN data plane can converge MPLS VPN, mVPN, and VPLS overlays into a single unified control plane and enable enhanced integrated routing and bridging. It further simplifies the IP core network by removing MPLS LDP dependencies across the paths.

Cisco Catalyst 9000 – Seamless and Flexible BGP EVPN VXLAN Transition


Transitioning from classic products and technologies has never been an easy task, especially when downtime for mission-critical networks is practically impossible. The Cisco Catalyst 9000, combining 30+ years of software innovation with the industry’s most sophisticated network operating system, Cisco IOS XE, gives enterprise customers great flexibility to adopt BGP EVPN VXLAN seamlessly, whether as part of an existing operation or at the start of a new networking journey, while maintaining full backward compatibility with classic products and overlay networks to support non-stop business communications.

Fig #2: BGP EVPN VXLAN design alternatives

The end-to-end network and rich feature integration can be enabled independently of how the underlying network infrastructure is built, as illustrated above and summarized below.

Design options compared across the three deployment models (Layer 3 Access, Cisco StackWise Virtual, and ESI Layer 2 Multihome):

◉ Leaf Layer – Layer 3 Access: Access; Cisco StackWise Virtual: Distribution; ESI Layer 2 Multihome: Distribution

◉ Spine Layer (all models) – Core or other

◉ Border Layer (all models) – Data Center ACI, WAN, DMZ, or more

◉ Overlay Network Type Support (all models) – Layer 3 Routed, Distributed Anycast Gateway (Symmetric IRB), Centralized Gateway (Asymmetric IRB), Layer 2 Cross-Connect

◉ Overlay Unicast Support (all models) – IPv4 and IPv6 Unicast

◉ Overlay Multicast Support (all models) – IPv4 and IPv6 Tenant Routed Multicast

◉ Wireless Network Integration (all models) – Local Mode: Central Switching; FlexConnect Mode: Central and Distributed Local Switching

◉ Data Center Integration (all models) – BGP EVPN VXLAN: Common EN/DC Fabric; Cisco ACI: Nexus 9000 Border Layer 3 Handoff

◉ Multi-site EVPN Domain (all models) – Campus Catalyst 9000 switches extending the fabric with Nexus 9000 Multi-site Border Gateway integration

◉ External Domain Handoff (all models) – L2: Untag, 802.1Q, 802.1ad, EoMPLS, VPLS; L3: Multi-VRF, MPLS VPN, SD-WAN, GRE

◉ Data Plane Load Sharing – Layer 3 Access: L3 ECMP; Cisco StackWise Virtual: L2 per-flow port-channel hash, L3 ECMP, Multicast (S, G) + next hop; ESI Layer 2 Multihome: L2 per port-VLAN load balancing, L3 ECMP, Multicast (S, G) + next hop

◉ System Resiliency – Layer 3 Access: Cisco StackWise-1T, Cisco StackWise-480, Cisco StackPower, Fast Reload, Stateful Switchover (SSO), Ext. Fast Software Upgrade, In-Service Software Upgrade (ISSU); Cisco StackWise Virtual: Cisco StackWise Virtual, SSO, ISSU; ESI Layer 2 Multihome: SSO, ISSU

◉ Network Resiliency – Layer 3 Access: BFD (Single/Multi-Hop), Graceful Restart, Graceful Insertion, L2: EtherChannel, UDLD, etc.; Cisco StackWise Virtual: BFD (Single/Multi-Hop), Graceful Restart, Graceful Insertion, L2: UDLD, etc.; ESI Layer 2 Multihome: BFD (Single/Multi-Hop), Graceful Restart, Graceful Insertion

Scalable Architecture Matters


IT organizations adopting BGP EVPN VXLAN must consider how to scale multi-dimensionally when building large-scale fabrics. This demands designing the right architecture based on proven principles in the networking world. Whether the network is physical or virtual, it should be designed with an appropriate level of hierarchy to support a best-in-class, scalable solution for a large enterprise network. Smaller fault domains and condensed core-layer topologies enable resilient networks and are well-known benefits of hierarchical design.

As the number of EVPN leaf nodes increases, the number of overlay prefixes and the blast radius in the network grow. Network architects should consider building a structured Multi-Site overlay solution that allows the enterprise campus to grow by dividing fabric domains along different boundaries and using fabric border gateways to interconnect them.

Stay tuned: we’ll share more thoughts on how the Cisco Catalyst 9000 and Nexus 9000 can bring next-generation BGP EVPN VXLAN with Multi-Site solutions. And as always, if you are already on the journey to design and build a scalable end-to-end BGP EVPN VXLAN campus network, simply reach out to your Cisco sales team to partner with you and enable the vision.

Source: cisco.com

Thursday, 24 March 2022

Why Automation Will Unlock The Power of AI in Networking (Part 1)


You have probably heard the old adage “Correlation does not imply causation”. The idea that one cannot deduce a causal relationship between two events merely because they occur in association has a cool Latin name: cum hoc ergo propter hoc (“with this, therefore because of this”), which hints that this adage is even older than you might think.

What most people don’t know is that all the cool deep learning algorithms out there actually fall prey to this fallacy. No matter how fancy they are, these algorithms merely rely on association, but they have no common sense (which can be thought of as some kind of causal model of the world).

In this article, we will explore a few key ideas around the topics of correlation and causality, and more importantly, why you should care about this and how automation can help us in this regard!

Correlation by chance

If you have an interest in data analytics or statistics, you have probably come across the concept of spurious correlations. The term was coined by the famous statistician Karl Pearson in the late 19th century, but it has recently been popularized by the Spurious Correlations website (and book) by Tyler Vigen, which offers many examples such as this one:

Here we observe that the number of non-commercial space launches in the world happens to match almost perfectly the number of sociology doctorates awarded in the US every year (in terms of relative variation, not in absolute value). These examples are of course meant as jokes, and this makes us laugh because it goes against common sense. There isn’t any connection between space launches and sociology doctorates, so it is pretty clear that something is wrong here.


Now, examples such as this one are not exactly what Karl Pearson had in mind when he coined the term, because they are the result of chance rather than a common cause. Instead, we are dealing with a problem of statistical significance: although the correlation coefficient is nearly 79%, it is based on only 13 data points for each series, which makes the possibility of correlation by chance very real. Statisticians have designed tools to compute the probability that two completely independent processes (such as space launches and sociology doctorates) produce data with a correlation at least as extreme as a given value: this is statistical testing, and the probability in question is called a p-value.

I applied a statistical test to the above example (see this notebook if you want to test it yourself and see other examples) and obtained a p-value of 0.13%. I also tested this result empirically by generating one million random time-series and counting how many of them had a correlation with the number of worldwide non-commercial space launches higher than 78.9%. No surprises here: roughly 0.13% of my trials fall into that category.
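
Below is a scaled-down, reproducible version of that empirical check. It assumes an illustrative 13-point series (the values in series_a are placeholders, not the real launch counts) and uses 100,000 random trials instead of one million.

```python
# Scaled-down version of the empirical check: how often does a random series
# correlate with a fixed 13-point series at |r| >= 0.789?
# The values in series_a are illustrative placeholders, not the real launch counts.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
series_a = np.array([54, 46, 42, 50, 43, 41, 46, 39, 37, 45, 45, 41, 54], dtype=float)

threshold = 0.789
trials = 100_000
hits = 0
for _ in range(trials):
    random_series = rng.normal(size=series_a.size)
    r, _ = stats.pearsonr(series_a, random_series)
    if abs(r) >= threshold:
        hits += 1

# The fraction should land close to the ~0.13% p-value quoted above.
print(f"Fraction of random series with |r| >= {threshold}: {hits / trials:.4%}")
```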


One important lesson here: by searching long enough in a large dataset, you will always find some nicely correlated examples. By no means should you conclude that there is some actual relationship between them, let alone causation!

Correlation due to common causes


Now, you can be in a situation where not only is the correlation high, but the sample count is also high, and statistical testing will be of no help (that is, in the above example, you would never be able to generate a random time-series more correlated than your real data). Yet you still cannot conclude that you are looking at real causation!

To illustrate this fact vividly, consider the following (made up) example featuring two processes: process A generates a time-series and process B generates discrete events. A realization of these processes is shown below:


We observe a systematic build-up of time-series A, followed by an event B. For the sake of illustration, let us assume that we have a very large dataset of such time-series and event data, and they all look pretty much like my diagram. The above example has a correlation of 27.62% and an infinitesimal p-value, which rules out correlation by chance. The build-up of A happens prior to event B, so it seems clear that it is a cause of B, right?

But what if I told you that A represents the number of people observed on a platform in a train station and that B corresponds to the arrival of a train on this platform? Then it all makes sense of course. Passengers accumulate on the platform, the train arrives, and most passengers hop on the train. Does that mean that the passengers cause the train to arrive? Of course not! These processes do not cause each other, but they share a common cause: the timetable!
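
To see this numerically, here is a small, made-up simulation in the same spirit as the platform example: a timetable drives both the passenger count (series A) and the train arrivals (events B). The arrival rate and the 30-minute schedule are arbitrary choices for the illustration.

```python
# Synthetic version of the platform example: the timetable (the common cause)
# drives both the passenger count A and the train-arrival events B.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
minutes = 24 * 60
timetable = np.zeros(minutes)
timetable[::30] = 1                      # a train every 30 minutes

passengers = np.zeros(minutes)
count = 0
for t in range(minutes):
    count += rng.poisson(2.0)            # passengers trickle onto the platform
    passengers[t] = count                # platform count just before any departure
    if timetable[t]:
        count = 0                        # the train arrives and the platform empties

r, p = stats.pearsonr(passengers, timetable)
print(f"corr(A, B) = {r:.2f}, p-value = {p:.1e}")
# The correlation is real and statistically significant, yet neither series
# causes the other: both are driven by the timetable.
```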

Source: cisco.com

Tuesday, 22 March 2022

Get Ready for Machine Learning Ops (MLOps)

There are a lot of articles and books about machine learning. Most focus on building and training machine learning models. But there’s another interesting and vitally important component to machine learning: the operations side.

Let’s look into the practice of machine learning ops, or MLOps. Getting a handle on AI/ML adoption now is a key part of preparing for the inevitable growth of machine learning in business apps in the future.

Machine Learning is here now and here to stay

Under the hood of machine learning are well-established concepts and algorithms. Machine learning (ML), artificial intelligence (AI), and deep learning (DL) have already had a huge impact on industries, companies, and how we humans interact with machines. A McKinsey study, The State of AI in 2021, outlines that 56% of all respondents (companies from various regions and industries) report AI adoption in at least one function. The top use-cases are service-operations optimization, AI-based enhancements of products, contact-center automation and product-feature optimization. If your work touches those areas, you’re probably already working with ML. If not, you likely will be soon.

Several Cisco products also use AI and ML. Cisco AI Network Analytics within Cisco DNA Center uses ML technologies to detect critical networking issues, anomalies, and trends for faster troubleshooting. Cisco Webex products have ML-based features like real-time translation and background noise reduction. The cybersecurity analytics software Cisco Secure Network Analytics (Stealthwatch) can detect and respond to advanced threats using a combination of behavioral modeling, multilayered machine learning and global threat intelligence.

The need for MLOps

When you introduce ML-based functions into your applications – whether you build them yourself or bring them in via a product that uses them – you are opening the door to several new infrastructure components, and you need to be intentional about building your AI or ML infrastructure. You may need domain-specific software, new libraries and databases, and maybe new hardware such as GPUs (graphics processing units). Few ML-based functions are small projects, and the first ML projects in a company usually need new infrastructure behind them.

This has been discussed and visualized  in the popular NeurIPS paper, Hidden Technical Debt in Machine Learning Systems, by David Sculley and others in 2015. The paper emphasizes that it’s important to be aware of the ML system as a whole, and not to get tunnel vision and only focus on the actual ML code. Inconsistent data pipelines, unorganized model management, a lack of model performance measurement history, and long testing times for trying newly introduced algorithms can lead to higher costs and delays when creating ML-based applications.

The McKinsey study recommends establishing key practices across the whole ML life cycle to increase productivity, speed, reliability, and to reduce risk. This is exactly where MLOps comes in.

Figure: Looking at an ML architecture holistically, the ML code is only a small part of the whole system.

Understanding MLOps


Just as the DevOps approach tries to combine software development and IT operations, machine learning operations (MLOps) tries to combine data and machine learning engineering with IT or infrastructure operations.

MLOps can be seen as a set of practices which add efficiency and predictability to the design, build phase, deployment, and maintenance of machine learning models. With a defined framework, we can also automate machine learning workflows.

Here’s how to visualize MLOps: After setting the business goals, desired functionality, and requirements, a general machine learning architecture or pipeline can look like this:

Figure: A general end-to-end machine learning pipeline.

Infrastructure

The whole machine learning life cycle needs a scalable, efficient, and secure infrastructure where separate software components for machine learning can work together. The most important part here is to provide a stable base for CI/CD pipelines of machine learning workflows, including their complete toolset, which is currently highly heterogeneous, as you will see further below.

In general, proper configuration management for each component, as well as containerization and orchestration, are key elements for running stable and scalable operations. When dealing with sensitive data, access control mechanisms are highly important to deny access for unauthorized users. You should include logging and monitoring systems where important telemetry data from each component can be stored centrally. And you need to plan where to deploy your components: Cloud-only, hybrid or on-prem. This will also help you determine if you want to invest in buying your own GPUs or move the ML model training into the cloud.

Examples of ML infrastructure components are:

◉ Kubernetes
◉ Public cloud providers
◉ On-premises hardware like Cisco HyperFlex and Unified Computing System
◉ OpenStack

Data sourcing

Leveraging a stable infrastructure, the ML development process starts with the most important component: data. The data engineer usually needs to collect and extract lots of raw data from multiple data sources and insert it into a destination or data lake (for example, a database). These steps form the data pipeline. The exact process depends on the components used: data sources need standardized interfaces to extract the data and stream it, or insert it in batches, into a data lake. The data can also be processed in motion with streaming computation engines.

Data sourcing examples include:

◉ Stream processing: Apache Kafka, fluentd
◉ Streaming computation engine: Apache Spark, Apache Flink
◉ Any databases (relational, non-relational): PostgreSQL, MongoDB, influxDB
◉ Data lake platforms and data warehouses
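
Putting such a pipeline into code, here is a minimal sketch that consumes events from a Kafka topic and lands them in a PostgreSQL table. The topic name, table schema, and connection details are placeholders, not a reference implementation.

```python
# Minimal sketch of a data pipeline: consume raw events from a Kafka topic and
# insert them into a PostgreSQL table acting as the destination / data lake.
# The topic name, table schema, and connection details are placeholders.
import json

import psycopg2
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "sensor-events",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)
conn = psycopg2.connect(host="localhost", dbname="lake", user="etl", password="...")
cur = conn.cursor()

for message in consumer:
    event = message.value
    cur.execute(
        "INSERT INTO raw_events (device_id, ts, payload) VALUES (%s, %s, %s)",
        (event["device_id"], event["timestamp"], json.dumps(event)),
    )
    conn.commit()  # per-message commits keep the sketch simple; batch in practice
```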

Data management

If not already pre-processed, this data needs to be cleaned, validated, segmented, and further analyzed before going into feature engineering, where the properties from the raw data are extracted. This is key for the quality of the predicted output and for model performance, and the features have to be aligned with the selected machine learning algorithms. These are critical tasks and rarely quick or easy. Based on a survey from the data science platform Anaconda, data scientists spend around 45% of their time on data management tasks. They spend just around 22% of their time on model building, training, and evaluation.

Data processing should be automated as much as possible. There should be sufficient centralized tools available for data versioning, data labeling and feature engineering.

Data management examples:

◉ Data version control: Pachyderm
◉ Feature storage: Feast
◉ Data Exploration: Pandas
◉ Data labeling (for images): CVAT
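
For a flavor of what these data management steps look like in practice, here is a small pandas sketch covering cleaning, validation, and a simple engineered feature. The file, column names, and the rolling-average feature are made up for the example.

```python
# Minimal sketch of data cleaning, validation, and feature engineering with pandas.
# The file, column names, and the rolling-average feature are made up for the example.
import pandas as pd

df = pd.read_csv("raw_events.csv", parse_dates=["timestamp"])

# Cleaning: drop duplicates and rows with missing key values.
df = df.drop_duplicates().dropna(subset=["device_id", "temperature"])

# Validation: keep only physically plausible readings.
df = df[df["temperature"].between(-40, 125)]

# Feature engineering: a per-device rolling average as a model input.
df = df.sort_values(["device_id", "timestamp"])
df["temp_rolling_mean"] = (
    df.groupby("device_id")["temperature"]
      .transform(lambda s: s.rolling(window=12, min_periods=1).mean())
)

df.to_parquet("features.parquet")  # versioned feature sets could go to a feature store
```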

ML model development

The next step is to build, train, and evaluate the model, before pushing it out to production. It is crucial to automate and standardize this step, too. The best case would be a proper model management system or registry which features the model version, performance, and other parameters. It is very important to keep track of the metadata of each trained and tested ML model so that ML engineers can test and evaluate ML code more quickly.

It’s also important to have a systematic approach, as data will change over time. The previously selected data features may have to be adapted during this process in order to stay aligned with the ML model. As a result, the data features and ML models need to be updated, and this again triggers a restart of the process. Therefore, the overall goal is to get feedback on the impact of code changes without many manual process steps.

ML model development examples:

◉ ML frameworks: Tensorflow, PyTorch, Keras
◉ Notebook / code management: Jupyter
◉ Model management: Kubeflow
◉ Experiment tracking: mlflow, Tensorboard
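
As a minimal example of experiment tracking, the sketch below logs the parameters, a metric, and the model artifact of a single training run with mlflow. The scikit-learn model and the iris dataset are simple stand-ins for a real training job.

```python
# Minimal sketch of experiment tracking with mlflow: each run records its
# parameters, a metric, and the trained model so runs can be compared later.
# The iris dataset and random forest are simple stand-ins for a real training job.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 4}
    model = RandomForestClassifier(**params).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")  # store the model artifact itself
```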

Production

The last step in the cycle is the deployment of the trained ML model, where the inference happens. This process will provide the desired output of the problem which was stated in the business goals defined at project start.

How to deploy and use the ML model in production depends on the actual implementation. A popular method is to create a web service around it. In this step it is very important to automate the process with a proper CD pipeline. Furthermore, it’s crucial to keep track of the model’s performance in production, and its resource usage. Load balancing also needs to be engineered for the production installation of the application.

ML production examples:

◉ Model serving: BentoML, KServe, Seldon core
◉ Model observability: Evidently
◉ Logging & monitoring: Grafana, Prometheus
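
Conceptually, serving boils down to wrapping the trained model in a web endpoint; tools like BentoML and KServe add scaling, versioning, and monitoring hooks on top of that idea. The sketch below shows the bare concept with FastAPI, assuming a saved model file and a module named serve.py.

```python
# Conceptual sketch of serving a trained model as a web service with FastAPI.
# Serving tools such as BentoML or KServe add scaling, versioning, and
# monitoring on top of this idea. The model file and feature names are placeholders.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # trained model saved in the previous step

class Features(BaseModel):
    sepal_length: float
    sepal_width: float
    petal_length: float
    petal_width: float

@app.post("/predict")
def predict(features: Features):
    row = [[features.sepal_length, features.sepal_width,
            features.petal_length, features.petal_width]]
    return {"prediction": int(model.predict(row)[0])}

# Run locally with: uvicorn serve:app --host 0.0.0.0 --port 8080
```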

Where to go from here?


Ideally, the project will use a combined toolset or framework across the whole machine learning life cycle. What this framework looks like depends on business requirements, application size, and the maturity of ML-based projects used by the application.

Source: cisco.com

Sunday, 20 March 2022

Private 5G Delivered on Your Terms


Private 5G is a hot topic as enterprises seek industrial wireless IoT solutions to modernize their business for increased productivity and efficiency. In newly emerging cases, wired solutions are not enough, such as in sectors like hospitality where “protected buildings” limit running new cables. For manufacturing and other industries, critical processes like robotic assembly of essential parts (jet turbines, automotive transmissions, or medical devices) and autonomously guided vehicles need a very low-latency, high-reliability solution like private 5G, particularly when those processes co-exist with humans.

On Feb. 3, 2022, we introduced Cisco Private 5G as part of “The Network. Powering Hybrid Work” launch. During this event, we shared our view that the future of hybrid work expands beyond people collaborating with people and now includes people collaborating with things. We are now beginning to share many attractive use cases for introducing private 5G alongside Wi-Fi into enterprise networks. As we move towards Mobile World Congress (MWC) at the end of February, we’ll reveal more about our private 5G go-to-market strategies and discuss exciting new opportunities for our global service provider partners.

Connecting everyone and everything


Wireless networking and IoT will transform industries by digitalizing Operational Technology (OT) just as profoundly as the cloud transformed Information Technology (IT). And enterprises are already waiting in anticipation, with a 2021 GSMA Intelligence market report showing that a combination of digital transformation and labor shortages is expected to see enterprise IoT connections quadruple to 23.6 billion by 2030, accounting for 63 percent of total IoT connections. With all the pieces in place, companies with a strategy to converge their IT and OT operations will experience significant gains in productivity and efficiency, creating a major competitive advantage.

With the convergence of IT and OT, hybrid work becomes about connecting everyone and everything. Delivering IoT at scale is just as important as connecting people, allowing hybrid workers to gain access to sensors, monitors, robots, and more. Our vision of the future of work is built on wireless through a combination of private 5G and Wi-Fi, where enterprises can modernize, automate their operations, and benefit from the resulting productivity gains.

But making the change is not easy. There are all kinds of confusing options right now, so where do you begin? We can help by delivering a private 5G solution on your terms.

What separates Cisco Private 5G from the rest?


We believe the competitors are going about it the wrong way. They would have you adopt a complex, carrier-centric 5G solution that’s radically different from what you already know and use. Some even ignore Wi-Fi entirely. As the top enterprise networking, wireless, security, Industrial IoT, and collaboration IT vendor, we know how to build a solution that fits your enterprise needs, where Cisco Private 5G is integrated with Wi-Fi and existing IT operations environments. This makes your transformation easy, and we’re the only vendor to empower enterprise customers to extend what they already own and understand into new possibilities.


We know the many different technology choices and complexity of operating such an environment can make it difficult to start. It’s hard to commit financially to a new technology with so many uncertainties. Even the most visionary business leaders may hesitate to avoid making a wrong decision. With Cisco as your partner, you can feel confident you’ve made the right choice because our private 5G solution is ‘Simple to Start’, ‘Intuitive to Operate’, and ‘Trusted’ for enterprise digital transformation.

Simple to start

◉ The journey begins with a qualified business consultation.

◉ You don’t have to choose between 5G and Wi-Fi – you can use both, protecting your current investments and strategies.

◉ With your business goals in hand, a premium partner will perform a site survey to scope the necessary networking and radio coverage to support the intended IoT use case(s).

◉ Cisco Private 5G networks will be Cisco Validated Designs (CVD).

◉ Our “pay-as-you-use” subscription model means that you and your deployment partners will have minimal up-front infrastructure costs, so no matter how small the start or how massive the goal, costs remain in line with value. By comparison, traditional purchasing models force you to “spend a lot and wait” for productivity or profitability.

Intuitive to operate

◉ A simple management portal integrates and aligns with existing enterprise tools. We handle all the complexities of the 3GPP mobile network stack.

◉ Enterprise IT teams get a complete picture of their network and devices. You can maintain policy and identity across wired and wireless network domains for simplified operations.

◉ AI/ML-based management tools can identify unexpected behavior patterns and potential issues, making it easy to proactively take intelligent actions. Intelligent analytics increase effectiveness, minimize exposure time and reduce damage.

◉ Many problems in the network stem from outdated software, and nearly all are avoidable. As a continuously improving service, our private 5G software releases are automatically maintained from the cloud, ensuring the latest functions and security updates are in place.

Trusted

◉ As the No. 1 provider for connectivity, collaboration, industrial IoT, and IoT-connected cars, enterprises trust our technology, products, and services.

◉ Cloud-native architecture allows Cisco Private 5G to flexibly support different deployment models. Components may reside in the cloud, distributed edge, or on premises depending on needs for extra reliability or data privacy.

Source: cisco.com