
Friday, 30 November 2018

AI Ops and the Self-Optimization of Resources


AI Ops includes the ability to dynamically optimize infrastructure resources through a holistic approach. Cisco Workload Optimization Manager is an important component in our strategy of delivering enhanced customer benefits through AI Ops.

Our Strategy for Delivering the Benefits of AI Ops


Cisco is executing a strategy to consistently enhance the customer benefits we deliver through AI-driven Operations (AI Ops). This blog is the latest in a series that describes our strategy, our open architecture, and how we are implementing each of the benefits. In the first blog in this series we defined four categories of benefits from AI Ops:

1. Improved user experience
2. Proactive support and maintenance
3. Self-optimization of resources
4. Predictive operational analytics

Multi-Dimensional AI Ops Strategy


Vendors use the terms AI, machine learning, and AI Ops in a variety of ways, and their focus is primarily on hardware. Our strategy for delivering the customer benefits of AI Ops is a broader architectural vision that includes infrastructure, workloads, and enhanced customer support in on-premises and cloud environments. Cisco’s strategy incorporates an open API framework and integrations with Cisco and partner platforms.

Infrastructure management is one dimension of AI Ops, and Cisco Intersight is an integral component of Cisco’s strategy. Managing workloads is another essential dimension, so Cisco Workload Optimization Manager (CWOM) is also an important component of this strategy.

AI Ops Portfolio Working Together


In a prior blog we explained how Intersight delivers an AI-driven user experience through our open API framework. We also posted two blogs in this series explaining how Intersight delivers benefit #2, AI-driven proactive support and maintenance. Proactive support is enabled through the Intersight integration with the Cisco service desk digital intelligence platform. This AI platform (internally referred to as BORG) is used by the Cisco Technical Assistance Center and includes AI, analytics, and machine learning. In this blog, I explain how we deliver benefit #3, the self-optimization of resources, through monitoring and automation with Cisco Workload Optimization Manager.

Self-Optimization of Resources


The self-optimization of resources spans both on-premises and public cloud infrastructure. You need to monitor and automate across a variety of virtualized environments, containers, and microservices.

To ensure that your applications perform continuously and your IT resources are fully optimized, you need full visibility across compute infrastructure and applications, across networks and clouds. You also need all of this intelligence at your fingertips, so you can quickly and easily make the right decisions in real time to assure application performance, operate efficiently, and maintain compliance in your IT environment.

Cisco Workload Optimization Manager is an AI-powered platform that delivers this functionality through integrations with Cisco’s multicloud portfolio, ACI, UCS management, HyperFlex, and a broad ecosystem of partner solutions that will continue to grow over time. CWOM continuously analyzes workload consumption, costs, and compliance constraints, and automatically allocates resources in real time.

How Does AI Ops Work?


Resource allocation, workload scheduling, and load balancing have been critical to efficient IT operations for decades. Workload Optimization Manager uses AI and advanced algorithms to manage complex multicloud environments. It views on-premises resources and the cloud stack as a supply chain of buyers and sellers. CWOM evaluates the options for running workloads and manages resources as “just in time” supply to cost-effectively support workload demands, helping customers maintain a continuous state of application health.
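The buyer/seller idea can be illustrated with a small sketch. This is not CWOM's actual algorithm; the pricing function and host names below are invented for illustration. The point is only that pricing a resource by its utilization naturally steers workloads toward under-used suppliers:

```python
# Illustrative sketch only (not CWOM's algorithm): price each resource as a
# function of its utilization, so congested "sellers" become expensive and
# workload "buyers" gravitate toward under-utilized, cheaper hosts.

def price(utilization: float) -> float:
    """Price rises steeply as utilization approaches capacity (1.0)."""
    utilization = min(utilization, 0.999)  # avoid division by zero
    return 1.0 / (1.0 - utilization) ** 2

def cheapest_host(host_utilization: dict) -> str:
    """Pick the host (seller) currently offering capacity at the lowest price."""
    return min(host_utilization, key=lambda h: price(host_utilization[h]))

hosts = {"host-a": 0.85, "host-b": 0.40, "host-c": 0.60}
print(cheapest_host(hosts))  # prints "host-b"
```

Because the price grows without bound near full utilization, a placement decision made this way is also a performance decision: the "cheapest" supplier is the one with the most headroom.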

CWOM showing cost analysis of pending actions

Many AI Ops solutions are complex to deploy and require a significant amount of time to accumulate information before their analysis becomes effective. Workload Optimization Manager is easy to install, and its agentless technology instantly begins to detect all the elements in your environment, from applications to individual components. The unique decision engine curates workload demand, so it can generate fast, accurate recommendations after collecting data for only a short period of time. CWOM uses three categories of functionality to optimize the use of available resources:

Abstraction: All workloads (applications, VMs, containers) and infrastructure resources (compute, storage, network, fabric, etc.) are abstracted into a common data model, creating a “market” of buyers and sellers of resources.

Analysis: A decision engine applies the principles of supply, demand, and price to the market. There are costs associated with on-premises infrastructure resources, and cloud providers price their resources based on utilization levels. The analytics ensure the right resource decisions are made at the right time.

Automation: Workloads are precisely resourced, automatically, to optimize performance, compliance and cost in real-time. The workloads become self-managing anywhere, spanning on-premises to public cloud environments.

These combined capabilities enable IT to assure application performance, at the lowest cost, while maintaining compliance with policy – from the data center to the public cloud and edge.
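A hedged sketch of how the three phases might fit together: the common data model, thresholds, and scaling factors below are assumptions for illustration, not CWOM internals.

```python
# Hedged sketch of the abstraction/analysis/automation cycle described above.
# Entity fields, thresholds, and scaling factors are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Entity:
    """Abstraction: workloads and infrastructure share one common data model."""
    name: str
    capacity: float
    used: float

    @property
    def utilization(self) -> float:
        return self.used / self.capacity

def analyze(entities):
    """Analysis: recommend actions where utilization is out of band."""
    actions = []
    for e in entities:
        if e.utilization > 0.8:
            actions.append((e.name, "scale-up"))
        elif e.utilization < 0.2:
            actions.append((e.name, "scale-down"))
    return actions

def automate(entities, actions):
    """Automation: apply the recommended resizing actions."""
    by_name = {e.name: e for e in entities}
    for name, action in actions:
        by_name[name].capacity *= 1.5 if action == "scale-up" else 0.75

market = [Entity("vm-1", capacity=4.0, used=3.6),
          Entity("vm-2", capacity=8.0, used=1.0)]
automate(market, analyze(market))
```

In a real system the analysis step would weigh price and compliance constraints rather than a single utilization threshold, but the control loop has the same shape: abstract, analyze, act.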

Wednesday, 28 November 2018

Accelerating Enterprise AI with Network Architecture Search

AI/ML is a dominant trend in the enterprise. While AI/ML is not fundamentally new, the ubiquity of large amounts of observed data, the rise of distributed computing frameworks, and the prevalence of large hardware-accelerated computing infrastructure have led to a new wave of breakthroughs in AI over the last 5 years or so. Today enterprises are rushing to apply AI in every part of the organization, for a wide range of tasks from making better decisions to optimizing their processes.

However, to reap the benefits of AI, one needs significant investment in teams who understand the entire AI lifecycle, especially how to understand, design, and tune the mathematical models that apply to their use cases. Often these models use bespoke techniques known only to a select few who are highly trained in the field. Without this tuning, an enterprise can spend a great deal of opex simply running canonical models. How can we help the enterprise accelerate this step? One way is AutoML.

AutoML is a broad class of techniques that relieve the pain of iteratively designing and tuning models without the personnel investment. It ranges from tuning an existing model (e.g. hyperparameter search) to designing new network models automatically. For those leveraging deep learning, one approach is Neural Architecture Search (NAS), which aims to automatically find the best neural network topology for a given task.
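As a concrete example of the simplest end of that spectrum, here is a minimal random hyperparameter search. The scoring function is a hypothetical stand-in for a real training/validation run:

```python
# Minimal illustration of the "tuning an existing model" end of AutoML:
# random hyperparameter search. validation_score is an invented surrogate
# for training a model and measuring its validation accuracy.
import random

def validation_score(learning_rate: float, depth: int) -> float:
    # Invented surrogate: "accuracy" peaks at learning_rate=0.1, depth=6.
    return 1.0 - abs(learning_rate - 0.1) - 0.02 * abs(depth - 6)

def random_search(trials: int, seed: int = 0):
    """Sample random configurations and keep the best-scoring one."""
    rng = random.Random(seed)
    best_score, best_params = float("-inf"), None
    for _ in range(trials):
        params = {"learning_rate": rng.uniform(0.001, 0.5),
                  "depth": rng.randint(2, 12)}
        score = validation_score(**params)
        if score > best_score:
            best_score, best_params = score, params
    return best_score, best_params

best_score, best_params = random_search(200)
```

Real AutoML systems replace the random sampler with smarter strategies (Bayesian optimization, evolutionary algorithms, reinforcement learning), but the loop of propose, evaluate, keep-the-best is the same.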

In recent years, several automated NAS methods have been proposed, using techniques such as evolutionary algorithms and reinforcement learning. These methods have found neural network architectures that outperform bespoke, human-designed architectures on problems such as image classification and language modeling, and have improved the state of the art on accuracy. However, these methods have been largely limited by the resources needed to search for the best architecture.


We present a method for NAS called Neural Architecture Construction (NAC): an automated method to construct deep network architectures with close to state-of-the-art accuracy in less than 1 GPU-day, faster than current state-of-the-art neural architecture search methods. NAC works by pruning and expanding a small base network called an EnvelopeNet. It runs a truncated training cycle, compares the utility of different network blocks, and prunes and expands the base network based on these statistics. Most conventional neural architecture search methods iterate through a full training cycle on a number of intermediate networks, comparing their accuracy, before discovering a final network. The time needed to discover the final network is limited by the need to run a full training and evaluation cycle on each intermediate network generated, resulting in large search times. In contrast, NAC speeds up the construction process because the pruning and expansion can be done without waiting for a full training cycle to complete.
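The prune-and-expand cycle described above can be sketched in pseudocode-style Python. The block names and utility scores here are invented stand-ins; in the real method, utilities come from statistics collected during the truncated training cycle:

```python
# Pseudocode-style sketch of NAC's prune-and-expand loop. Utility scores are
# random stand-ins for the filter statistics gathered during a truncated
# training cycle on the EnvelopeNet.
import random

def truncated_training_utilities(network, rng):
    """Stand-in for a short training run that scores each block's utility."""
    return {block: rng.random() for block in network}

def nac_step(network, rng, prune_fraction=0.25):
    """Prune the lowest-utility blocks, then expand with fresh blocks."""
    utilities = truncated_training_utilities(network, rng)
    ranked = sorted(network, key=utilities.get)
    n = max(1, int(len(network) * prune_fraction))
    survivors = ranked[n:]                              # apoptosis-like pruning
    expansion = [f"block-{rng.randrange(10**6)}" for _ in range(n)]
    return survivors + expansion                        # growth

def construct(envelope_net, iterations=5, seed=0):
    """Iterate prune/expand steps starting from a small base network."""
    rng = random.Random(seed)
    network = list(envelope_net)
    for _ in range(iterations):
        network = nac_step(network, rng)
    return network

final_network = construct([f"base-{i}" for i in range(8)])
```

The key structural point is that each iteration needs only a truncated training cycle to rank blocks, never a full train-to-convergence run per candidate network.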


Figure 1: Results comparing our NAC with other state-of-the-art work. Note the search time for both datasets. The NAC numbers for ImageNet are preliminary.

Interestingly, our NAC algorithm mirrors theories on the ontogenesis of neurons in the brain. Brain development is believed to consist of neurogenesis, where the neural structure initially develops, gradually followed by apoptosis, where neural cells are eliminated, hippocampal neurogenesis, where more neurons are introduced, and synaptic pruning, where synapses are eliminated. Our NAC algorithm consists of analogous steps run in iterations: model initialization with a prior (neurogenesis), a truncated training cycle, pruning filters (apoptosis), adding new cells (hippocampal neurogenesis), and pruning of skip connections (synaptic pruning). Artificial neurogenesis has previously been studied, among other things, as a method for continuous learning in neural networks.


We also open-sourced a tool called AMLA, an Automated Machine Learning frAmework for implementing and deploying neural architecture search algorithms. AMLA is designed to deploy these algorithms at scale and to allow comparison of the performance of the networks generated by different AutoML algorithms. Its key architectural features are the decoupling of network generation from network evaluation, support for network instrumentation, an open model specification, and a microservices-based architecture for deployment at scale. In AMLA, AutoML algorithms and training/evaluation code are written as containerized microservices that can be deployed at scale on a public or private infrastructure. The microservices communicate via well-defined interfaces, and models are persisted using standard model definition formats, allowing plug-and-play of AutoML algorithms as well as AI/ML libraries. This makes it easy to prototype, compare, benchmark, and deploy different AutoML algorithms in production.
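The decoupling principle can be illustrated in miniature: a generator and an evaluator that share only a model-specification format. The spec fields and functions below are assumptions for illustration, not AMLA's actual interfaces:

```python
# Miniature illustration of decoupled generation and evaluation communicating
# only through a shared model specification. The spec schema and the surrogate
# score are invented for illustration, not AMLA's actual API.
import json

def generate(step: int) -> str:
    """'Generator' side: emit a candidate network as a JSON model spec."""
    spec = {"layers": [{"type": "conv", "filters": 16 * (step + 1)}]}
    return json.dumps(spec)

def evaluate(spec_json: str) -> float:
    """'Evaluator' side: train and score the network described by the spec.
    Here, an invented surrogate score based on total filter count."""
    spec = json.loads(spec_json)
    return sum(layer["filters"] for layer in spec["layers"]) / 100.0

# Either side can be swapped out independently, since they share only the spec.
scores = [evaluate(generate(step)) for step in range(4)]
```

Because the only contract is the serialized spec, a different search algorithm, or a different ML library on the evaluation side, can be plugged in without touching the other service.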

To help users incorporate NAS into their regular AI/ML workflows, we are working on integrating our NAS efforts into Kubeflow, an open-source platform that simplifies the management of AI/ML lifecycles on Kubernetes-based infrastructure. Once integrated, these NAS tools will help users optimize network architectures in addition to hyperparameters (e.g. via the Katib tool within Kubeflow).

We believe that this is just the tip of the iceberg for AutoML, and NAS in particular. These early results have given us confidence that we can design better mechanisms for AutoML that require fewer resources to operate, as a step toward accelerating the adoption of AI in the enterprise.

Wednesday, 27 December 2017

Thank You Cisco ISR G2 2900 and 3900 Series Routers

Over the past 9 years, the Integrated Services Router (ISR) G2 router portfolio has helped tens of thousands of Enterprise and Service Provider customers build, secure, grow, and transform their businesses. It has been the most successful router product line in the history of branch networking.

Friday, 7 July 2017

Cisco Introduces a New Era in Networking, Powered by Software Innovation and Subscription Buying

The introduction of Cisco DNA Advantage and Essentials subscription-based network software will transform how infrastructure software is bought and deployed in the new era of networking. These latest innovations reinforce Cisco’s commitment to transitioning its business to a software-centric, subscription-based model, while helping accelerate our customers’ digital transformation.