Friday 12 October 2018

Meraki Wireless Health APIs Make Network Assurance Easier

As Meraki continues to drive cloud managed networking into new markets, we continue to evolve our offerings to help customers and partners along on that journey. With large enterprises, campuses, and service providers all rapidly growing their Meraki wireless deployments, Meraki is innovating quickly to serve these markets. As part of our strategy to make our rich data sources available to customers, we introduced Meraki Wireless Health and a brand new product line, Meraki Insight, at Cisco Live Barcelona 2018.

In addition to the rapid adoption of Meraki Wireless Health, Meraki’s API platform has been experiencing rapid uptake by our customer and partner community. Since the introduction of APIs less than two years ago, the platform has grown to hundreds of unique APIs serving over 20 million API calls a day.

Access all Meraki Wireless Health features via an API interface


Wireless Health has been an instant hit, with hundreds of thousands of customers now actively using it, and we have maintained our focus on wireless health to build additional value for our customers. Building on these successes, Meraki is proud to announce that we are launching full Wireless Health API endpoints. This is one of our largest API launches to date, and all of our customers now have access. These new API endpoints will make it easy for both Meraki’s Partner Solutions team and Cisco’s DevNet team to drive simple open source and partner solutions that simplify the management of wireless deployments of all types and across all verticals.

With this launch we are creating three key types of API endpoints:


1. Connection Health – Summarizes the connection health of a network, AP, or client.
2. Connection Failures – Returns a full list of association, authentication, DHCP, and DNS issues.
3. Network Latency – Summarizes the latency of a network, AP, or client.

The great thing about these new API endpoints is that we designed them to be flexible. You can filter all of them by a specific VLAN or SSID, and you can scope the summary statistics by supplying start and end times.
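As a sketch of what calling one of these endpoints might look like, the Python below assembles a connection-health request. The path (`/networks/{id}/connectionStats`) and the filter parameter names (`t0`, `t1`, `ssid`, `vlan`) are illustrative assumptions here, so check the Meraki Dashboard API documentation for the exact contract.

```python
BASE_URL = "https://api.meraki.com/api/v0"

def connection_stats_request(network_id, t0=None, t1=None, ssid=None, vlan=None):
    """Assemble the URL and query filters for a Wireless Health
    connection-stats call; path and parameter names are illustrative."""
    url = "%s/networks/%s/connectionStats" % (BASE_URL, network_id)
    # Include only the optional filters the caller actually set.
    params = {key: value
              for key, value in
              {"t0": t0, "t1": t1, "ssid": ssid, "vlan": vlan}.items()
              if value is not None}
    return url, params

# Network-wide connection health for SSID 0 over a one-day window:
url, params = connection_stats_request(
    "N_1234", t0="2018-10-01T00:00:00Z", t1="2018-10-02T00:00:00Z", ssid=0)
# The actual call would then be:
#   requests.get(url, params=params,
#                headers={"X-Cisco-Meraki-API-Key": API_KEY})
```

The same pattern applies to the failure-list and latency endpoints, with only the path changing.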


Leverage our DevNet community to build your own customized analysis or visualization

Using the new APIs to create working solutions


During the development of the Wireless Health API endpoints, Meraki worked with DevNet to validate the format of the endpoints and make them as robust as possible. Our teams also worked together to create a real working solution using the new endpoints; with the experienced DevNet team working alongside the Meraki team, we put together a full working demonstration within one week. Now that Meraki APIs are part of Cisco DevNet, over 500,000 DevNet developers can create services and solutions based on them.

Wireless infrastructure supports mobile POS devices


Together we created a solution that allows one of our retail customers to correlate how network health, customer foot traffic, and point-of-sale statistics all interact – and do it in a single dashboard for over 50 separate locations! With mobile point-of-sale (POS) devices becoming a predominant method of in-store payment, the wireless infrastructure has become more important than ever. So we wanted to create a dashboard where our customers could quickly check the overall current health of a retail store, but also (thanks to some open source engineering) see its historical health. We also worked with the customer to create an overall health score for their retail stores, rolling a disparate and complex data set up into a single ranking. This unleashes an incredibly powerful data point: the retailer can quickly identify poorly performing retail locations and, within seconds, dive in and investigate the root cause.
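A roll-up like that can be as simple as a weighted combination of normalized per-store metrics. The metric names and weights in this sketch are hypothetical, just to show the shape of the idea:

```python
def store_health_score(metrics, weights=None):
    """Combine per-store metrics (each already normalized to 0-100)
    into a single 0-100 health score via a weighted average.
    Metric names and weights are illustrative only."""
    weights = weights or {"connection_success": 0.5,
                          "latency_score": 0.3,
                          "pos_uptime": 0.2}
    score = sum(metrics[name] * w for name, w in weights.items())
    return round(score, 1)

stores = {"Store 12": {"connection_success": 98, "latency_score": 90, "pos_uptime": 99},
          "Store 40": {"connection_success": 60, "latency_score": 70, "pos_uptime": 95}}
# Rank stores worst-first, ready for root-cause investigation.
ranked = sorted(stores, key=lambda s: store_health_score(stores[s]))
```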


Complete visibility into the retail locations data using Kibana

Unleash the power of your data in the Meraki platform


This is just the beginning of Meraki unleashing the power of the data in our platform. In the months ahead, we will continue to release more network health metrics that will further streamline the running of enterprise networks. We are looking forward to all the great things our customers and partners are going to create with these newly introduced API endpoints, and our team will continue to leverage our agile development environment to drive more innovations in the networking space.

Since 2017 the Cisco Meraki team has been working with the Cisco DevNet team. As a result, over the last year a large number of partners have leveraged our API infrastructure to create solutions specific to unique use cases. We are constantly adding new solutions and partners to our ecosystem, and they are easily available for anyone to view.

Wednesday 10 October 2018

Challenge Your Inner Hybrid Creativity with Cisco and Google Cloud

In recent years, Kubernetes has risen up in popularity, especially with the developer community. And why do developers love Kubernetes? Because it offers incredible potential for speed, consistency, and flexibility for managing containers. But containers are not all sunshine and roses for enterprises – with big benefits come some big challenges. Nobody loves deploying, monitoring, and managing container lifecycles, especially across multiple public and private clouds. On top of that, there are many choices when it comes to environments, which can also create a lot of complexity – there are simply too many tools and too little standardization.

Production grade container environments powered by Kubernetes


That’s why earlier this year Cisco launched the Cisco Container Platform, a turnkey solution for production-grade container environments powered by Kubernetes. The Cisco Container Platform automates the repetitive functions and simplifies the complex ones so everyone can go back to enjoying the magic of containers. It is a key element of Cisco’s overall container strategy and another way Cisco provides our customers with choice across various public clouds.


Figure 1: Cisco Hybrid Cloud for Google Cloud

Hybrid cloud applications are the next big thing for developers


At the beginning of the year Cisco joined forces with Google Cloud on a hybrid cloud offering that, among other things, allows enterprises to deploy Kubernetes-based containers on-premises and securely connect with Google Cloud Platform.

In July at Google Cloud Next ’18, we kicked off the Cisco & Google Cloud Challenge. (You still have until November 1, 2018 to enter the challenge and win prizes.) The idea behind it is to give developers a window into the possibilities of building hybrid cloud applications. Hybrid cloud applications are the next frontier for developers, and hybrid cloud infrastructure opens up so many possibilities for innovation. That’s why we even named it “Two Clouds, infinite possibilities.”


Figure 2: Timeline for the Cisco & Google Cloud Challenge

An IoT edge use case for inspiration


Consider the following use case: assume we have a factory that generates a huge amount of data from sensors deployed across the physical building. We would like to analyze that data on-premises, but also take advantage of cloud services in Google Cloud Platform for further analysis. This could include running predictive analysis with machine learning (ML) on that data (e.g., predicting which machine part is going to break next). “Edge” here represents a generic class of use cases with these characteristics:

◈ Limited Network Bandwidth – Many manufacturing environments are remote, with limited bandwidth. Collecting data from hundreds of thousands of devices requires processing, buffering, and storage at the edge when bandwidth is limited. For instance, an offshore oil rig collects more than 50,000 data points per second, but less than 1% of this can be used in business decision making due to bandwidth constraints. Instead, analytics and logic can be applied at the edge, and summary decisions rolled up to the cloud.

◈ Data Separation & Partitioning – Often data from a single source needs to go to different and/or multiple locations or cloud services for analytics processing. Parsing the data at the edge to identify its final destination based on the desired analytics outcome allows you to route data more effectively, lower cloud costs and management overhead, and route data according to compliance or data sovereignty needs – for example, sending PCI-, PII-, or GDPR-classified data to one cloud or service while device or telemetry data routes to others. Additionally, pre-processing can occur at the edge to transform data such as time-series streams into aggregates, reducing complexity in the cloud.

◈ Data Filtering – Most data just isn’t interesting. But you don’t know that until you’ve received it at a cloud service and decided to drop it on the floor. For example, fire alarms send the most boring data 99.999% of the time – until they send data that is incredibly important! There is often no need to store or forward this data until it is relevant to your business. Additionally, many data scientists now want to run individually trained models at the edge and, when data no longer fits a model or is an exception, send the entire data set to the cloud for re-training. Complex models at the edge also enable intelligent filtering that supports edge decision making.

◈ Edge Decision Making & Model Training – Training and storing ML models directly at the edge allows storing ephemeral models that may otherwise not be possible due to compliance or data sovereignty requirements. These models can act on ephemeral data that is not stored or forwarded, but still garner information and outcomes that can then be sent to centralized locations. Alternatively, models can be trained centrally in the cloud and pushed to the edge to perform any of the other listed edge functions. And when data no longer fits that model (such as collecting long tail time-series data) the entire data set can be aggregated to the cloud for retraining, and the model re-deployed to the edge endpoints.
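The filtering and edge-decision patterns above can be sketched in a few lines of Python. The window size, threshold, and summary format below are hypothetical; the point is that only exceptional readings cross the wire, while a compact summary rolls up to the cloud:

```python
from statistics import mean

def edge_filter(readings, window=5, threshold=3.0):
    """Forward only 'interesting' sensor readings to the cloud:
    a reading is forwarded when it deviates from the rolling mean
    of the last `window` readings by more than `threshold`.
    Everything else is summarized locally at the edge."""
    forwarded, history = [], []
    for value in readings:
        if len(history) >= window and abs(value - mean(history[-window:])) > threshold:
            forwarded.append(value)   # exception: send the raw reading upstream
        history.append(value)         # keep everything for local analytics
    summary = {"count": len(history), "mean": round(mean(history), 2)}
    return forwarded, summary

# A quiet sensor with one spike: only the spike leaves the edge.
forwarded, summary = edge_filter([20.0, 20.1, 19.9, 20.0, 20.2, 35.0, 20.1])
```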


Figure 3: Hybrid Cloud, Edge Compute Use-case

As a real-life example, here in Cisco DevNet we developed a use case for object recognition using video streams from IP cameras. The video gateway at the edge analyzed the video streams in real time, performed object detection, and passed the detected objects to the Cisco Container Platform, which performed object recognition. The recognized objects, and all the associated metadata, were stored at this layer. An application in the public cloud was written to query this data and track the path of an object.

Give the Cisco & Google Cloud Challenge a try


There’s no doubt about the popularity of Kubernetes in the developer community. Cisco Hybrid Cloud Platform for Google Cloud takes away the complexity of managing private clusters and lets developers concentrate on the things they want to innovate on. Start with our DevNet Sandbox for CCP, reserve your instance and test-drive it for yourself.

The Cisco & Google Cloud Challenge is an awesome way to brainstorm and solve some real customer problems and even win some prizes while you are at it. So, consider this blog as me inviting everyone to give the Challenge a try, and wishing you the very best! You have until Nov 1, 2018 to enter the challenge and win prizes.

Saturday 6 October 2018

Enabling Enterprise-Grade Hybrid Cloud Data Processing with SAP and Cisco – Part 2

In part 1 of this blog series I talked about how data processing landscapes are getting more complex and heterogeneous, creating roadblocks for customers who want to adopt truly hybrid cloud data applications. At the beginning of this year, Cisco and SAP decided to join forces to bring the SAP Data Hub to the Cisco Container Platform. The goal is to provide a real end-to-end solution that helps customers tackle the challenges described in part 1 and enables them to become successful intelligent enterprises. We are focusing on providing a turnkey, enterprise-scale solution that fosters a seamless interplay of powerful hardware and sophisticated software.


Figure 1 Unified data integration and orchestration for enterprise data landscapes.

SAP brings to the table its novel data orchestration and refinement solution, SAP Data Hub. The solution offers a number of features that allow customers to manage and process data in complex landscapes spanning on-premise systems and multiple clouds. SAP Data Hub supports connecting the different systems in a landscape to a central hub, providing a first overview of all systems involved in data processing within a company. Beyond that, the Data Hub can scan, profile, and crawl those sources to retrieve the metadata and characteristics of the data they store. With this, SAP Data Hub provides a holistic data landscape overview in a central catalog and allows companies to answer the central questions about data positioning and governance.

Furthermore, SAP Data Hub allows the definition of data pipelines that enable data processing and landscape orchestration across all connected systems. Data pipelines consist of operators – small independent computation units – that together form a computation graph. The functionality an operator provides ranges from very simple read operations and transformations (e.g., changing the date format from US to EU), to interacting with a connected system, to invoking a complex machine learning model. The operators apply their transformations as data flows through the defined pipeline. This kind of data processing changes the paradigm from static, transactional ETL processes to a more dynamic, flow-based data processing model.
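To make the operator-graph idea concrete, here is a minimal flow-based pipeline sketch in Python. This is not the SAP Data Hub API – operator names and record shapes are invented – it simply illustrates operators (a filter and a date-format transformation) applied as records flow through:

```python
def date_us_to_eu(record):
    """Transformation operator: MM/DD/YYYY -> DD.MM.YYYY."""
    m, d, y = record["date"].split("/")
    return {**record, "date": "%s.%s.%s" % (d, m, y)}

def only_paid(record):
    """Filter operator: drop unpaid orders from the flow."""
    return record if record["status"] == "paid" else None

def run_pipeline(records, operators):
    """Push each record through the chain of operators in order;
    an operator returning None drops the record from the flow."""
    out = []
    for rec in records:
        for op in operators:
            rec = op(rec)
            if rec is None:
                break
        if rec is not None:
            out.append(rec)
    return out

orders = [{"date": "10/05/2018", "status": "paid"},
          {"date": "10/06/2018", "status": "open"}]
result = run_pipeline(orders, [only_paid, date_us_to_eu])
# One record survives, with its date rewritten to the EU format.
```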

With all of this functionality, we kept in mind that to succeed in bridging enterprise data and big data, we need to be open to connecting not only SAP enterprise systems but also the common systems used within the big data space (see Figure 2). For this purpose, SAP Data Hub follows an open connectivity paradigm, providing a large number of connectors to different kinds of cloud and on-premise data management systems and fostering the integration of enterprise data and big data.

All of that makes SAP Data Hub a powerful enterprise application that allows customers to orchestrate and manage their complex system landscapes. However, a solution like the Data Hub would be nothing without a powerful and flexible platform. Customers are increasingly turning towards containerized applications, with Kubernetes as the orchestrator of choice, to handle the requirements of efficiently processing large volumes of data. For this reason, it was a clear decision to move SAP Data Hub in this direction as well: it is completely containerized and uses Kubernetes as its platform and foundation.


Figure 2 SAP & Cisco delivering turn-key solutions for complex enterprise data landscapes.

This is where Cisco, with its advanced Cisco Container Platform (CCP) on the hyperconverged Cisco HyperFlex hardware, comes into the game. Providing elastically scalable container clusters as a single turnkey solution, covering on-premise and cloud environments with a single infrastructure stack, is key for enterprise customers involved in big data analytics. With the Cisco Container Platform on HyperFlex 3.0, Cisco offers a fully integrated and flexible ‘container as a service’ offering with lifecycle management for hardware and software components. It provides 100% upstream Kubernetes with integrated networking, system management, and security. In addition, it utilizes modern technologies such as Istio and Cloud Connect VPN to efficiently bridge on-premise and cloud services from different cloud providers. Accordingly, it accelerates cloud-native transformation and application delivery in hybrid cloud enterprise environments, clearly embracing the multi-cloud world and helping to solve multi-cloud challenges. Furthermore, the CCP monitors the entire hardware and Kubernetes platform, allowing customers to proactively identify issues and non-beneficial usage patterns and to troubleshoot container clusters quickly.

Accordingly, the CCP is the perfect foundation for deploying SAP Data Hub in complex, multi-cloud and hybrid cloud customer landscapes. We complemented the solution with Scality Ring, an enterprise-ready scale-out file and object storage system that delivers the characteristics required for production use: guaranteed reliability, availability, and durability. This adds a data lake to the on-premise solution, allowing price-efficient storage of mass data. In addition, we added network traffic load balancing with the advanced AVI Networks load balancers, which provide intelligent automation and monitoring for improved routing decisions. Both additions greatly benefit the CCP and complete it as a full big data management and processing foundation.

With the release of SAP Data Hub on the Cisco Container Platform, running on HyperFlex 3.0 and complemented with Scality Ring and AVI Networks load balancers, announced during SAP TechEd Las Vegas, customers have the option of a turnkey, full-stack solution to tackle the challenges of modern enterprise data landscapes. They can start fast, remain flexible, and receive full-stack support from Cisco’s world-class engineering support and SAP’s advanced support services. Together, SAP and Cisco enable customers to win the race for the best data processing in the digital economy.

Friday 5 October 2018

Enabling Enterprise-Grade Hybrid Cloud Data Processing with SAP and Cisco – Part 1

The journey towards the intelligent enterprise


When talking about modern data processing in the digital economy, data is often regarded as the new oil. Enterprise companies are already competing in a race for the best mining, extraction, and processing technologies to gain better insights into their companies, deals, and processes. Winning this race will ultimately lead to a competitive advantage in the market, since companies with a deep understanding of their businesses will be able to make the most profitable decisions and establish the most beneficial optimizations. For this reason, the way companies handle data and analytics is changing from pure transactional, ETL-like processing towards adopting modern technologies such as machine learning, intelligent analytics, and stream processing, on-premise and in the cloud. We refer to this transition as the move towards the ‘intelligent enterprise’.

However, this transition also means that data processing landscapes are getting more complex and more heterogeneous. Data is processed in a growing collection of different systems and is distributed over different places; its volume is growing by the day, and customers need to orchestrate and integrate cloud technologies with classical on-premise systems. Among all the challenges that come with the journey towards digitalization and the ‘intelligent enterprise’, the following three emerge as the most pressing in our customer base:


Figure 1 Enterprise data landscapes are growing increasingly complex.

1. The tale about data governance and the lack of data knowledge, security and visibility 


One of the biggest challenges in complex modern enterprise data landscapes is the distribution of data over a growing number of stores and processing systems. This leads to a lack of knowledge about data positioning, data characteristics, and governance. “What data is available in which store?”, “What are the major characteristics of my data sets?”, and “Who changed the data, in what way, and who has permission to access it?” are typical questions that are hard to answer even within a single company. It is therefore key to find a strategy that allows holistic data governance and data management across the entire company.

2. The legend about enterprise readiness of big data technologies 


In the world of modern data processing technologies and big data management, we observe incredible growth in the tools and technologies a customer can choose from. While at first this choice seems to be an advantage, one quickly recognizes that it leads to a zoo of non-integrated systems, each exhibiting different characteristics, life cycles, and environments. It is left to the customer to manage, organize, and orchestrate those systems, resulting in very high effort to arrive at an enterprise-ready data landscape with a well-organized interplay of all components.

3. The story about easily processing enterprise data and big data together 


The adoption of modern big data technologies is driven mainly by the fact that augmenting classical enterprise data, such as sales figures and revenue data, with big data, such as sensor streams, social media collections, or mobile device data, allows deeper and more advanced analyses. However, in most cases enterprise data and big data are kept in different silos and exhibit totally different characteristics. Enterprise data typically comes from classical transactional systems such as ERP systems or transactional databases; it is well structured and adheres to a standardized schema. Big data, on the other hand, often arrives in its raw form as data streams or collections stored in data lakes (e.g., Hadoop, S3, GCS). It is often unstructured, lacks clear data types, and might not adhere to a clear schema. Accordingly, creating an end-to-end data pipeline across the enterprise that combines business data with big data takes considerable effort.

SAP and Cisco jointly recognize that our mutual customers need innovative new solutions to help them overcome these hurdles, fully leverage the value of their distributed data, and turn it into actionable insights.

Sunday 30 September 2018

Curated Code Repos Get Your Integration Project Done Faster, Better

Having a hard time getting started on your next big integration with Cisco products? Found the platform and API docs on DevNet but need help turning this into running code? Check out Code Exchange, one more way DevNet makes it easy for developers to be successful with Cisco products and platforms.

Code Exchange is an online, curated set of code repositories that help you develop applications with/on Cisco platforms and APIs. Inside Code Exchange, you will find hundreds of code repositories – code created and maintained by Cisco engineering teams, ecosystem partners, technology and open source communities, and individual developers. Anyone can use this code to jumpstart their app development with Cisco platforms, products, application programming interfaces (APIs), and software development kits (SDKs).


Curated for quality


There is a large and growing amount of sample code and applications, helpful tools and libraries, and open source projects related to Cisco technologies on GitHub. However, finding up-to-date content best suited for your immediate needs can be difficult. Code Exchange helps you tackle this challenge.

To get things started, our team of DevNet Developer Advocates identified candidate repositories using GitHub crawlers and an algorithm that scores repositories based on a number of criteria. We then reviewed top repos to make sure they are in good shape and of general interest to the DevNet community. While we do not actively maintain all of the code, we provide confirmation that the code is a worthwhile investment of your time before accepting it into Code Exchange.

Simple filters for technology space and programming language may be used independently or in combination with keywords you provide to zero in on the set of repos most relevant to your immediate needs. Want more guidance? Sorting by those most recommended by DevNet Developer Advocates, or by the date the repo was last updated, presents you with the best and brightest projects.

Key Features:

1. Curated view of code repositories related to all Cisco platforms
2. Easy discoverability using filters and search features
3. Link to repository on GitHub for direct access to code and contributors

For example, let’s say you are looking for some sample code written in Python for automation of Cisco IOS XR platforms using APIs defined by native and standard YANG models. Simply enter that in the Code Exchange search field and filters and back comes a set of highly relevant resources.
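To give a flavor of what such sample code tends to contain, the sketch below builds a NETCONF subtree filter against the OpenConfig interfaces YANG model using only the standard library. The namespace and the commented ncclient call are typical of DevNet IOS XR samples, but treat the details as illustrative rather than authoritative:

```python
import xml.etree.ElementTree as ET

# Namespace of the OpenConfig interfaces model (per the published YANG).
NS = "http://openconfig.net/yang/interfaces"

def interface_filter(name):
    """Build a NETCONF subtree filter selecting a single interface."""
    root = ET.Element("interfaces", xmlns=NS)
    intf = ET.SubElement(root, "interface")
    ET.SubElement(intf, "name").text = name
    return ET.tostring(root, encoding="unicode")

flt = interface_filter("GigabitEthernet0/0/0/0")
# With ncclient (a common choice in DevNet IOS XR samples), the filter
# would be sent roughly like this:
#   with manager.connect(host=HOST, port=830, username=USER,
#                        password=PASS, hostkey_verify=False) as m:
#       reply = m.get_config(source="running", filter=("subtree", flt))
```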


Or perhaps you’re looking for JavaScript for an integration with Cisco’s collaboration platforms? Let Code Exchange do the heavy lifting.


Community contributions provide even more options


At present, the majority of the code in Code Exchange comes from GitHub organizations managed by employees at Cisco. These include some obvious ones, such as Cisco DevNet, Cisco, Cisco Systems, and Cisco SE, as well as others that are less obvious at first glance, such as Talos, IOS-XR, and Meraki.

That said, we realize, and very much appreciate, that a huge amount of very useful code for working with Cisco technologies exists throughout the community at large. We encourage and welcome contributions to Code Exchange from the entire DevNet community, including code in your personal GitHub account.


Follow these base requirements to prepare your GitHub repositories related to Cisco technologies:

1. Include a LICENSE in the repository
2. Add a clear README
3. Ensure repository is publicly available
4. Show evidence of repository being maintained

Then fill out the form and DevNet Developer Advocates will take a look!
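The first two requirements can even be sanity-checked locally before you submit. This small Python sketch (a hypothetical helper, not part of the submission form) inspects a clone's working tree; public visibility and maintenance activity still have to be verified on GitHub itself:

```python
import os
import tempfile

def check_repo(path):
    """Check a local clone for the two Code Exchange base requirements
    visible in the working tree: a LICENSE and a README file."""
    names = [f.upper() for f in os.listdir(path)]
    return {
        "LICENSE": any(n.startswith("LICENSE") for n in names),
        "README": any(n.startswith("README") for n in names),
    }

# Demo on a throwaway directory containing README.md and LICENSE:
demo = tempfile.mkdtemp()
for fname in ("README.md", "LICENSE"):
    open(os.path.join(demo, fname), "w").close()
result = check_repo(demo)  # both checks pass
```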


How to make sure your repo is accepted


What better way to get your application, your company, your name out in the community working with Cisco products and APIs than to have your repo featured in Code Exchange? In addition to meeting the requirements enforced by the submission form, there are several things you can do to help us realize how great your code is and gladly accept it into Code Exchange.

Your README should provide new users with all the information they need to understand what your repo contains, including a getting-started section with step-by-step instructions for how to install, run, and/or use it, and where to turn to get answers or provide feedback. Your README will display best in Code Exchange if written in Markdown (i.e., README.md). We are in the process of adding support for reStructuredText (i.e., README.rst).

It is also highly recommended to include a CONTRIBUTING.md file that outlines how best to contribute back to the project by reporting issues, fixing bugs, adding new functionality, etc. Is it best to fork the project and send a pull request? Should an issue be opened first? What if I simply want to ask a question? Make it clear and easy for others to not only use your code but also help make it better.

Tips for enhancing the discoverability of your repos


At the time of project submission, you can identify the set of technologies to which your code is related. Identifying all and only those that truly apply is very helpful. Equally important is to add a meaningful description and GitHub topics. The search functionality of Code Exchange relies on these as well as first and second level headings in your README.

Friday 28 September 2018

Cisco Intersight: AI-Driven IT Operations Strategy

Cisco launched our cloud-based platform for AI-driven IT operations (a.k.a. AI Ops), Cisco Intersight, last September. It already delivers significant benefits, and we intend to take it to the next level with artificial intelligence and machine learning.

A New Era in Operations Management


“Work smarter, not harder” is critical to improving IT efficiency. Organizations are adopting a multicloud strategy, so you need scalable and consistent management across data centers, private clouds, edge, and branch environments. Cisco Intersight delivers this consistent management, automation and policy enforcement across a variety of servers and hyperconverged infrastructure (HCI). It helps you work smarter by delivering proactive support and actionable intelligence through artificial intelligence (AI) and machine learning (ML), so that you can proactively manage complex environments and reduce risk.

Traditional IT operations management tools are deployed on-premise; they are vendor and device focused, difficult to maintain, and limited in their ability to scale. We introduced a new era of systems management with Cisco Intersight, which provides the simplicity of software as a service (SaaS) with unlimited scalability. Intersight is enhanced by AI and ML to provide users with actionable intelligence.


Requirements for AI Operations


Gartner introduced the concept of AI Operations (AI Ops) a few years ago. AI Ops should not be confused with AI systems, like the Cisco UCS C480 ML M5 we announced earlier this month. Gartner defines AI Ops platforms as follows:

AIOps platforms utilize big data, modern machine learning and other advanced analytics technologies to directly and indirectly enhance IT operations (monitoring, automation and service desk) functions with proactive, personal and dynamic insight. 

A recent IDC report also provides more information regarding AI-driven systems management.

Our product management team looks at the requirements for AI Ops from a practical perspective. We try to offer functionality that provides actionable insights and automation to simplify and enhance day-to-day operations. A question we ask ourselves every day is: how can Intersight make daily routines easier and faster across operations groups and teams? To achieve these objectives, we believe AI Ops needs to provide the following:

◈ Open automation to streamline routine processes
◈ Consistent management services across a wide variety of infrastructure
◈ Ability to proactively recognize problems and assist users to respond quickly

Open and Unified AI Ops


If you’re going to make it easier to work across teams, you have to provide an open framework for AI Ops. One of our competitors claims to deliver “AI for the data center.” However, that vendor’s platform currently supports only their storage products; their strategy is vendor focused. When you read about their roadmap, they plan to support only their own systems. Customers use systems from a variety of vendors, so their marketing does not align with reality.

An open and unified framework is foundational. That’s why we designed Cisco Intersight to provide an open framework for AI Ops and infrastructure as a service. Intersight supports Cisco UCS and HyperFlex systems today, and we are working with partners to provide support for third party systems.

One of the reasons many IT processes are manual is that tools used by different teams have separate data stores and user interfaces. When you can aggregate data from a wide range of servers, fabric, storage and hyperconverged infrastructure, you have the common repository that is necessary for effective analysis. There are currently hundreds of thousands of devices connected to Intersight. The vast amount of data we are collecting is used with AI and ML algorithms to identify potential problems and provide users with actionable intelligence. We have also integrated Intersight with Cisco Technical Assistance Center (TAC) cognitive support, so we can leverage their AI and ML capabilities as well as best practices.
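To illustrate why a common data repository matters, here is a minimal sketch of cross-device analysis: once metrics from heterogeneous devices land in one place, even a simple statistical check can surface a device that deviates from the rest of the fleet. The device names and the z-score approach are illustrative assumptions, not Intersight's actual algorithm.

```python
from statistics import mean, stdev

def flag_outliers(samples, threshold=2.0):
    """Return device names whose metric deviates more than `threshold`
    standard deviations from the fleet mean.

    samples: list of (device_name, metric_value) pairs pooled from
    many device types -- only possible with a shared data store.
    """
    values = [v for _, v in samples]
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # all devices identical; nothing to flag
    return [name for name, v in samples if abs(v - mu) / sigma > threshold]

# Hypothetical example: CPU temperature (deg C) reported by UCS servers
# and HyperFlex nodes into one repository.
fleet = [
    ("ucs-blade-01", 58), ("ucs-blade-02", 61), ("hx-node-01", 59),
    ("hx-node-02", 60), ("ucs-rack-05", 62), ("ucs-rack-06", 57),
    ("ucs-rack-07", 88),  # runs hot -- a potential problem
]
print(flag_outliers(fleet))
```

The point of the sketch is the pooling step: with per-team silos, no single tool ever sees enough of the fleet to compute a meaningful baseline.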

The Four Benefits of AI Operations


As we continue to enhance Intersight and the Cisco management portfolio, our primary goal is to increase the customer benefits we can deliver through AI Ops. We have defined four categories of benefits:

1. Improved user experience
2. Proactive support and maintenance
3. Predictive operational analytics
4. Self-optimization of resources

Cisco is executing on our strategy to consistently enhance the customer benefits we deliver through AI Ops. We will be posting a series of blogs to explain how we are currently delivering the benefits in each of the four categories and our plans for the future.

Sunday 23 September 2018

Improve Office 365 Connectivity with Cisco SD-WAN

As more applications move to the cloud, the traditional approach of backhauling traffic over expensive WAN circuits to the data center or a centralized Internet gateway via a hub-and-spoke architecture is no longer relevant. Traditional WAN infrastructure was not designed for accessing applications in the cloud. It is expensive and introduces unnecessary latency that degrades the user experience. The scale-up effect of the centralized network egress model, coupled with perimeter stacks optimized for conventional Internet browsing, often creates bottlenecks and capacity ceilings that can slow or stall a customer's transition to the SaaS cloud.

As enterprises aggressively adopt SaaS applications such as Office 365, the legacy network architecture poses major problems related to complexity and user experience. In many cases, network administrators have minimal visibility into the network performance characteristics between the end user and software-as-a-service (SaaS) applications. The "one size fits all" approach of legacy network architectures, which focuses on perimeter security without application awareness, does not allow enterprises to differentiate sanctioned, trusted cloud business applications from recreational Internet use. As a result, trusted applications are subject to the same expensive and intrusive security scanning, further degrading the user experience.

Massive transformations are occurring in enterprise networking as network architects reevaluate their WAN designs to support the transition to cloud, reduce network costs, and increase the visibility and manageability of cloud traffic, all while ensuring an excellent user experience. These architects are turning to software-defined WAN (SD-WAN) to take advantage of inexpensive broadband Internet services and to intelligently route trusted SaaS-bound traffic directly from remote branches. The Cisco SD-WAN fabric is an industry-leading platform that delivers a simplified, secure, end-to-end hybrid WAN solution with policy-based, local, and direct connectivity from users to trusted, mission-critical SaaS applications such as Office 365, straight from the branch office. Enterprises can use this fabric to build large-scale SD-WAN networks with advanced routing, segmentation, and security capabilities, along with zero-touch bring-up, centralized orchestration, visibility, and policy control. The result is a SaaS cloud-ready network that is easy to manage, more cost-efficient to operate, and that empowers enterprises to deliver on their business objectives.

A fundamental tenet of the Cisco SD-WAN fabric is connecting users at the branch to applications in the cloud in a seamless, secure, and reliable fashion. Cisco delivers this comprehensive capability for SaaS applications with the Cloud onRamp for SaaS solution in alignment with Microsoft’s connectivity principles for Office 365.

With Cloud onRamp for SaaS, the SD-WAN fabric continuously measures the performance of a designated SaaS application through all permissible paths from a branch and assigns a score. This score gives network administrators visibility into application performance that has never before been available. Most importantly, the fabric automatically makes real-time decisions to choose the best-performing path between the end users at a remote branch and the cloud SaaS application. Enterprises have the flexibility to deploy this capability in multiple ways, according to their business needs and security requirements.
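Cisco does not publish the exact formula behind the Cloud onRamp for SaaS score, but the general idea can be sketched: probe each candidate path, then fold measurements such as packet loss and latency into a single comparable number. The weights and thresholds below are arbitrary assumptions for illustration only.

```python
def path_score(loss_pct, latency_ms, max_latency_ms=500.0):
    """Map packet loss (%) and round-trip latency (ms) onto a 0-10 score;
    higher is better. Weights are illustrative assumptions, not Cisco's
    actual scoring formula."""
    loss_penalty = min(loss_pct / 5.0, 1.0)            # 5% loss => full penalty
    latency_penalty = min(latency_ms / max_latency_ms, 1.0)
    quality = 1.0 - (0.6 * loss_penalty + 0.4 * latency_penalty)
    return round(10.0 * quality, 1)

# Compare two hypothetical branch paths to the same SaaS application.
print(path_score(loss_pct=0.1, latency_ms=40))    # clean broadband path
print(path_score(loss_pct=3.0, latency_ms=220))   # congested path
```

Because every path gets a score on the same scale, the fabric can rank paths continuously and switch traffic when the ranking changes.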

In some deployments, enterprises connect remote branches to the SD-WAN fabric using inexpensive broadband Internet circuits, and they want to apply differentiated security policies depending on the type of services users are connecting to. For example, instead of sending all branch traffic to a secure web gateway (SWG) or cloud access security broker (CASB), an enterprise may wish to enforce its IT security policies in a targeted manner: routing regular Internet traffic through the SWG, while allowing performance-optimal direct connectivity for a limited set of sanctioned and trusted SaaS applications, such as Office 365. In such scenarios, Cloud onRamp for SaaS can be set up to dynamically choose the optimal path among multiple ISPs, both for applications permitted to go direct and for applications routed through the SWG per enterprise policy.
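The policy split described above can be sketched as a small routing decision: sanctioned SaaS applications are steered directly over the best-measured ISP path, while everything else goes through the secure web gateway. Application names, ISP labels, and the function shape are hypothetical illustrations, not Cisco's implementation.

```python
# Hypothetical sanctioned-application list defined by enterprise policy.
SANCTIONED_SAAS = {"office365", "salesforce"}

def choose_path(app, isp_scores):
    """Pick a next hop for `app`.

    isp_scores: dict of ISP name -> current path score (higher is better),
    e.g. as continuously measured by per-path probing.
    """
    if app in SANCTIONED_SAAS:
        # Direct Internet access over the best-performing ISP.
        best_isp = max(isp_scores, key=isp_scores.get)
        return ("direct", best_isp)
    # All other traffic is routed through the secure web gateway.
    return ("swg", "default-isp")

scores = {"isp-a": 9.1, "isp-b": 6.4}
print(choose_path("office365", scores))   # trusted SaaS goes direct
print(choose_path("streaming", scores))   # recreational traffic via SWG
```

The key property is that the direct path is re-evaluated as scores change, so a degraded ISP circuit stops carrying Office 365 traffic without any manual intervention.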

To learn more about Cloud onRamp for Office 365, read our white paper. For more information about Cisco SD-WAN, click here.

If you’re attending Microsoft Ignite in Orlando next week, make sure to visit Cisco at booth #418. I’d love to show you how to improve your Office 365 connectivity and user experience using Cisco SD-WAN.