Friday, 19 October 2018

Machine Learning is NOT Rocket Science: Part 1

Movies have always created a powerful mystique around artificial intelligence. For example, 2001: A Space Odyssey had the computer HAL 9000, which recognized astronauts, spoke to them, and even locked the door to prevent an astronaut from entering the spacecraft. In the Terminator movies, Skynet was a self-aware computer set on destroying humans. The awesome computer capabilities depicted in these and other movies are certainly entertaining, but they also create a mysticism about computers being omniscient, omnipresent, and even omnipotent. Parts of these fictional computer superpowers are actually reality in our pockets: for many of us, the smartphone can recognize our voices and faces, talk to us in different languages, and even lock doors. Despite how deeply artificial intelligence and machine learning are embedded in our lives, the mystical powers of fictional computers still give many the impression that using artificial intelligence or machine learning in business requires the wizardry of Merlin, the intellect of Einstein, and the national effort of a moon landing.

The reality is that machine learning has advanced to a point where it is no longer in the realm of rocket science. To take advantage of machine learning today, one does NOT need to know all the internal details of a machine learning algorithm, such as:

Auto-differentiation
Stochastic Gradient Descent
Kernel tricks of a support vector machine

One only has to be able to use software packages such as scikit-learn, TensorFlow, PyTorch, and many others. Rather than the mysticism of rocket science, the true barrier to entry for machine learning has been lowered to that of a software problem. Does having a data scientist who understands the machine learning algorithmic details help? Absolutely. However, data scientists do not need to know all the details of a machine learning algorithm to mine value out of data. This situation is very similar to that of a C programmer who may not understand the details of assembly language but can still develop sophisticated programs.

Is machine learning different from traditional programming? Yes. Historically, humans had to create software that takes input data and has the computer generate output data. For example, a programmer can try to write code that recognizes photos of cats and dogs by describing all the characteristics of cats and dogs (e.g., noses, ears, tails). Unfortunately, this is an exceptionally daunting problem because of the myriad variations among cats, dogs, and their respective breeds. Instead of writing such detailed instructions to recognize a pet, with supervised machine learning you feed the algorithm lots of labeled examples, such as photos that are properly identified as cats and dogs. Then, the machine learning algorithm can create a program, also known as a model, that can recognize cats and dogs with amazing accuracy. With this ability to recognize patterns in data, machine learning can be used in a variety of tasks, not just academic examples such as dog recognition. The once complex pattern recognition problem has become as simple as managing the labeled data and using the machine learning algorithms.
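To make the "learn from labeled examples" idea concrete, here is a toy supervised classifier in pure Python. The features and numbers are invented purely for illustration; a real project would use a package such as scikit-learn.

```python
# Toy supervised learning: instead of hand-coding rules for "cat vs. dog",
# we learn from labeled examples. Features (weight in kg, ear length in cm)
# are made up for illustration.

def train_nearest_centroid(examples):
    """Learn one centroid (mean feature vector) per label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, x in enumerate(features):
            acc[i] += x
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def predict(model, features):
    """Assign the label whose centroid is closest (squared Euclidean)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(model, key=lambda label: dist(model[label]))

labeled = [([4.0, 6.0], "cat"), ([5.0, 7.0], "cat"),
           ([20.0, 12.0], "dog"), ([30.0, 14.0], "dog")]
model = train_nearest_centroid(labeled)
print(predict(model, [4.5, 6.5]))  # a small animal -> "cat"
```

The point is the workflow, not the algorithm: the programmer supplies labeled data, and the training step produces the "program" (the model).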

AI / ML Write Code Based on Examples

Wednesday, 17 October 2018

Miercom Tests Endorse Cisco 1000 Series ISRs’ IPsec Encryption Performance

In both traditional and future SD-WAN network architectures, IPsec encryption performance is one of the most important technologies for secure delivery of customer traffic in branch routers. Higher IPsec throughput performance can also translate into improved customer experience and even revenue.

Miercom recently validated several models of Cisco, Huawei, and HPE fixed branch routers, measuring RFC 2544 IPsec encryption throughput. The testing shows that the Cisco 1111 Integrated Services Router (ISR) delivered the highest average IPsec throughput, 365 Mbps, compared to the Huawei and HPE fixed branch routers; the Huawei AR1220E reached only 245 Mbps. Each result is the average of 20 test runs, which makes the comparison reliable.

Table 1 shows the overall throughput performance comparison chart from the Miercom report.

Table 1. Competitive WAN performance

Let’s look at the result variation among the 20 test runs. See Table 2.

Table 2. WAN performance variation

The Huawei AR1220E fixed router shows the largest throughput variations. In other words, its throughput is not consistent when measured at different times under the same setup and environment. To customers, this could mean highly inconsistent throughput due to the complex processing of I/O, buffering, table lookups, queuing, and forwarding sessions. For a service provider, this could result in poor customer satisfaction.

If we look at the overall test result variations reported by Miercom, the two Cisco fixed ISRs, the 1117 and 1111, have the lowest variations in IPsec throughput results, while the three Huawei fixed routers, the AR1220E, AR169FGW-L, and AR201, show the highest variations. See Table 3. To customers, this means that if you pick Cisco fixed routers as your branch router for WAN services, you will get better and more consistent IPsec throughput performance, while if you pick Huawei fixed routers, the service may be very inconsistent.
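Why does run-to-run variation matter? Reporting both the mean and the spread of repeated runs is what makes a throughput claim trustworthy. The sketch below uses made-up numbers (not the Miercom data) to show the computation:

```python
import statistics

# Illustrative throughput samples in Mbps: router A is consistent,
# router B swings widely. These are invented numbers, not test results.
runs_router_a = [360, 368, 362, 365, 366, 364, 363, 367, 361, 369] * 2  # 20 runs
runs_router_b = [200, 290, 230, 260, 215, 275, 245, 250, 220, 265] * 2  # 20 runs

for name, runs in [("A", runs_router_a), ("B", runs_router_b)]:
    mean = statistics.mean(runs)
    stdev = statistics.stdev(runs)  # low stdev = consistent performance
    print(f"Router {name}: mean {mean:.0f} Mbps, stdev {stdev:.1f} Mbps")
```

Two routers can have respectable averages while one of them delivers a far less predictable experience, which is exactly what the standard deviation exposes.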

Table 3. Competitive WAN performance variability

For the full details, download the comprehensive Miercom report and accompanying test results.

Sunday, 14 October 2018

Building the 5G Business Case

2018 has been the year when 5G came out of the standards and into reality, with many trials throughout Asia Pacific (APAC). These trials have shown us not only which services could be supported, but how 5G should be deployed and what investments will likely be required for a commercial launch. At the recent 5G Asia event, the discussion moved from how and when 5G will be deployed to how we are going to pay for it. So what will a 5G investment business case look like?

The way I see it, there are three broad areas of focus for the initial 5G business case, considering both top-line and bottom-line drivers:

1. The initial focus area is the economics of meeting currently projected data capacity growth at a lower cost per bit. 3G/4G traffic is still growing at more than 100% year-over-year in some APAC markets and even jumped over 400% in India last year. The amount of capacity that 5G will enable depends primarily on how much new spectrum regulators make available, but based on allocations to date, we expect 5G to open up around 5x more bandwidth than 4G has today. Based on our modeling, 5G cost per bit could be less than half that of 4G, and a quarter of 3G costs. This will drive traffic migration and spectrum re-farming initiatives once 5G is launched.

2. 5G enables more customized services, or so-called slices. Most 4G services today are delivered over the same generic “bearer” regardless of the requirements or value of the service. With 5G networks, attributes like bandwidth (BW), latency, resiliency, and security can be customized per over-the-top (OTT), enterprise, or Internet of Things (IoT) application. The business case drivers here are both increased revenue share from new and differentiated services and a further improved cost to serve.

3. Third is the monetization of completely new services beyond what 4G can support today. This is where the cool new applications come in: low-latency Augmented Reality (AR)/Virtual Reality (VR), the tactile internet, super-high-definition media with Gbps bandwidth, and massively dense machine-to-machine (M2M) deployments. This area carries the most uncertainty for Return on Investment (RoI). Traditionally, the mobile industry hasn’t been great at predicting the next “killer” application, but we do know 5G is a step change in mobile capabilities, and with the right device and application ecosystem, the next killer app will come.
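As a back-of-the-envelope illustration of the cost-per-bit argument, the sketch below uses entirely made-up inputs (the TCO figures, capacity, and utilization are assumptions for illustration, not Cisco's modeling):

```python
# Cost per bit = total cost of ownership / total bits delivered over the
# asset's lifetime. All input numbers below are illustrative assumptions.

def cost_per_bit(tco_usd, capacity_gbps, utilization, years):
    seconds = years * 365 * 24 * 3600
    bits_delivered = capacity_gbps * 1e9 * utilization * seconds
    return tco_usd / bits_delivered

# Assumption: 5G opens ~5x the capacity of 4G at only modestly higher cost.
cpb_4g = cost_per_bit(tco_usd=10e6, capacity_gbps=10, utilization=0.3, years=5)
cpb_5g = cost_per_bit(tco_usd=12e6, capacity_gbps=50, utilization=0.3, years=5)
print(f"5G / 4G cost-per-bit ratio: {cpb_5g / cpb_4g:.2f}")
```

With these illustrative inputs, 5x the capacity at 1.2x the cost yields roughly a quarter of the 4G cost per bit, which is the shape of the argument behind traffic migration and spectrum re-farming.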

As we move from trial, to deployment, to launch of 5G services, the business model will continue to evolve, but at least for now, we have an initial view of how 5G will benefit service providers. Investment business cases based on lower cost per bit capacity and service differentiation are enough to show positive RoI, and monetizing the next 5G killer app will be the future icing on the cake.

Empowering Defenders: AMP Unity and Cisco Threat Response

Defenders have a lot of work to do and many challenges to overcome. The Cisco 2018 Security Capabilities Benchmark Study, which reached more than 3,600 customers across 26 countries, confirmed these assumptions. We have seen that defenders are struggling to orchestrate a mix of security products, which by itself may obfuscate rather than clarify the security landscape.

Let’s take a moment to imagine a security team and the tasks it performs daily. Reviewing ever-increasing numbers of alerts, attempting to correlate information from various sources to build a complete picture of each potential threat, and triaging and assigning priorities are all complex tasks performed under time pressure. The goal is to quickly come up with an adequate response strategy based on a clear understanding of the threat, its scope of compromise, and the potential damage it could cause. This process is error-prone and time-consuming when performed manually, and when understanding the alerts becomes a challenge, high-severity threats can slip through the defenses.

We have heard from the majority of customers that an integrated approach is easier to implement and more cost-effective. Listening to and understanding the needs of our customers has always been a priority for us. Therefore, to empower security analysts with effective weapons to defend their organizations, Cisco has built a security architecture that helps streamline security operations. Most recently we have developed two offerings, one a platform and the other a capability: Cisco Threat Response and AMP Unity. Both are exciting developments, and while they are different, they serve the same strategic goal.

AMP Unity


AMP Unity is a capability that allows organizations to register their AMP-enabled devices (Cisco NGFW, NGIPS, ESA, CES, and WSA with a Malware/AMP subscription) in the AMP for Endpoints Console. Those devices can then be seen and queried for sample observations in the same way the AMP for Endpoints Console already handles endpoints. This integration makes it possible to correlate file propagation data across all threat vectors in a single user interface (the Global File Trajectory view).

Global File Trajectory view (showcasing file transfer through an email gateway, down to the endpoint, across the network to another endpoint)

But it doesn’t stop there. AMP Unity also allows you to create common file whitelists and blacklists (through the same AMP for Endpoints Console) and enforce them across all registered AMP-enabled devices in the organization, alongside your AMP endpoints (Global Outbreak Control).
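As a rough sketch of what automating Global Outbreak Control could look like, the snippet below builds (but does not send) an API request to add a SHA-256 to a Simple Custom Detection list. The host, path, and list GUID here are assumptions based on the public AMP for Endpoints API conventions; verify them against the official API documentation before use.

```python
import base64

# Assumed AMP for Endpoints API host; check your regional cloud's docs.
API_HOST = "https://api.amp.cisco.com"

def build_blacklist_request(client_id, api_key, list_guid, sha256):
    """Construct the method, URL, and headers for adding a file hash to a
    Simple Custom Detection (blacklist) list. Nothing is sent here."""
    url = f"{API_HOST}/v1/file_lists/{list_guid}/files/{sha256}"
    token = base64.b64encode(f"{client_id}:{api_key}".encode()).decode()
    headers = {"Authorization": f"Basic {token}",
               "Accept": "application/json"}
    return "POST", url, headers

method, url, headers = build_blacklist_request(
    "my-client-id", "my-api-key",
    "11111111-2222-3333-4444-555555555555",  # hypothetical list GUID
    "a" * 64)                                # placeholder SHA-256
print(method, url)
```

Because AMP Unity enforces the list across all registered AMP-enabled devices, one call like this would propagate the block beyond the endpoints alone.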

Global Outbreak Control (adding a file to a Simple Detection list which enforces a blocking action across all AMP-enabled devices and endpoints)

In an incident response scenario, being able to quickly understand the scope of compromise and how threats propagate across the environment is essential. Being able to enforce policy consistently across the malware inspection gateways and endpoints helps security teams save time and address the threats that matter.

Keep in mind that AMP Unity is a capability. It doesn’t introduce new dashboards or policies – it’s all managed through the AMP for Endpoints Console. That helps you derive more value out of your existing AMP investments.

Cisco Threat Response


Cisco Threat Response is an innovative platform that brings together security-related information from Cisco and third-party sources into a single, intuitive investigation and response console. It does so through a modular design that serves as an integration framework for event logs and threat intelligence. Modules allow for the rapid correlation of data by building relationship graphs that, in turn, enable security teams to obtain a clear view of the attack and quickly take effective response actions.
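To make the relationship-graph idea concrete, here is a minimal, purely illustrative sketch. The observables and relations are invented; this is the general data structure, not the Threat Response data model.

```python
from collections import defaultdict

# Nodes are observables (hashes, IPs, domains); edges record how modules
# relate them. Data below is illustrative only.
graph = defaultdict(list)

def relate(a, b, relation):
    """Record a directed relation and its reverse, so either endpoint
    can be used as a pivot during an investigation."""
    graph[a].append((relation, b))
    graph[b].append((relation + " (reverse)", a))

relate("sha256:abc123...", "198.51.100.7", "connected_to")
relate("198.51.100.7", "malicious-domain.test", "resolves_to")

def neighbors(observable):
    """Everything directly linked to an observable, for pivoting."""
    return [target for _, target in graph[observable]]

print(neighbors("198.51.100.7"))
```

Even this toy version shows why graphs help: starting from one suspicious IP, an analyst immediately sees both the file that contacted it and the domain it resolves to.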

Cisco Threat Response Relationship Graph

As of the time of publishing this blog, Cisco Threat Response brings together event logs and threat intelligence from multiple Cisco and third-party modules. It’s likely that by the time you read this blog, the platform will have added additional modules and capabilities.

Cisco Threat Response Modules

The obvious value here is automation and the reduction of incident-response lag caused by switching between multiple user interfaces and attempting to correlate available data manually. That’s precisely what Threat Response does for you. The daily workflow is also streamlined through the integrated case management tool named “Casebook.” That is a tiny UI component that allows you to gather and pivot on observables, name your investigations, take notes, and much more. Casebooks are built on a cloud API and data storage and can be referenced by any product (with your credentials). Because of this, they can follow you from product to product, eventually across the entire Cisco Security portfolio.

Casebook

Cisco Threat Response is currently available to AMP for Endpoints and Threat Grid customers, who can take advantage of this powerful platform and the possibilities it provides today.

Tying AMP Unity and Cisco Threat Response Together


Considering both of these developments provide added value to security teams through tighter native integrations, how do they relate to each other? Simple – Cisco Threat Response queries correlated event telemetry from AMP for Endpoints and allows you to quickly take containment actions. It does so through the AMP for Endpoints API, via the AMP for Endpoints module enabled in Threat Response. Since AMP for Endpoints Console is a central place to correlate telemetry from AMP-enabled devices, this information can be used to enrich relationship graphs built by Threat Response. On top of that, Global Outbreak Control capabilities introduced by AMP Unity can be used through the Threat Response User Interface.

AMP Unity Events in Threat Response

AMP Unity brings your AMP-enabled device data to Threat Response via the AMP for Endpoints module, and in turn Threat Response allows you to quickly take action at both the endpoint and edge layers of your AMP deployment based on investigation results across all Threat Response data.

As Cisco continues to develop new modules for Threat Response, enabling AMP Unity will be an optional step to correlate event telemetry from AMP-enabled devices. Eventually Threat Response will be able to query these devices (WSA, ESA, CES, NGFW, NGIPS) directly without having to rely on the AMP for Endpoints module (which is especially important for customers who do not have AMP for Endpoints).

Friday, 12 October 2018

Meraki Wireless Health APIs Make Network Assurance Easier

As Meraki continues to drive cloud-managed networking into new markets, we continue to evolve our offerings to help customers and partners along the way. With large enterprises, campuses, and service providers all rapidly growing their Meraki wireless deployments, Meraki continues to innovate in these markets. As part of our strategy to make our rich data sources available to customers, we introduced Meraki Wireless Health and a brand-new product line, Meraki Insight, at Cisco Live Barcelona 2018.

In addition to the rapid adoption of Meraki Wireless Health, Meraki’s API platform has also seen rapid uptake by our customer and partner community. Since the introduction of APIs less than two years ago, the platform has grown to hundreds of unique API endpoints and over 20 million API calls a day.

Access all Meraki Wireless Health features via an API interface


Wireless Health has been an instant hit, with hundreds of thousands of customers now actively using it, and we have maintained our focus on wireless health to build out additional value for our customers. To build on these successes, Meraki is proud to announce that we are now launching full Wireless Health API endpoints. This is one of our largest API launches to date, and all of our customers now have access. These new API endpoints will make it easy for both Meraki’s Partner Solutions team and Cisco’s DevNet team to drive simple open source and partner solutions that can help simplify the management of wireless deployments of all types and for all verticals.

With this launch we are creating three key types of API endpoints:


1. Connection Health – Summarizes the connection health of a network, AP, or client.
2. Connection Failures – Returns a full list of association, authentication, DHCP, and DNS issues.
3. Network Latency – Summarizes the latency of a network, AP, or client.

The great thing about these new API endpoints is that we have designed them to be flexible. You can filter all of them by a specific VLAN or SSID, and you can request summary statistics for a window defined by start and end times.
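As a sketch of what such a call can look like, the snippet below constructs (without sending) a request for network-wide connection health filtered by SSID. The path and query parameters follow the Meraki Dashboard API conventions for the connection-stats endpoint, but treat the exact names as assumptions and confirm them against the current API reference.

```python
# Assumed Meraki Dashboard API base URL; verify against the API reference.
BASE = "https://api.meraki.com/api/v0"

def connection_stats_request(api_key, network_id, t0, t1, ssid=None, vlan=None):
    """Build the URL, headers, and query params for a network-level
    connection-health query. Nothing is sent here."""
    url = f"{BASE}/networks/{network_id}/connectionStats"
    headers = {"X-Cisco-Meraki-API-Key": api_key}
    params = {"t0": t0, "t1": t1}        # start / end of the window (epoch)
    if ssid is not None:
        params["ssid"] = ssid            # optional SSID filter
    if vlan is not None:
        params["vlan"] = vlan            # optional VLAN filter
    return url, headers, params

url, headers, params = connection_stats_request(
    "your-api-key", "N_1234", t0=1539000000, t1=1539086400, ssid=0)
print(url, params)
```

The same pattern applies to the failed-connections and latency endpoints: swap the path, keep the time-window and filter parameters.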

Leverage our DevNet community to build your own customized analysis or visualization

Using the new APIs to create working solutions


During the development of the Wireless Health API endpoints, Meraki worked with DevNet to validate the format of the endpoints and make them as robust as possible. Our teams also worked together to create a real working solution using the new endpoints; with the experienced DevNet team working alongside the Meraki team, we put together a full working demonstration within one week. Now that Meraki APIs are part of Cisco DevNet, we have access to over 500,000 DevNet developers who can create services and solutions based on the Meraki APIs.

Wireless infrastructure supports mobile POS devices


Together we created a solution that allows one of our retail customers to correlate how network health, customer foot traffic, and point-of-sale statistics all interact, in a single dashboard for over 50 separate locations! With mobile point-of-sale (POS) devices becoming a predominant method of in-store payment, the wireless infrastructure has become more important than ever. We wanted to create a dashboard where our customers could quickly check the overall current health of a retail store and also (thanks to some open source engineering) see its historical health. We also worked with the customer to create an overall health score for their retail stores, so that they can roll up this disparate and complex data set into a single ranking. This yields an incredibly powerful data point: the retailer can quickly identify poorly performing retail locations and, within seconds, dive in and investigate the root cause.
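A health-score rollup of this kind could look like the following sketch. The metric names and weights are illustrative assumptions, not the scoring the actual solution uses:

```python
# Illustrative weighting: each metric is pre-normalized to 0-100
# (higher = healthier), and the rollup is a weighted sum.
WEIGHTS = {"connection_success": 0.5, "latency": 0.3, "uptime": 0.2}

def store_score(metrics):
    """Collapse a store's health metrics into one 0-100 score."""
    return sum(metrics[name] * weight for name, weight in WEIGHTS.items())

stores = {
    "store-01": {"connection_success": 98, "latency": 90, "uptime": 100},
    "store-02": {"connection_success": 70, "latency": 60, "uptime": 95},
}

# Rank worst-first so the locations needing attention surface immediately.
ranked = sorted(stores, key=lambda s: store_score(stores[s]))
print("worst first:", ranked)
```

The value of the rollup is exactly this sort: a single ordered list of locations that tells operations where to dig in first.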

Complete visibility into the retail locations data using Kibana

Unleash the power of your data in the Meraki platform


This is just the beginning of Meraki unleashing the power of the data in our platform. In the months ahead, we will continue to release more network health metrics that will further streamline the running of enterprise networks. We are looking forward to all the great things our customers and partners are going to create with these newly introduced API endpoints, and our team will continue to leverage our agile development environment to drive more innovations in the networking space.

Since 2017 the Cisco Meraki team has been working with the Cisco DevNet team. As a result, over the last year a large number of partners have leveraged our API infrastructure to create solutions specific to unique use cases. We are constantly adding new solutions and partners to our ecosystem, and they are easily available for anyone to view.

Wednesday, 10 October 2018

Challenge Your Inner Hybrid Creativity with Cisco and Google Cloud

In recent years, Kubernetes has risen up in popularity, especially with the developer community. And why do developers love Kubernetes? Because it offers incredible potential for speed, consistency, and flexibility for managing containers. But containers are not all sunshine and roses for enterprises – with big benefits come some big challenges. Nobody loves deploying, monitoring, and managing container lifecycles, especially across multiple public and private clouds. On top of that, there are many choices when it comes to environments, which can also create a lot of complexity – there are simply too many tools and too little standardization.

Production grade container environments powered by Kubernetes


That’s why earlier this year Cisco launched the Cisco Container Platform, a turnkey solution for production-grade container environments powered by Kubernetes. The Cisco Container Platform automates the repetitive functions and simplifies the complex ones so everyone can go back to enjoying the magic of containers. It is a key element of Cisco’s overall container strategy and another way Cisco provides our customers with choices across public clouds.

Figure 1: Cisco Hybrid Cloud for Google Cloud

Hybrid cloud applications are the next big thing for developers


At the beginning of the year Cisco joined forces with Google Cloud on a hybrid cloud offering that, among other things, allows enterprises to deploy Kubernetes-based containers on-premises and securely connect with Google Cloud Platform.

In July at Google Cloud Next ’18, we kicked off the Cisco & Google Cloud Challenge.  (You still have until November 1, 2018 to enter the challenge and win prizes.) The idea behind it is to give developers a window into the possibilities for building hybrid cloud applications. Hybrid cloud applications are the next frontier for developers. There are so many innovation possibilities for the hybrid cloud infrastructure. That’s why we even named it “Two Clouds, infinite possibilities.”

Figure 2: Timeline for the Cisco & Google Cloud Challenge

An IoT edge use case for inspiration


Consider the following use case: assume we have a factory that generates a huge amount of data from sensors deployed across the physical building. We would like to analyze that data on-premises, but also take advantage of cloud services in Google Cloud Platform for further analysis. This could include running predictive analysis with machine learning (ML) on that data (e.g., predicting which machine part is going to break next). “Edge” here represents a generic class of use cases with these characteristics:

◈ Limited Network Bandwidth – Many manufacturing environments are remote, with limited bandwidth. Collecting data from hundreds of thousands of devices requires processing, buffering, and storage at the edge when bandwidth is limited. For instance, an offshore oil rig collects more than 50,000 data points per second, but less than 1% of this can be used in business decision making due to bandwidth constraints. Instead, analytics and logic can be applied at the edge, and summary decisions rolled up to the cloud.

◈ Data Separation & Partitioning – Often data from a single source needs to go to different and/or multiple locations or cloud services for analytics processing. Parsing the data at the edge to identify its final destination based on the desired analytics outcome allows you to route data more effectively, lower cloud costs and management overhead, and route data based on compliance or data-sovereignty needs, for example sending PCI, PII, or GDPR-classified data to one cloud or service while device or telemetry data routes to others. Additionally, data pre-processing can occur at the edge to munge data such as time-series formats into aggregates, reducing complexity in the cloud.

◈ Data Filtering – Most data just isn’t interesting, but you don’t know that until you’ve received it at a cloud service and decided to drop it on the floor. For example, fire alarms send the most boring data 99.999% of the time, until they send data that is incredibly important! There is often no need to store or forward this data until it is relevant to your business. Additionally, many data scientists now want to run individually trained models at the edge and, if data no longer fits a model or is an exception, send the entire data set to the cloud for re-training. Filtering with complex models also allows intelligent filtering at the edge that supports edge decision making.

◈ Edge Decision Making & Model Training – Training and storing ML models directly at the edge allows storing ephemeral models that may otherwise not be possible due to compliance or data sovereignty requirements. These models can act on ephemeral data that is not stored or forwarded, but still garner information and outcomes that can then be sent to centralized locations. Alternatively, models can be trained centrally in the cloud and pushed to the edge to perform any of the other listed edge functions. And when data no longer fits that model (such as collecting long tail time-series data) the entire data set can be aggregated to the cloud for retraining, and the model re-deployed to the edge endpoints.
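The data-filtering idea above can be sketched in a few lines. This is an illustrative toy (a rolling-mean band with made-up thresholds), not a production edge agent:

```python
# Edge-side filtering: forward only readings that deviate from the recent
# rolling mean; everything else stays at the edge. Thresholds are illustrative.
def filter_at_edge(readings, window=5, tolerance=5.0):
    forwarded = []
    history = []
    for value in readings:
        if len(history) >= window:
            mean = sum(history[-window:]) / window
            if abs(value - mean) > tolerance:
                forwarded.append(value)  # anomalous -> send to the cloud
        history.append(value)
    return forwarded

# A steady temperature sensor with one spike (the "fire alarm" moment).
sensor = [20.0, 20.1, 19.9, 20.2, 20.0, 20.1, 35.0, 20.0, 20.1]
print(filter_at_edge(sensor))  # -> [35.0]
```

Of nine readings, only the spike crosses the WAN link; the boring 99.999% never leaves the site, which is the whole point of filtering at the edge.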

Figure 3: Hybrid Cloud, Edge Compute Use-case

As a real-life example, here in Cisco DevNet we developed a use case for object recognition using video streams from IP cameras. The video gateway at the edge analyzed the video streams in real time and performed object detection, then passed each detected object to the Cisco Container Platform, which performed object recognition. The recognized object and all the associated metadata were stored at this layer, and an application in the public cloud queried this data to track the path of the object.

Give the Cisco & Google Cloud Challenge a try


There’s no doubt about the popularity of Kubernetes in the developer community. Cisco Hybrid Cloud Platform for Google Cloud takes away the complexity of managing private clusters and lets developers concentrate on the things they want to innovate on. Start with our DevNet Sandbox for CCP, reserve your instance and test-drive it for yourself.

The Cisco & Google Cloud Challenge is an awesome way to brainstorm and solve some real customer problems, and even win some prizes while you are at it. So consider this blog my invitation to everyone to give the Challenge a try before the November 1, 2018 deadline, and I wish you the very best!

Saturday, 6 October 2018

Enabling Enterprise-Grade Hybrid Cloud Data Processing with SAP and Cisco – Part 2

In part 1 of this blog series I talked about how data processing landscapes are getting more complex and heterogeneous, creating roadblocks for customers who want to adopt truly hybrid cloud data applications. At the beginning of this year, Cisco and SAP decided to join forces to bring the SAP Data Hub to the Cisco Container Platform. The goal is to provide a real end-to-end solution that helps customers tackle these challenges and enables them to become successful intelligent enterprises. We are focusing on a turnkey, enterprise-scale solution that fosters a seamless interplay of powerful hardware and sophisticated software.

Figure 1 Unified data integration and orchestration for enterprise data landscapes.

SAP brings to the table its data orchestration and refinement solution, SAP Data Hub. The solution offers a number of features that allow customers to manage and process data in complex landscapes spanning on-premise systems and multiple clouds. SAP Data Hub supports connecting the different systems in a landscape to a central hub, giving a first overview of all systems involved in data processing within a company. Beyond that, the Data Hub can scan, profile, and crawl those sources to retrieve the metadata and characteristics of the data they store. With this, SAP Data Hub provides a holistic data landscape overview in a central catalog and allows companies to answer the central questions about data positioning and governance.

Furthermore, SAP Data Hub allows the definition of data pipelines that enable data processing and landscape orchestration across all connected systems. Data pipelines consist of operators (small, independent computation units) that form a joint computation graph. The functionality an operator provides can range from very simple read operations and transformations (e.g., changing a date format from US to EU style), to interacting with a connected system, to invoking a complex machine learning model. The operators apply their transformations as the data flows through the defined pipeline. This kind of data processing shifts the paradigm from static, transactional ETL processes to a more dynamic, flow-based data processing model.
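The operator model can be illustrated with a toy flow: each operator is a small function (for instance, the US-to-EU date conversion mentioned above), and a pipeline is just their composition over the records flowing through. This is a sketch of the concept, not the SAP Data Hub API:

```python
from datetime import datetime
from functools import reduce

def us_to_eu_date(record):
    """Operator: convert an 'MM/DD/YYYY' date string to 'DD.MM.YYYY'."""
    d = datetime.strptime(record["date"], "%m/%d/%Y")
    return {**record, "date": d.strftime("%d.%m.%Y")}

def uppercase_name(record):
    """Operator: normalize the name field."""
    return {**record, "name": record["name"].upper()}

def pipeline(*operators):
    """Compose operators into one function applied to each flowing record."""
    return lambda record: reduce(lambda r, op: op(r), operators, record)

process = pipeline(us_to_eu_date, uppercase_name)
print(process({"date": "10/06/2018", "name": "alice"}))
```

Because each operator is independent, a pipeline can mix trivial transformations with operators that call out to connected systems or ML models, which is the flow-based alternative to a monolithic ETL job.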

With all of this functionality, we kept in mind that to succeed in bridging enterprise data and big data, we need to be open to connecting not only SAP enterprise systems but also the systems commonly used in the big data space (see Figure 2). For this purpose, SAP Data Hub focuses on an open connectivity paradigm, providing a large number of connectors to different kinds of cloud and on-premise data management systems and fostering the integration of enterprise data and big data.

All of this makes SAP Data Hub a powerful enterprise application that allows customers to orchestrate and manage their complex system landscapes. However, a solution like the Data Hub would be nothing without a powerful and flexible platform. Customers are increasingly turning toward containerized applications, with Kubernetes as the orchestrator of choice, to handle the requirements of efficiently processing large volumes of data. For this reason, it was a clear decision to move SAP Data Hub in this direction as well: it is completely containerized and uses Kubernetes as its platform and foundation.

Figure 2 SAP & Cisco delivering turn-key solutions for complex enterprise data landscapes.

This is where Cisco, with its Cisco Container Platform (CCP) on the hyperconverged Cisco HyperFlex hardware, comes into play. Providing elastically scalable container clusters as a single turnkey solution that covers on-premise and cloud environments with one infrastructure stack is key for enterprise customers doing big data analytics. With the Cisco Container Platform on HyperFlex 3.0, Cisco offers a fully integrated and flexible container-as-a-service offering with lifecycle management for hardware and software components. It provides 100% upstream Kubernetes with integrated networking, system management, and security. In addition, it utilizes modern technologies such as Istio and Cloud Connect VPN to efficiently bridge on-premise services and cloud services from different providers. Accordingly, it accelerates cloud-native transformation and application delivery in hybrid cloud enterprise environments, clearly embracing the multi-cloud world and helping to solve multi-cloud challenges. Furthermore, the CCP monitors the entire hardware and Kubernetes platform, allowing customers to identify issues and unhealthy usage patterns proactively and to troubleshoot container clusters quickly.

Accordingly, the CCP is the perfect foundation for deploying SAP Data Hub in complex, multi-cloud and hybrid cloud customer landscapes. We complemented the solution with Scality RING, an enterprise-ready scale-out file and object storage that fulfills the major requirements for production use, e.g., guaranteed reliability, availability, and durability. This adds a data lake to the on-premise solution, allowing price-efficient storage of mass data. In addition, we added network traffic load balancing with the advanced AVI Networks load balancers, which provide intelligent automation and monitoring for improved routing decisions. Both additions greatly benefit the CCP and complete it as a full big data management and processing foundation.

With the release of SAP Data Hub on the Cisco Container Platform, running on HyperFlex 3.0 and complemented with Scality RING and AVI Networks load balancers, announced during SAP TechEd Las Vegas, customers have the option to receive a turnkey, full-stack solution to tackle the challenges of modern enterprise data landscapes. They can start fast, remain flexible, and receive full-stack support from Cisco's world-class engineering support and SAP's advanced support services. Together, SAP and Cisco enable customers to win the race for the best data processing in the digital economy.