
Sunday, 26 January 2020

An Update on the Evolving Cisco and SAP Strategic Partnership

As Cisco’s SAP ambassador, I’m often asked, “Tell me about the Cisco and SAP partnership.” Many may not know, but in 2019 we celebrated twenty years of Cisco and SAP working strategically together—always with the objective of benefiting our mutual customers. Innovation has been an intense focus for the partnership, which is why, for example, Cisco became a founding sponsor of the SAP co-innovation lab in 2014.


Today, the Cisco and SAP partnership touches many business units at Cisco; what began with optimizing Cisco Data Center products to run SAP software has evolved to include other strategic areas such as Internet of Things (IoT), cloud computing, big data processing, AI/ML, and collaboration.

SAP Data Hub on Cisco Container Platform


As an example of software co-innovation, Cisco Container Platform (CCP) is certified for the SAP Data Hub and includes support for use cases such as hybrid cloud big data processing. Many SAP Data Hub customers want to run in hybrid cloud environments to leverage cloud-based services, while also keeping some data on premises to meet security and governance requirements.

SAP Data Hub is SAP's first microservices-based, containerized application, and it enables users to orchestrate, aggregate, visualize, and generate insights from across their entire data landscape. SAP Data Hub runs anywhere Kubernetes runs.

Unfortunately, running Kubernetes on premises has its challenges. For instance, IT must answer questions about how to manage and support Kubernetes. In addition, it's challenging to connect the private and public cloud environments and complicated to manage user access and authorizations across multiple environments.

The integration of SAP Data Hub with CCP addresses these challenges. CCP is a production-ready Kubernetes container management platform based on 100 percent upstream Kubernetes and delivered with Cisco enterprise-class Technical Assistance Center (TAC) support. It reduces the complexity of configuring, deploying, securing, scaling, and managing containers via automation. CCP works across on-premises and public cloud environments.

The Cisco and SAP teams are working closely to bring to fruition the next iteration of SAP's multicloud strategy for on-premises deployments: SAP Data Intelligence, which marries SAP Data Hub to AI/ML.

AppDynamics monitors SAP environments


Cisco has enhanced AppDynamics, its application performance monitoring product, to monitor SAP environments. This engineering effort includes giving AppDynamics code-level visibility into SAP ABAP, the primary programming language for SAP applications.

This new capability provides direct hooks that enable AppDynamics to measure the business process performance of SAP applications. And though SAP has its own monitoring solution, AppDynamics enables SAP customers to monitor their business processes across SAP and non-SAP solutions.

Monitoring is of special importance to SAP customers because their systems often consist of SAP and non-SAP components. For example, at a minimum, an online retail e-commerce system likely consists of a web server connected to an SAP ERP system, and slow checkout can potentially drive customers away. Unfortunately, it is time-consuming and difficult for engineering teams to diagnose where in the stack a performance issue is occurring.
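To make the diagnosis problem concrete, here is a rough, generic timing sketch in Python. This is not AppDynamics instrumentation, and the segment names and simulated delays are invented; it only illustrates how splitting a checkout into its web-tier and SAP-call segments immediately shows which leg of the stack is slow.

```python
# Generic illustration of per-tier timing, not AppDynamics instrumentation.
# The segment names and simulated delays are invented for the example.
import time
from contextlib import contextmanager

@contextmanager
def timed(segment: str, timings: dict):
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[segment] = round(time.perf_counter() - start, 3)

def checkout(order_id: str) -> dict:
    timings = {}
    with timed("web_tier", timings):
        time.sleep(0.05)          # stand-in for cart/session handling
    with timed("sap_erp_order_create", timings):
        time.sleep(0.40)          # stand-in for the call into the SAP ERP system
    return timings

print(checkout("A-1001"))  # e.g. {'web_tier': 0.05, 'sap_erp_order_create': 0.4}
```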

Cisco DNA Spaces


Everyone is talking about IoT and digital transformation. However, a big challenge in deploying an IoT strategy is the need to put sensors everywhere, which represents a huge investment of capital, time and resources.

As a leading network provider, Cisco can help customers meet this challenge because, in many cases, a wireless network is already in place. A wireless access point not only acts as a transmission device; enabled with Cisco DNA Spaces, it can also sense things. For instance, an access point can track how many mobile phones are connected, for how long, and where they are located at all points in time. By combining geo-location information with enterprise data, businesses get closer to achieving the IoT promise of using data from things to make better decisions.

Consider this scenario: the owner/operator of a shopping mall wants to know not only the quantity of foot traffic but also where visitors go within the mall. By combining this data with SAP ERP data such as lease fees and analyzing it, the owner/operator can set fair lease prices for shops located in lower- versus higher-traffic areas.
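As a rough illustration of that analysis (the zone names, visit counts, and fees below are invented, and the record layout is not an actual Cisco DNA Spaces or SAP ERP schema), joining per-zone visit counts with lease fees yields a simple fee-per-visit figure that makes high- and low-traffic shops directly comparable:

```python
# Illustrative only: zone names, visit counts, and fees are invented, and the
# record layout is not an actual Cisco DNA Spaces or SAP ERP schema.
from collections import namedtuple

ZoneTraffic = namedtuple("ZoneTraffic", "zone monthly_visits")
Lease = namedtuple("Lease", "zone tenant monthly_fee_eur")

traffic = [
    ZoneTraffic("ground-floor-east", 120_000),
    ZoneTraffic("second-floor-west", 18_000),
]
leases = [
    Lease("ground-floor-east", "Shoe Store", 9_000),
    Lease("second-floor-west", "Book Shop", 7_500),
]

visits_by_zone = {t.zone: t.monthly_visits for t in traffic}

for lease in leases:
    visits = visits_by_zone.get(lease.zone, 0)
    fee_per_visit = lease.monthly_fee_eur / visits if visits else float("inf")
    print(f"{lease.tenant:12} {lease.zone:18} "
          f"visits={visits:7d} fee/visit={fee_per_visit:.3f} EUR")
```

Shops with a high fee per visit are paying a lot relative to the traffic they actually receive, which is exactly the fairness signal the owner/operator is after.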

Through Cisco and SAP co-engineering, the rich on-location people and things data provided by Cisco DNA Spaces is now integrated with SAP software, enabling our mutual customers to gain additional insights into what’s happening in their businesses.

Cisco Data Center solutions for SAP


Finally, Cisco UCS-based converged infrastructure solutions—which were launched over a decade ago—are at the heart of the infrastructure running many SAP workloads today. These solutions blend secure connectivity, programmable computing, multicloud orchestration, and cloud-based management with operational analytics for our customers’ SAP data centers.

We continue to innovate around these data center solutions to support evolving use cases such as providing support for machine learning applications. Cisco Data Center solutions, for example, have now integrated NVIDIA GPUs and are certified to support Intel® Optane, which enables persistent memory, larger memory pools, faster caching, and faster storage.

The next twenty years …


As Cisco’s SAP ambassador, I’ve seen over and over again how Cisco and SAP’s portfolios complement each other. For example, a key SAP mission is to help its customers become intelligent enterprises, which requires robust connectivity at all customer touchpoints. This mission, of course, meshes with Cisco’s core competency as the world’s leading network provider.

As we continue to innovate, Cisco and SAP will continue our laser focus on co-engineering innovations that deliver the value our mutual customers require in their evolving business environments.

Wednesday, 22 January 2020

Artificial Intelligence Translational Services Use Cases in Cisco Contact Centers

Artificial Intelligence and Translational Services



The world is flattening: as business becomes increasingly global, existing language barriers demand new solutions across vertical markets, especially when a company is dealing with consumers.

Europe, with its 24 official languages, certainly poses extra challenges for companies delivering services across the countries of the union, and that is before counting the more than 200 languages spoken on the continent.

The language barrier has always added complexity to international business, and this is especially true for Contact Centers and the high-quality customer experience they have to deliver in business-to-consumer services. In business-to-business settings there is a de facto international language, English. When consumers are involved, that is no longer an option: companies have to deal with the many languages spoken across countries.

Narrowing the Call Center Gap


With new generations of consumers speaking their mother tongue when calling a contact center, the translation problem will not disappear anytime soon. We should even expect it to become more challenging because of increasing immigration: in 2017, 2.4 million immigrants entered the EU from non-EU countries, and 22.3 million (4.4%) of the 512.4 million people living in the EU on 1 January 2018 were non-EU citizens.*

While these new immigrants will learn the local languages over time, they still need to access services, especially public services, and this is quite a challenge, particularly for public administrations. In theory, a Contact Center could address these challenges by employing multilanguage agents, or simply more agents. Still, it is clear that this is far from an optimal solution, and the associated costs are not negligible.

Beyond that, we are not talking about supporting two or three different languages, but rather a multitude of them. To grasp the complexity of such a model, consider the challenges it poses to a European contact center service in terms of workforce management and optimization. Delivering a satisfying Customer Experience is no longer just a matter of how many agents are needed each hour of the day, day of the week, and week of the month, but also of how many different languages they can speak: an authentic nightmare.


The Growing Need for Multilanguage Agents


It also happens quite often that multilanguage agents are good at speaking two or three languages but not necessarily at writing them. The challenge is therefore even greater for Digital Contact Centers.

Recent advances in speech technology and Natural Language Understanding (NLU) have the potential to transform today's challenges into new opportunities. Artificial Intelligence, integrated into Cisco Cognitive Contact Centers, could deliver an excellent solution to business problems like those described above. For example, a digital Cisco Cognitive Contact Center could leverage Google Dialogflow capabilities to provide a Chat Translation Assistance Service, able to remove language complexity and cost from the "Contact Center Workforce Optimization equation." Let's see how in the following proof of concept example:

Watch the Video:


This is the logical architecture used in the video.
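As a minimal sketch of the translation step at the heart of such a service (assuming the google-cloud-translate client library and configured credentials; the helper functions below are illustrative and not part of any Cisco contact center API), an inbound consumer message can be translated into the agent's language and the agent's reply translated back:

```python
# Minimal sketch of the translation step only; assumes google-cloud-translate
# is installed and GOOGLE_APPLICATION_CREDENTIALS is configured.
# These helpers are illustrative, not a Cisco Contact Center API.
from google.cloud import translate_v2 as translate

client = translate.Client()

def to_agent(text: str, agent_lang: str = "en") -> str:
    """Translate an inbound consumer chat message into the agent's language."""
    return client.translate(text, target_language=agent_lang)["translatedText"]

def to_consumer(text: str, consumer_lang: str) -> str:
    """Translate the agent's reply back into the consumer's language."""
    return client.translate(text, target_language=consumer_lang)["translatedText"]

# Example: an Italian consumer chats with an English-speaking agent.
inbound = "Buongiorno, vorrei verificare lo stato del mio ordine."
print(to_agent(inbound))                              # agent sees English
print(to_consumer("Your order ships today.", "it"))   # consumer sees Italian
```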


Another use case we may want to consider as a proof of concept is when a traditional audio-only contact center is located in another country, where the cost of labor is lower and agents can speak the required language even though they are not native speakers. This is the case, for example, for North African French-speaking contact centers, Eastern European Italian-speaking contact centers, and many more.

In cases like these, Cisco Cognitive Contact Centers powered by Artificial Intelligence could deliver an Audio Transcription and Translation Agent Assistance Service meant to assist the agent in dealing with foreign languages in a more natural, quicker, and more productive way. Let’s see how in the following proof of concept example:

Watch the video:


This is the logical architecture used in the video.
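As a similarly hedged sketch of the agent-assist idea (assuming the google-cloud-speech and google-cloud-translate client libraries with configured credentials; the audio format, language codes, and function name are illustrative, not part of the Cisco solution), a caller's utterance can be transcribed and then translated for the agent:

```python
# Sketch only: assumes google-cloud-speech and google-cloud-translate are
# installed and credentials configured; audio is a short LINEAR16 WAV clip.
from google.cloud import speech
from google.cloud import translate_v2 as translate

def transcribe_and_translate(wav_bytes: bytes,
                             spoken_lang: str = "fr-FR",
                             agent_lang: str = "it") -> str:
    """Transcribe a caller's utterance and translate it for the agent."""
    stt = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=16000,
        language_code=spoken_lang,
    )
    audio = speech.RecognitionAudio(content=wav_bytes)
    response = stt.recognize(config=config, audio=audio)
    transcript = " ".join(r.alternatives[0].transcript for r in response.results)

    translated = translate.Client().translate(transcript, target_language=agent_lang)
    return translated["translatedText"]

# with open("caller_utterance.wav", "rb") as f:
#     print(transcribe_and_translate(f.read()))
```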


Transforming Customer Experience with Contact Center AI


The Contact Center business is going through a series of significant changes driven by technology innovation, the rise of social channels, and the new consumption models being evaluated by most companies.

From a technology angle, there is very little doubt that the advent of Artificial Intelligence is transforming traditional call centers into Cognitive Call Centers. This shift is turning an IT cost into a business strategy tool that improves Customer Experience, achieves higher customer service levels and quality, increases agent productivity, and even elevates the agents' traditional role to a new one: customer advisors and consultants.

Cisco has a portfolio of on-premises, hybrid, and cloud contact center solutions. It covers the ongoing migration to the cloud and the demand for a versatile, open, consistent architecture across on-premises, hybrid, and cloud deployments, one able to grant a smooth transition to the broad base of existing customers while allowing continuous innovation through new digital channels and artificial intelligence.

Saturday, 3 August 2019

How To Provision a Production-Grade Kubernetes Cluster From Anywhere, With Just One Button (Literally)

Do you remember?


I bet all of you who are working or playing with Kubernetes still remember perfectly the first time you tried to install it.

And then the second time.

And then the third time.



And finally, the one that it worked.

And most likely, if you're a professional, you also remember the long path to acquiring the Kubernetes expertise required to set up and fine-tune production-grade clusters to run apps.

Or, if Kubernetes is not part of your job's scope, you probably remember how much time it took to find someone able to perform a valid Kubernetes install…and how much it cost.

To save our customers all this time and effort, Cisco released Cisco Container Platform (CCP), a turnkey solution to easily provision production-grade Kubernetes clusters on-premises or in the cloud in minutes, with a few mouse clicks and little to no knowledge of Kubernetes. All the required network, storage, and security integrations are done automatically by CCP, so the provisioned clusters are ready to run in production. Clusters provisioned by CCP come equipped with properly configured monitoring and logging tools such as Elasticsearch, Fluentd, and Kibana. Through the Container Network Interface (CNI) you can choose whether to leverage Cisco ACI as the network infrastructure or alternatives such as Contiv or Calico (no dependence on the underlying infrastructure). With CCP you can take care of the full life cycle of the Kubernetes cluster: you can easily perform Kubernetes software upgrades, node upgrades, cluster scale-up or scale-down, and cluster deletion.

This is already good, and if you follow our cloud announcements you might already know it, so I thought I'd create a demo that pushes the simplicity of those "few mouse clicks" to its limit, making it possible to create a production-grade cluster in just one click, literally.


Introducing the Kubernetes dash button.

The concept is fairly simple: build a dash button that, once pressed, creates a production-grade Kubernetes cluster ready to use.

Leveraging the rich set of Cisco Container Platform (CCP) APIs, this is almost too easy, so I decided to add a few more features on top:

◈ I wanted to provision the cluster and access it purely through the dash button, so I wanted CCP to display the IP address of the master node of the newly created cluster on the dash button itself.

◈ I wanted bi-directional communication between the dash button and CCP itself, so that I could check on the dash button that CCP correctly received the provisioning request and confirm that the provisioning process had started and then finished.

◈ I wanted reasonable battery life so I would not have to recharge the button every day, which meant the electronics had to be able to sleep or hibernate.

◈ My lab, where I have the infrastructure and CCP, is behind a proxy and therefore not reachable from the outside world, which meant I had to find a way for my lab to initiate the communication with the dash button by actively polling for button presses.

◈ I wanted to use the button everywhere I go without worrying about the local Wi-Fi settings

How it works


To satisfy all the above requirements I added a couple of elements to the picture, ending up with the following architecture:


The button is based on an Arduino-programmed ESP32 board. It connects via Wi-Fi to my smartphone and uses its internet connection, so I can use the button anywhere my phone has a data signal. A publish-subscribe messaging service (MQTT) on the internet is used to bypass the proxy limitations; I hosted the MQTT broker at home, but you can provision one on AWS or use a free cloud MQTT service. Once pressed, the button publishes a special message to the MQTT service. Inside my lab, a couple of scripts constantly poll the MQTT service and, as soon as they detect the special message, they invoke the right Cisco Container Platform API to trigger the provisioning of a shiny new Kubernetes cluster. Once the cluster is provisioned, the IP address of the master node is returned, through the MQTT service, to the dash button, which shows it on its display. At this point the Kubernetes cluster is ready to accept connections and run applications.
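The lab-side logic might look roughly like the Python sketch below (using the paho-mqtt and requests libraries). The broker address, topic names and, above all, the CCP endpoint path and payload are placeholders invented to show the flow; the real CCP API calls, authentication, and cluster parameters differ.

```python
# Lab-side sketch: listen for the dash-button message on MQTT, call the
# Cisco Container Platform API, and publish the master node IP back.
# Broker, topic names, the CCP URL/endpoint, and payload are illustrative only.
import json
import requests
import paho.mqtt.client as mqtt   # paho-mqtt 1.x style client API

BROKER = "mqtt.example.com"
BUTTON_TOPIC = "dashbutton/press"
RESULT_TOPIC = "dashbutton/result"
CCP_URL = "https://ccp.lab.example.com"   # hypothetical CCP address

def provision_cluster() -> str:
    """Ask CCP to create a cluster and return the master node IP (sketch)."""
    resp = requests.post(f"{CCP_URL}/api/clusters",           # placeholder path
                         json={"name": "dash-button-cluster"},
                         verify=False, timeout=600)
    resp.raise_for_status()
    return resp.json().get("master_ip", "unknown")

def on_message(client, userdata, msg):
    if msg.payload.decode() == "provision":
        client.publish(RESULT_TOPIC, "request received, provisioning...")
        master_ip = provision_cluster()
        client.publish(RESULT_TOPIC, json.dumps({"master_ip": master_ip}))

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(BUTTON_TOPIC)
client.loop_forever()   # the button's display shows whatever lands on RESULT_TOPIC
```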

I went to town with it and added a 3D-printed enclosure to complete the project. I initially downloaded an existing model, but then, since CCP can deploy Kubernetes clusters both on-premises and in the cloud, I designed two different enclosures, shown in the pictures below, so that I can have a separate dash button for each deployment target.


Now, every time before I present this demo, I ask my customers: "How much time and effort does it take you to install a production-grade, fully operationalized and secured Kubernetes cluster?" Whatever answer I get, I know I can reply, "I can do it in two minutes, blindfolded and cuffed."

Wednesday, 10 October 2018

Challenge Your Inner Hybrid Creativity with Cisco and Google Cloud

In recent years, Kubernetes has risen up in popularity, especially with the developer community. And why do developers love Kubernetes? Because it offers incredible potential for speed, consistency, and flexibility for managing containers. But containers are not all sunshine and roses for enterprises – with big benefits come some big challenges. Nobody loves deploying, monitoring, and managing container lifecycles, especially across multiple public and private clouds. On top of that, there are many choices when it comes to environments, which can also create a lot of complexity – there are simply too many tools and too little standardization.

Production grade container environments powered by Kubernetes


That's why earlier this year Cisco launched the Cisco Container Platform, a turnkey solution for production-grade container environments powered by Kubernetes. The Cisco Container Platform automates the repetitive functions and simplifies the complex ones so everyone can go back to enjoying the magic of containers. The Cisco Container Platform is a key element of Cisco's overall container strategy and another way Cisco provides our customers with choice across various public clouds.


Figure 1: Cisco Hybrid Cloud for Google Cloud

Hybrid cloud applications are the next big thing for developers


At the beginning of the year Cisco joined forces with Google Cloud on a hybrid cloud offering that, among other things, allows enterprises to deploy Kubernetes-based containers on-premises and securely connect with Google Cloud Platform.

In July at Google Cloud Next ’18, we kicked off the Cisco & Google Cloud Challenge.  (You still have until November 1, 2018 to enter the challenge and win prizes.) The idea behind it is to give developers a window into the possibilities for building hybrid cloud applications. Hybrid cloud applications are the next frontier for developers. There are so many innovation possibilities for the hybrid cloud infrastructure. That’s why we even named it “Two Clouds, infinite possibilities.”


Figure 2: Timeline for the Cisco & Google Cloud Challenge

An IoT edge use case for inspiration


Consider the following use case: assume we have a factory that generates a huge amount of data from sensors deployed across the physical building. We would like to analyze that data on premises but also take advantage of cloud services in Google Cloud Platform for further analysis. This could include running predictive analysis with Machine Learning (ML) on that data (for instance, which machine part is going to break next). "Edge" here represents a generic class of use cases with these characteristics:

◈ Limited Network Bandwidth – Many manufacturing environments are remote, with limited bandwidth. Collecting data from hundreds of thousands of devices requires processing, buffering, and storage at the edge when bandwidth is limited. For instance, an offshore oil rig collects more than 50,000 data points per second, but less than 1% of this can be used in business decision making due to bandwidth constraints. Instead, analytics and logic can be applied at the edge, and summary decisions rolled up to the cloud.

◈ Data Separation & Partitioning – Often data from a single source needs to go to different and/or multiple locations or cloud services for analytics processing. Parsing the data at the edge to identify its final destination based on the desired analytics outcome allows you to route data more effectively, lower cloud costs and management overhead, and route data based on compliance or data sovereignty needs; for example, sending PCI, PII, or GDPR-classified data to one cloud or service while device or telemetry data routes to others. Additionally, data pre-processing can occur at the edge to munge data such as time-series formats into aggregates, reducing complexity in the cloud.

◈ Data Filtering – Most data just isn't interesting. But you don't know that until you've received it at a cloud service and decide to drop it on the floor. For example, fire alarms send the most boring data 99.999% of the time. Until they send data that is incredibly important! There is often no need to store or forward this data until it is relevant to your business. Additionally, many data scientists now want to run individually trained models at the edge and, if data no longer fits a model or is an exception, to send the entire data set to the cloud for re-training. Filtering with complex models also allows intelligent filtering at the edge that supports edge decision making (a toy sketch of this filtering idea follows this list).

◈ Edge Decision Making & Model Training – Training and storing ML models directly at the edge allows storing ephemeral models that may otherwise not be possible due to compliance or data sovereignty requirements. These models can act on ephemeral data that is not stored or forwarded, but still garner information and outcomes that can then be sent to centralized locations. Alternatively, models can be trained centrally in the cloud and pushed to the edge to perform any of the other listed edge functions. And when data no longer fits that model (such as collecting long tail time-series data) the entire data set can be aggregated to the cloud for retraining, and the model re-deployed to the edge endpoints.
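Here is that toy filtering sketch: readings below a threshold are rolled up into periodic summaries at the edge, and only exceptions and summaries are forwarded. The reading format, threshold, and forward_to_cloud stub are all invented for illustration.

```python
# Toy edge-filtering sketch: only anomalies and periodic summaries leave the
# edge; everything else is processed and discarded locally. The reading
# format and forward_to_cloud() stub are invented for illustration.
from statistics import mean

THRESHOLD_C = 85.0        # forward anything hotter than this immediately
SUMMARY_EVERY = 1000      # otherwise roll up a summary every N readings

def forward_to_cloud(payload: dict) -> None:
    print("-> cloud:", payload)   # stand-in for an MQTT/HTTP publish

def filter_at_edge(readings):
    buffer = []
    for r in readings:                       # r = {"sensor": str, "temp_c": float}
        if r["temp_c"] > THRESHOLD_C:
            forward_to_cloud({"type": "alert", **r})
        else:
            buffer.append(r["temp_c"])
        if len(buffer) >= SUMMARY_EVERY:
            forward_to_cloud({"type": "summary",
                              "count": len(buffer),
                              "mean_temp_c": round(mean(buffer), 2)})
            buffer.clear()

# filter_at_edge(stream_from_sensors())   # hypothetical reading source
```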


Figure 3: Hybrid Cloud, Edge Compute Use-case

As a real-life example, here in Cisco DevNet we developed a use case for object recognition using video streams from IP cameras. The video gateway at the edge analyzed the video streams in real time, performed object detection, and passed the detected objects to the Cisco Container Platform, which performed object recognition. The recognized objects, and all the associated metadata, were stored at this layer. An application to query this data was written in the public cloud to track the path of an object.

Give the Cisco & Google Cloud Challenge a try


There’s no doubt about the popularity of Kubernetes in the developer community. Cisco Hybrid Cloud Platform for Google Cloud takes away the complexity of managing private clusters and lets developers concentrate on the things they want to innovate on. Start with our DevNet Sandbox for CCP, reserve your instance and test-drive it for yourself.

The Cisco & Google Cloud Challenge is an awesome way to brainstorm and solve some real customer problems and even win some prizes while you are at it. So, consider this blog as me inviting everyone to give the Challenge a try, and wishing you the very best! You have until Nov 1, 2018 to enter the challenge and win prizes.

Saturday, 6 October 2018

Enabling Enterprise-Grade Hybrid Cloud Data Processing with SAP and Cisco – Part 2

In part 1 of this blog series I talked about how data processing landscapes are getting more complex and heterogeneous, creating roadblocks for customers who want to adopt truly hybrid cloud data applications. At the beginning of this year, Cisco and SAP decided to join forces to bring SAP Data Hub to the Cisco Container Platform. The goal is to provide a real end-to-end solution that helps customers tackle these challenges and enables them to become successful intelligent enterprises. We are focusing on providing a turnkey, enterprise-scale solution that fosters a seamless interplay of powerful hardware and sophisticated software.


Figure 1 Unified data integration and orchestration for enterprise data landscapes.

SAP brings to the game its novel data orchestration and refinery solution, SAP Data Hub. The solution provides a number of features that allow customers to manage and process data in complex data landscapes spanning on-premises systems and multiple clouds. SAP Data Hub supports connecting the different systems in a landscape to a central hub to gain a first overview of all systems involved in data processing within a company. Beyond that, Data Hub is able to scan, profile, and crawl those sources to retrieve the metadata and characteristics of the data they store. With that, SAP Data Hub provides a holistic data landscape overview in a central catalog and allows companies to answer the central questions about data positioning and governance.

Furthermore, SAP Data Hub allows the definition of data pipelines that enable data processing and landscape orchestration across all connected systems. Data pipelines consist of operators, small independent computation units, that together form a joint computation graph. The functionality an operator provides can range from very simple read operations and transformations (e.g., changing the date format from US to EU), to interacting with a connected system, to invoking a complex machine learning model. The operators invoke their functionality and apply their transformations as data flows through the defined pipeline. This kind of data processing changes the paradigm from static, transactional ETL processes to a more dynamic, flow-based data processing model.
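To make the operator-and-graph idea concrete (purely as a plain-Python analogy, not SAP Data Hub's actual operator API), the sketch below chains three tiny operators, a reader, a US-to-EU date transformation like the example above, and a writer, so that records flow through the pipeline one at a time:

```python
# Plain-Python analogy of a flow-based pipeline: three small "operators"
# chained into a graph; this is not SAP Data Hub's actual operator API.
from datetime import datetime

def read_source(rows):                        # operator 1: read
    for row in rows:
        yield dict(row)

def us_to_eu_date(rows, field="order_date"):  # operator 2: transform
    for row in rows:
        d = datetime.strptime(row[field], "%m/%d/%Y")
        row[field] = d.strftime("%d.%m.%Y")
        yield row

def write_sink(rows):                         # operator 3: write
    for row in rows:
        print("sink <-", row)

orders = [{"id": 1, "order_date": "10/06/2018"},
          {"id": 2, "order_date": "01/31/2019"}]

# Wire the operators into a pipeline; records flow through lazily, one by one.
write_sink(us_to_eu_date(read_source(orders)))
```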

With all of this functionality, we kept in mind that to succeed in bridging enterprise data and big data, we need to be open to connecting not only SAP enterprise systems but also the common systems used within the big data space (see Figure 2). For this purpose, SAP Data Hub focuses on an open connectivity paradigm, providing a large number of connectors to different kinds of cloud and on-premises data management systems and fostering the integration between enterprise data and big data.

All of that makes SAP Data Hub a powerful enterprise application that allows customers to orchestrate and manage their complex system landscapes. However, a solution like Data Hub would be nothing without a powerful and flexible platform. Customers are increasingly turning to containerized applications, with Kubernetes as the orchestrator of choice, to handle the requirements of efficiently processing large volumes of data. For this reason, it was a clear decision to move SAP Data Hub in this direction as well: it is completely containerized and uses Kubernetes as its platform and foundation.


Figure 2 SAP & Cisco delivering turn-key solutions for complex enterprise data landscapes.

This is where Cisco, with its advanced Cisco Container Platform (CCP) on its hyperconverged hardware solution Cisco HyperFlex, comes into the game. Providing elastically scalable container clusters as a single turnkey solution covering on-premises and cloud environments with a single infrastructure stack is key for enterprise customers involved in big data analytics. With Cisco Container Platform on HyperFlex 3.0, Cisco offers a fully integrated and flexible container-as-a-service offering with lifecycle management for hardware and software components. It provides 100% upstream Kubernetes with integrated networking, system management, and security. In addition, it uses modern technologies such as Istio and Cloud Connect VPN to efficiently bridge on-premises and cloud services from different cloud providers. Accordingly, it accelerates cloud-native transformation and application delivery in hybrid cloud enterprise environments, clearly embracing the multicloud world and helping to solve multicloud challenges. Furthermore, CCP monitors the entire hardware and Kubernetes platform, allowing customers to identify issues and non-beneficial usage patterns proactively and to troubleshoot container clusters quickly.

Accordingly, CCP is the perfect foundation for deploying SAP Data Hub in complex, multicloud and hybrid cloud customer landscapes. We complemented the solution with Scality RING, an enterprise-ready scale-out file and object storage system that fulfills the major characteristics for production-ready usage, e.g., guaranteed reliability, availability, and durability. This adds a data lake to the on-premises solution, allowing price-efficient storage for mass data. In addition, we added network traffic load balancing with advanced AVI Networks load balancers, which provide intelligent automation and monitoring for improved routing decisions. Both additions greatly benefit CCP and complete it as a full big data management and processing foundation.

With the release of SAP Data Hub on the Cisco Container Platform, running on HyperFlex 3.0 and complemented by Scality RING and AVI Networks load balancers, announced during SAP TechEd Las Vegas, customers have the option to receive a turnkey, full-stack solution to tackle the challenges of modern enterprise data landscapes. They can start fast, they remain flexible, and they receive full-stack support from Cisco's world-class engineering support and SAP's advanced support services. Together, SAP and Cisco enable customers to win the race for the best data processing in the digital economy.

Wednesday, 6 June 2018

Microservices Deployments with Cisco Container Platform

Technological developments in the age of Industry 4.0 are accelerating some business sectors at a head-spinning pace. Innovation is fueling the drive for greater profitability. One way that tech managers are handling these changes is through the use of microservices, enabled by containers. And as usual, Cisco is taking advantage of the latest technologies.

From Cost Center to Profit Center


In this new world, IT departments are being asked to evolve from cost centers to profit centers. However, virtualization and cloud computing are not enough. New services developed in the traditional way often take too long to adapt to existing infrastructures.

Because of such short life cycles, IT professionals need the tools to implement these technologies almost immediately. Sometimes one company may have many cloud providers in a multicloud environment. Containers give IT managers the control they were used to in the data center.

Microservices and Containers


But what if you could break up these entangled IT resources into smaller pieces, then make them work independently on any existing platform? Developers find that this combination of microservices and containers offers much greater flexibility and scalability. Containers offer significant advantages over mere virtualization: they supercharge today's state-of-the-art hyperconverged platforms, and they are cost-effective.

A remaining challenge is to get companies to use containers. The adoption of a new technology often depends on how easy it is to deploy. One of the early players in container orchestration is Kubernetes. But getting Kubernetes up and running can be a major task. You can do it the hard way using this tutorial from Kelsey Hightower. Or you can take the easy route, using Google Kubernetes Engine (GKE).

Cisco Container Platform


Another easy-to-use solution is the Cisco Container Platform (CCP). It takes advantage of the company's robust hardware platforms and software orchestration capabilities. CCP uses reliable Cisco equipment to enable users to deploy Kubernetes, with options for adding cloud security, audit tools, and connectivity to hybrid clouds. Notice the growing popularity of the Kubernetes platform.


Use Cases


Space does not permit the inclusion of all the potential use cases of Cisco Container Platform and its accompanying software solution. Here are just a few examples we would like to highlight:

#1: Kubernetes in your Data Center

For agility and scale, nothing beats native Kubernetes. Developers can easily deploy and run container applications without all the puzzle pieces required in traditional deployments. This means a new app can be up and running in minutes rather than days or weeks. Just create one or more Kubernetes clusters in Cisco Container Platform using the graphical user interface. If more capacity is needed for special purposes, simply add new nodes. CCP supports app lifecycle management with Kubernetes clusters and allows for continuous monitoring and logging.

#2: Multi-tier App Deployment Using Jenkins on Kubernetes

Developers are often frustrated by the time it takes to get their applications into production using traditional methods. But these days it's critical to get releases out fast. Using open-source solutions, Cisco Container Platform is able to create the continuous integration/continuous delivery (CI/CD) pipeline that developers are looking for. CCP takes advantage of Jenkins, an open-source automation server, running on Kubernetes.

BayInfotech (BIT) works closely with customers to implement these CI/CD integrations on the Cisco Container Platform. While it may seem complicated, once the infrastructure is set up and running, developers find it easy to create and deploy new code into the system.

#3: Hybrid Cloud with Kubernetes

The Cisco Container Platform makes it easier for customers to deploy and manage container-based applications across hybrid cloud environments. Currently, a hybrid cloud environment can be achieved between HyperFlex as the on-premises data center and GKE as the public cloud.

#4: Persistent Data with Persistent Volumes

Containers are not meant to retain data indefinitely. In the case of deletion, eviction, or node failure, all container data may be lost. Retaining data requires the use of persistent volumes and persistent volume claims. When a container crashes for any reason, application data is retained on the persistent volume, and customers can reuse the persistent volumes to relaunch the application deployment so they never lose application data.
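As a hedged sketch of how an application team might request such storage programmatically (using the official Python Kubernetes client against whatever storage class the cluster exposes; the namespace and storage class name below are placeholders), a PersistentVolumeClaim can be created like this:

```python
# Sketch: create a PersistentVolumeClaim with the Python Kubernetes client.
# The namespace and storage class name are placeholders for whatever the
# CCP-provisioned cluster actually exposes.
from kubernetes import client, config

config.load_kube_config()                      # or load_incluster_config()
core = client.CoreV1Api()

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="standard",         # placeholder storage class
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

core.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)
# Pods that mount this claim keep their data across container crashes and
# redeployments, as long as the claim (and its bound volume) is retained.
```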

Monday, 19 March 2018

Why Contiv?

At our core, even as we expand into other cloud markets, Cisco is fundamentally a networking company, and Contiv plays an important role in carrying that legacy into the microservices future that so many developers are gravitating towards. As more about our relationship with Google becomes public, it is important to revisit this key component, which solves a critical problem facing anybody who wants to run container clusters at scale in a way that can interact with existing infrastructure.

Sunday, 4 February 2018

Cisco Container Platform – Kubernetes for the Enterprise

Developed by Google to shepherd their in-house container clusters, Kubernetes has been vying for the attention and adoption of cloud architects. For the past several years, Docker Swarm, Mesos, and Kubernetes have battled to bring orchestration nirvana to containerized applications. While there are other participants in the fray, and while Mesos has had a longer showing, Kubernetes seems to be capturing the pole position according to this research. This assertion is reinforced by the recent addition of support for Kubernetes to Apache Mesos, to Pivotal Container Service, and to Cloud Foundry. The most recent admission of market realities is Docker's seamless integration of Kubernetes into their Enterprise Edition offering. Whichever container orchestrator eventually emerges as the de facto standard, it is clear that enterprises are looking for more and more infrastructure abstraction so they can laser-focus on core business objectives.