A famous line: “I feel the need… the need for speed!” When I talk to data scientists, they often express the same sentiment. Charged with mining value out of numerous data sources, they feel they need ever more speed to get their jobs done. To support the data scientists, IT teams are always looking for performance, scale, and flexibility. Clearly, accelerating artificial intelligence and machine learning performance is critical to an enterprise’s success.
In anticipation of Cisco Live Barcelona, I am excited to share some of the recent developments from the UCS team that help customers accelerate AI/ML deployments.
Performance
First, Cisco has published the performance characteristics of the UCS C480 ML. With eight NVIDIA Tesla V100 GPUs, this server is optimized for deep learning workloads, as demonstrated with several popular convolutional neural networks for image classification.
[Figure: C480 ML TensorFlow Training Performance]
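To give a concrete sense of what this kind of multi-GPU training run looks like, here is a minimal sketch using TensorFlow’s MirroredStrategy to spread an image-classification model across all available GPUs. It is illustrative only, not the benchmark code behind the published numbers, and it assumes TensorFlow 2.x with synthetic, ImageNet-shaped data standing in for a real dataset.

```python
# Minimal multi-GPU image-classification training sketch (illustrative only).
# Assumes TensorFlow 2.x and a machine with one or more GPUs, such as the
# eight Tesla V100s in a C480 ML; synthetic data keeps it self-contained.
import tensorflow as tf

# Mirror the model across every visible GPU.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # A stock ResNet-50, one of the CNNs commonly used in training benchmarks.
    model = tf.keras.applications.ResNet50(weights=None, classes=1000)
    model.compile(
        optimizer=tf.keras.optimizers.SGD(learning_rate=0.1, momentum=0.9),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )

# Synthetic ImageNet-shaped batches; a real run would stream preprocessed
# image records from storage instead.
images = tf.random.uniform((256, 224, 224, 3))
labels = tf.random.uniform((256,), maxval=1000, dtype=tf.int32)
dataset = tf.data.Dataset.from_tensor_slices((images, labels)).batch(64)

model.fit(dataset, epochs=1)
```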
Scale
While the performance of a single server is important, customers also need clusters of servers that can scale with growing demand. At Cisco Live Barcelona, we look forward to showing two demos.
NGC on Hortonworks 3
Many customers already have a big data lake, and almost everyone is looking to mine additional value out of that data. With Hortonworks 3, the Hadoop cluster can support containers running on both CPUs and GPUs. This unified cluster architecture lets customers do traditional extract, transform, and load (ETL) before feeding the data to a deep learning algorithm. Since Cisco announced its intent to support the NVIDIA NGC-Ready program, which provides GPU-enabled software containers, Cisco Live Barcelona will be the first time Cisco demonstrates NGC containers running on UCS, in this case as part of the Hortonworks cluster. In short, the Hortonworks Hadoop cluster can support the complete data pipeline.
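As a rough illustration of the ETL half of that pipeline, the following PySpark sketch cleans raw records on HDFS and lands them as Parquet for a downstream GPU training job on the same cluster. The paths and column names are hypothetical, and this is a sketch of the pattern rather than the demo code itself.

```python
# Minimal PySpark ETL sketch: curate raw data on HDFS so a downstream
# deep-learning step (for example, an NGC TensorFlow container scheduled on
# the same cluster) can consume it. Paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-before-training").getOrCreate()

# Extract: raw events written by upstream applications.
raw = spark.read.json("hdfs:///data/raw/events/")

# Transform: drop malformed rows and derive the fields the model needs.
features = (
    raw.where(F.col("image_uri").isNotNull())
       .withColumn("label", F.col("category").cast("int"))
       .select("image_uri", "label")
)

# Load: columnar output that the GPU training step reads back from HDFS.
features.write.mode("overwrite").parquet("hdfs:///data/curated/training_set/")

spark.stop()
```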
NGC on Red Hat OpenShift
While some customers prefer Hadoop as the clustering tool, others are choosing Red Hat OpenShift as their Kubernetes container orchestrator with enterprise support. In Barcelona, we will also show NGC running on OpenShift, supporting multiple data scientists with both interactive and batch workloads.
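For a sense of what scheduling an NGC container on a Kubernetes-based cluster such as OpenShift involves, here is a minimal sketch using the Kubernetes Python client to launch a GPU pod. The namespace, image tag, and training script are assumptions for illustration, and the cluster is assumed to expose GPUs through the NVIDIA device plugin.

```python
# Minimal sketch: launch an NGC container as a GPU-requesting pod via the
# Kubernetes Python client, which also works against an OpenShift cluster.
# Namespace, image tag, and script path are illustrative assumptions.
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running in-cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="ngc-tf-train"),
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="trainer",
                image="nvcr.io/nvidia/tensorflow:19.01-py3",  # an NGC image (assumed tag)
                command=["python", "/workspace/train.py"],    # hypothetical script
                resources=client.V1ResourceRequirements(
                    limits={"nvidia.com/gpu": "1"}  # request one GPU
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="data-science", body=pod)
```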
Flexibility
Kubeflow Pipelines
While many customers have the bulk of their data on-premises, some data scientists would like to run machine learning experiments in the cloud. In November 2018, Google announced Kubeflow Pipelines. At Cisco Live Barcelona, in partnership with Google, Cisco will demonstrate Kubeflow Pipelines running on both Google Cloud and UCS, enabling a true hybrid cloud experience.
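The appeal of that hybrid model is that the same pipeline definition runs wherever Kubeflow is deployed. Below is a minimal sketch using the classic kfp SDK (v1-style ContainerOp): a single training step compiled into a package that can be uploaded to a Kubeflow Pipelines deployment on Google Cloud or on UCS. The container image and script are hypothetical placeholders, not the demo pipeline itself.

```python
# Minimal Kubeflow Pipelines sketch with the kfp SDK (v1-style ContainerOp).
# The image and training script are illustrative assumptions.
import kfp
from kfp import dsl


@dsl.pipeline(
    name="train-on-prem-or-cloud",
    description="One training step that runs wherever Kubeflow is deployed.",
)
def training_pipeline(epochs: int = 5):
    dsl.ContainerOp(
        name="train",
        image="nvcr.io/nvidia/tensorflow:19.01-py3",  # assumed NGC image
        command=["python", "/workspace/train.py"],    # hypothetical script
        arguments=["--epochs", epochs],
    )


if __name__ == "__main__":
    # Produces an archive that the Kubeflow Pipelines UI or API can run.
    kfp.compiler.Compiler().compile(training_pipeline, "training_pipeline.tar.gz")
```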
Inferencing on HyperFlex 4.0
Inferencing is also a critical part of machine learning. In Barcelona, we are also showcasing HyperFlex 4.0 as an edge computing system in a retail environment, where video streams can be used for customer sentiment analysis.
[Figure: HyperFlex 4.0 for Inferencing on the Edge]
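To make the edge-inferencing pattern concrete, here is a minimal sketch that pulls frames from a video stream with OpenCV and classifies them with a pretrained CNN. A stock MobileNetV2 stands in for a purpose-built sentiment model, and the stream URL is hypothetical; this is illustrative, not the HyperFlex demo code.

```python
# Minimal edge-inferencing sketch: classify frames from a video stream.
# MobileNetV2 is a stand-in for a sentiment model; the RTSP URL is assumed.
import cv2
import numpy as np
import tensorflow as tf

model = tf.keras.applications.MobileNetV2(weights="imagenet")
stream = cv2.VideoCapture("rtsp://edge-camera.local/stream1")  # hypothetical camera

while True:
    ok, frame = stream.read()
    if not ok:
        break
    # Resize the BGR frame, convert to RGB, and preprocess for the network.
    img = cv2.resize(frame, (224, 224))[:, :, ::-1]
    batch = tf.keras.applications.mobilenet_v2.preprocess_input(
        np.expand_dims(img.astype("float32"), axis=0)
    )
    preds = model.predict(batch)
    top = tf.keras.applications.mobilenet_v2.decode_predictions(preds, top=1)[0][0]
    print(f"frame class: {top[1]} ({top[2]:.2f})")

stream.release()
```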
Cisco Live Barcelona 2019
I can’t wait to share with all of you the UCS solutions that can accelerate artificial intelligence and machine learning adoption. With the raw performance of the UCS C480 ML, scale with Hortonworks 3 and Red Hat OpenShift, and flexibility with Kubeflow Pipelines and inferencing on HyperFlex 4.0, I am looking forward to helping our customers make a genuine difference on their business problems.