Thursday, 25 November 2021

Accelerating Analytics Workloads with Cloudera, NVIDIA, and Cisco


As today’s leading companies utilize artificial intelligence/machine learning (AI/ML) to discover insights hidden in massive amounts of data, many are realizing the benefits of deploying in a hybrid or private cloud environment, rather than a public cloud. This is especially true for use cases with data sets larger than 2 TB or with specific compliance requirements.

In response, Cisco, Cloudera, and NVIDIA have partnered to deliver an on-premises big data solution that integrates Cloudera Data Platform (CDP) with NVIDIA GPUs running on the Cisco Data Intelligence Platform (CDIP).

Cisco Data Intelligence Platform: a journey to hybrid cloud

CDIP is a thoughtfully designed private cloud that supports data lake requirements. As a private cloud, CDIP is based on the new Cisco UCS M6 family of servers, which support NVIDIA GPUs and third-generation Intel Xeon Scalable processors with fourth-generation PCIe capabilities.

CDIP supports data-intensive workloads on the CDP Private Cloud Base. The CDP Private Cloud Base provides storage and supports traditional data lake environments, including Apache Ozone (a next-generation object store for data lakes).

◉ CDIP built with the Cisco UCS C240 M6 Server for storage (Apache Ozone and HDFS) supports the CDP Private Cloud Base and extends the capabilities of the Cisco UCS rack server portfolio with third-generation Intel Xeon Scalable processors, offering more than 43 percent more cores per socket and 33 percent more memory than the previous generation.


CDIP also supports compute-rich (AI/ML) and compute-intensive workloads with CDP Private Cloud Experiences, all while providing storage consolidation with Apache Ozone on the Cisco UCS infrastructure. CDP Private Cloud Experiences provides persona-based processing of workloads (for example, data analyst, data scientist, and data engineer) for data stored in the CDP Private Cloud Base.

◉ CDIP built with the Cisco UCS X-Series for CDP Private Cloud Experiences is a modular system that is adaptable and future-ready, meeting the needs of modern applications. The solution improves operational efficiency and agility at scale.

This CDIP solution is fully managed through Cisco Intersight. Cisco Intersight simplifies hybrid cloud management, and, among other things, moves server management from the network into the cloud.

Cisco also provides multiple Cisco Validated Designs (CVDs), which are available to assist in deploying this private cloud big data solution.

Integrating a big data solution to tackle AI/ML workloads


Increasingly, market-leading companies are recognizing the true transformational potential of AI/ML trained by their data. Data scientists are utilizing data sets on a magnitude and scale never seen before, implementing use cases such as transforming supply chain models, responding to increased levels of fraud, predicting customer churn, and developing new product lines. To be successful, data scientists need the tools and underlying processing power to train, evaluate, iterate, and retrain their models to obtain highly accurate results.

On the software side of such a solution, many data scientists and engineers rely on the CDP to create and manage secure data lakes and provide the machine learning-derived services needed to tackle the most common and important analytics workloads.

But to deploy a solution built on the CDP, IT also needs to decide where the underlying processing power and storage should reside. If processing is too slow, the utility of the insights derived can diminish greatly. On the other hand, if costs are too high, the project risks being deemed cost-prohibitive and never funded at the outset.

Data set size a major consideration for big data AI/ML deployments


The sheer size of the data to be processed and analyzed has a direct impact on the cost and speed at which companies can train and operate their AI/ML models. Data set size can also heavily influence where to deploy infrastructure—whether in a public, private, or hybrid cloud.

Consider, for example, an autonomous driving use case. Working with a major automobile manufacturer, Cisco ran a Cisco Data Intelligence Platform proof of concept (POC) that collected data from approximately 150 cars. Each car generates about 2 TB of data per hour, which collectively adds up to some 2 PB of data ingested every day and stored in the company’s data lake. Moving this data into a public cloud would be staggeringly expensive, so an on-premises, private cloud option makes more financial sense.

Furthermore, this data lake contains about 50 PB of hot data that is stored for a month and hundreds of petabytes of cold data that must also be stored.
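The back-of-the-envelope arithmetic behind these figures can be sketched in a few lines. Note that the hours-of-driving-per-day value is an assumption introduced here (the article does not state it) chosen to reconcile the per-car rate with the stated daily total:

```python
# Rough sizing for the autonomous-driving POC.
# Figures from the text: 150 cars, ~2 TB/hour per car, ~2 PB/day ingested,
# hot data retained for one month. DRIVING_HOURS_PER_DAY is an assumption.
CARS = 150
TB_PER_CAR_HOUR = 2
DRIVING_HOURS_PER_DAY = 7  # assumed; makes the per-car rate match ~2 PB/day

fleet_rate_tb_per_hour = CARS * TB_PER_CAR_HOUR                          # 300 TB/hour
daily_ingest_pb = fleet_rate_tb_per_hour * DRIVING_HOURS_PER_DAY / 1000  # ~2.1 PB/day
month_of_ingest_pb = daily_ingest_pb * 30  # same order as the ~50 PB hot tier cited

print(f"Fleet ingest rate:   {fleet_rate_tb_per_hour} TB/hour")
print(f"Daily ingest:        {daily_ingest_pb:.1f} PB")
print(f"One month of ingest: {month_of_ingest_pb:.0f} PB")
```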

Considering infrastructure performance


The performance of the underlying infrastructure also matters in many AI/ML deployments. In our autonomous driving example, the POC requirement is to run more than 1.5 million simulations each day. Providing enough compute performance to meet this requirement takes a combination of general-purpose CPUs and GPU acceleration.
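The sustained throughput this requirement implies is easy to quantify. The per-core figure below uses the POC's 50,000-core count and assumes simulations are spread evenly across cores, which is a simplification:

```python
# Required simulation throughput for the POC target of 1.5 million runs/day.
SIMS_PER_DAY = 1_500_000
SECONDS_PER_DAY = 24 * 60 * 60

sims_per_second = SIMS_PER_DAY / SECONDS_PER_DAY  # ~17.4 simulations/second
print(f"Required sustained rate: {sims_per_second:.1f} simulations/second")

# Spread evenly over the POC's 50,000 cores (an even-distribution
# simplification), each core must complete 30 simulations per day:
CORES = 50_000
sims_per_core_per_day = SIMS_PER_DAY / CORES  # 30.0
print(f"Per core: {sims_per_core_per_day:.0f} simulations/day "
      f"(one every {24 * 60 / sims_per_core_per_day:.0f} minutes)")
```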

To meet this requirement, CDIP starts from top-of-the-line performance, as demonstrated by TPCx-HS benchmark results. CDIP is also available with integrated NVIDIA GPUs, delivering a GPU-accelerated data center to power the most demanding CDP workloads. For this POC, the CDIP solution, deployed on Cisco UCS rack servers, provided 50,000 cores along with GPU-accelerated compute nodes.
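The article does not name the software layer that exposes the GPUs to CDP workloads; in Cloudera and NVIDIA deployments, Spark SQL/ETL jobs are typically GPU-accelerated through the RAPIDS Accelerator for Apache Spark. A minimal configuration sketch follows; the jar path and resource amounts are illustrative placeholders, not values from this solution:

```properties
# Illustrative Spark properties for the RAPIDS Accelerator
# (jar path and resource amounts are placeholders)
spark.plugins=com.nvidia.spark.SQLPlugin
spark.rapids.sql.enabled=true
spark.executor.resource.gpu.amount=1
spark.task.resource.gpu.amount=0.25
spark.jars=/opt/sparkRapidsPlugin/rapids-4-spark.jar
```

With the plugin enabled, supported SQL and DataFrame operations are transparently offloaded to the GPU, while unsupported operations fall back to the CPU.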

Source: cisco.com
