With the GPU Technology Conference underway in San Jose, California, Lenovo is excited to announce that we are expanding our support of NVIDIA® Tesla® V100 GPUs, based on the Volta architecture, to include the newly announced 32GB platform. Our focus on the burgeoning Artificial Intelligence (AI) market necessitates that we provide our customers with powerful tools such as the Tesla V100 32GB GPU to run their deep learning (DL), machine learning (ML), and inference workloads. In High Performance Computing (HPC), our customers will put the Tesla V100 32GB to work powering their research on Lenovo ThinkSystem servers as they try to solve some of humanity’s greatest challenges.
Beginning in June, Lenovo will support the Tesla V100 32GB on our ultra-dense ThinkSystem SD530, which was named the 2017 HPCwire Readers’ Choice winner for Best HPC Server, and on our best-selling ThinkSystem SR650 rack system. This will give customers quality ThinkSystem platforms across HPC, machine learning, deep learning, and inference workloads, enhanced with NVIDIA’s highest-performance GPU.
Many of today’s larger, more complex deep learning (DL) models can take days or even weeks to train. Customers working with our AI Innovation Centers to develop and test their ML and DL models have experienced these delays with larger datasets. By doubling the memory capacity of the Tesla V100 to 32GB, customers can increase the training batch size and substantially reduce training time, accelerating time to insight.
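To see why more GPU memory translates into larger batch sizes, consider a back-of-the-envelope calculation. The function and all numbers below (model footprint, per-sample activation memory) are illustrative assumptions, not measured figures for any particular model:

```python
def max_batch_size(gpu_mem_gb, model_mem_gb, per_sample_mb):
    """Rough upper bound on training batch size.

    gpu_mem_gb    -- total GPU memory (e.g. 16 or 32 for a Tesla V100)
    model_mem_gb  -- assumed memory held by weights/optimizer state
    per_sample_mb -- assumed activation memory per training sample
    """
    free_mb = (gpu_mem_gb - model_mem_gb) * 1024
    return int(free_mb // per_sample_mb)

# Hypothetical workload: 4 GB of model state, 150 MB of activations per sample.
batch_16gb = max_batch_size(16, 4.0, 150)  # 16GB V100
batch_32gb = max_batch_size(32, 4.0, 150)  # 32GB V100
print(batch_16gb, batch_32gb)
```

Because the model's fixed footprint is paid only once, doubling total memory can more than double the usable batch size under these assumptions, which is one reason larger-memory GPUs shorten time to train.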
This higher memory configuration allows HPC applications, especially those that are memory-constrained, to run larger simulations more efficiently than ever before. For example, large 3D FFT calculations, which are commonly used in seismic, climate, and signal-processing applications, are up to 50% faster with the Tesla V100 32GB.
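For readers unfamiliar with the workload, a 3D FFT transforms an entire volume of data at once, so the whole grid must fit in memory; this is what makes the operation memory-hungry and a good fit for a 32GB GPU. A minimal CPU-side sketch using NumPy (on a GPU one would typically use a library such as cuFFT, which is not shown here):

```python
import numpy as np

# A small illustrative 3D grid; real seismic or climate volumes can be
# thousands of points per axis, which is why GPU memory capacity matters.
grid = np.random.rand(8, 8, 8)

# Forward 3D FFT over the whole volume, then inverse to recover the input.
spectrum = np.fft.fftn(grid)
recovered = np.fft.ifftn(spectrum).real

print(spectrum.shape)                  # same shape as the input grid
print(np.allclose(grid, recovered))    # round trip recovers the original
```

The memory cost scales with the cube of the grid edge length (and the complex-valued spectrum takes twice the bytes of a real input), so doubling GPU memory meaningfully raises the largest volume that can be transformed in one pass.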
Making AI & HPC Easier
Making life easier for the person running the models and the datasets is a primary goal for Lenovo. The Tesla V100 32GB’s larger memory capacity saves data scientists time by eliminating much of the tuning and balancing that smaller-memory GPUs require to achieve good performance.
We are intensely focused on making AI easier to implement, saving our customers time and, consequently, money. Lenovo Intelligent Computing Orchestrator (LiCO) is a prime example, simplifying and automating the entire AI software and hardware stack, including GPUs. Learn more about LiCO.
Lenovo is excited about partnering with NVIDIA to bring the power of the new NVIDIA Tesla V100 32GB GPU to our AI and HPC customers.