Nvidia and Intel show machine learning performance gains on latest MLPerf Training 2.1 results

Nov 9, 2022 | Technology


MLCommons is out today with its latest set of MLPerf machine learning (ML) benchmark results, once again showing how hardware and software for artificial intelligence (AI) are getting faster.

MLCommons is a vendor-neutral organization that aims to provide standardized testing and benchmarks to help evaluate the state of ML software and hardware. Under the MLPerf testing name, MLCommons collects different ML benchmarks multiple times throughout the year. In September, the MLPerf Inference results were released, showing gains in how different technologies have improved inference performance.

Today's new MLPerf results span three benchmark suites: Training 2.1, for ML training; HPC 2.0, for large systems including supercomputers; and Tiny 1.0, for small and embedded deployments.

“The key reason why we’re doing benchmarking is to drive transparency and measure performance,” David Kanter, executive director of MLCommons, said during a press briefing. “This is all predicated on the key notion that once you can actually measure something, you can start thinking about how you would improve it.”


How the MLPerf training benchmark works

Looking at the training benchmark in particular, Kanter said that MLPerf isn't just about hardware; it's about software, too.

In ML systems, models must first be trained on data before they can operate. The training process benefits from accelerator hardware as well as optimized software.
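MLPerf Training scores are based on how quickly a system can train a model to a predetermined quality target. As a rough illustration of that "time to train to target" idea (not the actual MLPerf harness, which uses full-scale models and datasets), here is a toy sketch that fits a one-parameter linear model with gradient descent and reports how long it takes to reach a chosen loss target:

```python
import time
import numpy as np

def time_to_target(target_loss=1e-3, lr=0.1, seed=0):
    """Toy 'time to train' metric: fit y = w * x with gradient descent
    and measure how long it takes to reach a predetermined quality
    target (here, mean-squared-error loss). Illustrative only; real
    MLPerf Training runs use full models, datasets, and accelerators."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, 256)
    y = 3.0 * x  # synthetic data; ground-truth weight is 3.0

    w = 0.0
    steps = 0
    start = time.perf_counter()
    while True:
        err = w * x - y
        loss = float(np.mean(err ** 2))
        if loss <= target_loss:
            break  # quality target reached; stop the clock
        w -= lr * 2.0 * np.mean(err * x)  # gradient of MSE w.r.t. w
        steps += 1
    elapsed = time.perf_counter() - start
    return w, steps, elapsed
```

Both faster hardware and better-optimized software shorten the measured time, which is why the benchmark captures the whole stack rather than raw chip speed.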

Kanter explained that the MLPerf Training benchmark starts with a predetermined …

