MLPerf 4.0 training results show AI performance gains of up to 80%

Jun 12, 2024 | Technology


Innovation in machine learning and AI training continues to accelerate, even as more complex generative AI workloads come online.

Today MLCommons released the MLPerf 4.0 training benchmark results, once again showing record levels of performance. MLPerf Training is a vendor-neutral standard that enjoys broad industry participation, measuring the performance of full AI training systems across a range of workloads. Version 4.0 includes over 205 results from 17 organizations and is the first MLPerf training release since MLPerf 3.1 in November 2023.

The MLPerf 4.0 training benchmarks include results for image generation with Stable Diffusion and large language model (LLM) training for GPT-3. They also include a number of first-time results, among them a new LoRA benchmark that fine-tunes the Llama 2 70B language model on document summarization using a parameter-efficient approach (sketched below).
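For readers unfamiliar with the technique, LoRA (low-rank adaptation) freezes the base model's weights and trains only small low-rank adapter matrices injected into selected layers. A minimal sketch using the Hugging Face PEFT library follows; the checkpoint name and hyperparameters here are illustrative assumptions, not the benchmark's actual reference configuration.

```python
# Minimal LoRA fine-tuning sketch (illustrative; not the MLPerf reference
# implementation). Assumes the Hugging Face transformers and peft libraries.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder checkpoint; the gated Llama 2 70B weights require access approval.
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-70b-hf")

config = LoraConfig(
    r=16,                                 # rank of the low-rank adapter matrices
    lora_alpha=32,                        # adapter scaling factor
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of total parameters
```

Because only the adapters are trained, the benchmark can exercise a 70B-parameter model with a fraction of the memory and compute that a full fine-tune would require.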

As is often the case with MLPerf results, the gains are significant even compared with results from just six months ago.


“Even if you look at relative to the last cycle, some of our benchmarks have gotten nearly 2x better performance, in particular Stable Diffusion,” MLCommons founder and executive director David Kanter said in a press briefing. “So that’s pretty impressive in six months.”

The actual gain for Stable Diffusion training was 1.8x versus November 2023, while GPT-3 training was up to 1.2x faster.

AI training performance isn’t just about hardware

There are many factors that go into training an AI model.

While hardware is important, so too are the software stack and the network that connects clusters together.
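In data-parallel training, every optimizer step synchronizes gradients across the interconnect, so the communication software and network fabric sit directly on the critical path. A minimal, hedged PyTorch sketch (not from the article; LOCAL_RANK is supplied by the standard torchrun launcher) illustrates where these layers enter:

```python
# Minimal multi-GPU data-parallel setup (illustrative sketch only).
# Assumes PyTorch with CUDA GPUs and the NCCL communication backend.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")  # NCCL rides on the cluster interconnect
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(1024, 1024).cuda()  # stand-in for a real model
ddp_model = DDP(model, device_ids=[local_rank])
# Each backward pass all-reduces gradients across ranks, so network bandwidth
# and compute/communication overlap in software directly affect training speed.
```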

“Particularly for AI training, we have access to many different lea …
