The AI benchmarking group MLCommons unveiled new test results on Monday measuring how quickly top-of-the-line chips can run artificial intelligence models.
NVIDIA’s chip came out on top in tests conducted on a large language model, with Intel’s semiconductor a close second.
The new MLPerf benchmark is based on an AI model with 6 billion parameters that summarizes CNN news articles.
The benchmark simulates the “inference” part of AI data processing — the stage that comes after training, when a model puts its learning to work generating text.
NVIDIA’s top-performing inference submission was built around eight of its H100 chips; the company already dominates the market for training AI models but has yet to lock up the inference market in the same way.
Dave Salvator, Director of AI Compute Marketing at NVIDIA, stated, “What you’re seeing is that we’re delivering leadership performance in all of these areas, and I can confirm that we deliver that leadership performance in all workloads.”
Intel’s showing relied on its Gaudi2 chips, produced by the Habana unit it acquired in 2019. The Gaudi2 system was roughly 10% slower than NVIDIA’s.
Eitan Medina, Chief Operating Officer at Habana, commented, “We’re very proud of the inference results, where we show the price performance advantage of the Gaudi2 system.”
Intel says its system costs less than NVIDIA’s, priced roughly on par with the previous generation of NVIDIA systems; however, it declined to disclose the exact chip price.
NVIDIA also declined to discuss its chip’s price, but said on Friday that it plans to release a software upgrade soon that will double its system’s performance on the MLPerf benchmark.