Nvidia claims its H100 data center GPU is 4.5 times faster than previous generation

Nvidia has issued a press release announcing that its next-generation Hopper-architecture data center GPU, the H100 (H stands for Hopper), outperforms the previous-generation, Ampere-based A100 by 4.5 times on the MLPerf industry-standard AI benchmark. Nvidia's buzzword-heavy press release states that "Hopper's performance on the popular BERT model for natural language processing is due in part to its Transformer Engine," and notes that BERT is one of the largest and most performance-demanding of the MLPerf AI models. The H100 is still in development; it will become available later this year and will replace the A100 as the company's flagship data center GPU.

This article is reprinted from: https://www.solidot.org/story?sid=72733
This site only republishes the article; the copyright belongs to the original author.