Nvidia’s H200 is the new must-have GPU for AI

Nvidia is introducing a new high-end chip for AI work, the HGX H200. The new GPU upgrades the wildly in-demand H100 with 1.4x more memory bandwidth and 1.8x more memory capacity, improving its ability to handle intensive generative AI work.
The big question is whether companies will be able to get their hands on the new chips or whether supply will be as constrained as it has been for the H100 – and Nvidia doesn’t really have an answer to that. The first H200 chips will be released in the second quarter of 2024, and Nvidia says it is working with “global system manufacturers and cloud service providers” to make them available.
The H200 appears to be much the same as the H100 apart from its memory. But the changes to its memory make for a significant upgrade. The new GPU is the first to use a new, faster memory specification called HBM3e. That brings the GPU’s memory bandwidth to 4.8 terabytes per second, up from 3.35 terabytes per second on the H100, and its total memory capacity to 141 GB, up from its predecessor’s 80 GB.
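For context, those headline multipliers track with the raw specs. A minimal back-of-the-envelope check in Python, using only the figures quoted above (this is not an official Nvidia calculation):

```python
# Sanity check of the H200-vs-H100 multipliers from the specs above.

h100_bandwidth_tb_s = 3.35   # H100 memory bandwidth, TB/s
h200_bandwidth_tb_s = 4.8    # H200 memory bandwidth, TB/s
h100_capacity_gb = 80        # H100 memory capacity, GB
h200_capacity_gb = 141       # H200 memory capacity, GB

print(f"Bandwidth: {h200_bandwidth_tb_s / h100_bandwidth_tb_s:.2f}x")  # ~1.43x
print(f"Capacity:  {h200_capacity_gb / h100_capacity_gb:.2f}x")        # ~1.76x
```

The ratios come out to roughly 1.43x and 1.76x, which round to the 1.4x and 1.8x figures Nvidia is citing.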
“Integrating faster and larger HBM memory helps accelerate performance in computationally intensive tasks, including generative AI models and [high-performance computing] applications, while optimizing GPU utilization and efficiency,” Ian Buck, Nvidia’s vice president of high-performance computing products, said in a video presentation this morning.
The H200 is also designed to be compatible with the same systems that already support the H100s. Nvidia says cloud providers won’t need to make any changes when they add H200s to the mix. The cloud arms of Amazon, Google, Microsoft and Oracle will be among the first to offer the new GPUs next year.
Once launched, the new chips will certainly be expensive. Nvidia doesn’t specify how much they cost, but CNBC reports that previous-generation H100s are estimated to sell for between $25,000 and $40,000 each, and that thousands of them are needed to operate at the highest levels. The Verge has contacted Nvidia for more details on pricing and availability of the new chips.
Nvidia’s announcement comes as AI companies remain desperate for its H100 chips. Nvidia’s chips are considered the best option for efficiently processing the enormous amounts of data needed to train and operate generative image tools and large language models. The chips are valuable enough that companies are using them as collateral for loans. Who has H100s is the subject of Silicon Valley gossip, and startups have worked together just to share any access to them.
Next year is shaping up to be a better time for GPU buyers. In August, the Financial Times reported that Nvidia planned to triple its production of H100s in 2024. The goal was to produce up to 2 million next year, up from around 500,000 in 2023. But with generative AI just as explosive now as it was at the start of the year, demand may only increase – and that’s before Nvidia adds a new, even more capable chip to the mix.