The NVIDIA GB200 Grace Blackwell Superchip combines two Blackwell Tensor Core GPUs with a Grace CPU. Thirty-six GB200 superchips form the NVIDIA GB200 NVL72, a rack-scale system that acts as a single GPU and delivers 30X faster real-time trillion-parameter LLM inference than the previous generation.
The industry's fastest AI infrastructure, designed to maximize performance for GenAI training and inference.
Key features:
Supercharged training and real-time inference for more sustainable computing, with substantial performance gains over the NVIDIA Hopper generation:
- Training on LLMs vs. H100
- LLM inference vs. H100
- Energy efficiency vs. H100
With Genesis Cloud as a partner, you’re using state-of-the-art technology, and we stand by your side with direct access to our expert solution architects, infrastructure engineers, and machine learning engineers.