to help them achieve breakthroughs in natural language processing, recommender systems, data analytics, and much more. Available on-premises and through a wide variety of access and deployment options, DGX H100 delivers the performance needed for enterprises to solve the biggest challenges with AI.

DGX H100 is trusted by companies large and small to fuel their innovation and optimize their business. As the fourth generation of the world’s first purpose-built AI infrastructure, DGX H100 is designed to be the centerpiece of an enterprise AI center of excellence. It’s a fully optimized hardware and software platform that includes full support for the new range of NVIDIA AI software solutions, a rich ecosystem of third-party support, and access to expert advice from NVIDIA professional services. DGX H100 offers proven reliability, with the DGX platform being used by thousands of customers around the world spanning nearly every industry.

Networking
  2x dual-port QSFP112 NVIDIA ConnectX-7 VPI
  > Up to 400Gb/s InfiniBand/Ethernet

Management network
  10Gb/s onboard NIC with RJ45
  100Gb/s Ethernet NIC
  Host baseboard management controller (BMC) with RJ45

Software
  NVIDIA Base Command – Orchestration, scheduling, and cluster management
  DGX OS / Ubuntu / Red Hat Enterprise Linux / Rocky – Operating System

Support
  Comes with 3-year business-standard hardware and software support

System weight
  287.6 lbs (130.45 kg)

Packaged system weight
  376 lbs (170.45 kg)

System dimensions
  Height: 14.0 in (356 mm)
  Width: 19.0 in (482.2 mm)
  Length: 35.3 in (897.1 mm)

Operating temperature range
  5–30°C (41–86°F)

Break Through the Barriers to AI at Scale

As the world’s first system with the NVIDIA H100 Tensor Core