HPE and NVIDIA collaborate to deliver the NVIDIA GB200 NVL72, a rack-scale system with advanced liquid cooling technology, to empower AI service providers and enterprises with scalability, performance, and fast deployment capabilities.
- Empowers service providers and large enterprises to swiftly deploy expansive and intricate AI clusters
- Maximizes efficiency and performance with advanced direct liquid cooling technologies
- Offers unmatched support and services, including on-site engineering resources, performance and benchmarking engagements, and sustainability services
HPE Rolls Out Its First NVIDIA Blackwell Solution
Today marks an exciting milestone for Hewlett Packard Enterprise (HPE) as it announces the shipment of its first NVIDIA Blackwell family-based solution, the NVIDIA GB200 NVL72. This rack-scale system is engineered to empower service providers and large enterprises to swiftly deploy expansive and intricate AI clusters, all while leveraging advanced direct liquid cooling technologies. The goal? To maximize efficiency and performance in a world where speed is everything.
Why This Matters for AI Service Providers
Trish Damkroger, HPE’s senior vice president and general manager of HPC & AI Infrastructure Solutions, puts it succinctly: “AI service providers and large enterprise model builders are under tremendous pressure to offer scalability, extreme performance, and fast time-to-deployment.” With HPE’s track record of building some of the fastest systems globally, they’re not just talking the talk. They’re delivering solutions that promise lower training costs per token and top-tier performance, backed by unmatched services expertise.
A Deep Dive into the NVIDIA GB200 NVL72
So, what makes the NVIDIA GB200 NVL72 stand out? For starters, it boasts a shared-memory, low-latency architecture that’s tailored for extremely large AI models—think over a trillion parameters—all residing in a single memory space. This system integrates NVIDIA CPUs, GPUs, compute and switch trays, networking, and software seamlessly, making it a powerhouse for heavily parallelizable workloads like generative AI (GenAI) model training and inferencing.
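To put “over a trillion parameters in a single memory space” into perspective, here is a rough, illustrative calculation of how much memory the weights of such a model occupy at different numeric precisions. The parameter count and byte-per-parameter figures are general assumptions, not GB200 NVL72 specifications, and real workloads also need room for optimizer state, activations, and KV caches.

```python
# Rough memory math for a trillion-parameter model (illustrative assumptions,
# not GB200 NVL72 specifications).

PARAMS = 1_000_000_000_000          # assumed model size: one trillion parameters

BYTES_PER_PARAM = {
    "FP32": 4,        # full precision
    "BF16/FP16": 2,   # half precision
    "FP8": 1,         # low precision increasingly used for training and inference
}

for fmt, nbytes in BYTES_PER_PARAM.items():
    weights_tb = PARAMS * nbytes / 1e12   # terabytes for the weights alone
    print(f"{fmt}: ~{weights_tb:.0f} TB of weights")
# Prints roughly: FP32 ~4 TB, BF16/FP16 ~2 TB, FP8 ~1 TB (weights only)
```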
Bob Pette, vice president of enterprise platforms at NVIDIA, emphasizes the necessity of liquid cooling technology. As compute requirements skyrocket, the collaboration between HPE and NVIDIA becomes even more critical. The GB200 NVL72 is designed to help enterprises build, deploy, and scale large AI clusters efficiently.
HPE’s Liquid Cooling Expertise
With over five decades of experience in liquid cooling, HPE is uniquely equipped to tackle the escalating power demands and data center density challenges of today. This expertise has led to HPE delivering eight of the top fifteen supercomputers on the Green500 list, which ranks the most energy-efficient supercomputers globally. Notably, they’ve built seven of the world’s top ten fastest supercomputers, establishing themselves as a leader in direct liquid cooling technology.
Impressive Specs of the NVIDIA GB200 NVL72
Let’s break down what’s under the hood of this impressive machine:
- 72 NVIDIA Blackwell GPUs and 36 NVIDIA Grace CPUs connected through high-speed NVIDIA NVLink
- Up to 13.5 TB of total HBM3E memory with a staggering 576 TB/sec bandwidth
- State-of-the-art HPE direct liquid cooling technology
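As a quick sanity check on those headline numbers, the short sketch below simply divides the rack-level totals quoted above across the 72 GPUs. It is plain arithmetic on the figures in this article, not an official per-GPU specification.

```python
# Divide the quoted rack-level totals across the 72 Blackwell GPUs.
# Plain arithmetic on the numbers in the article, not official per-GPU specs.

NUM_GPUS = 72
TOTAL_HBM3E_TB = 13.5        # total HBM3E memory quoted for the rack
TOTAL_BANDWIDTH_TBS = 576    # total HBM3E bandwidth quoted, TB/sec

memory_per_gpu_gb = TOTAL_HBM3E_TB * 1000 / NUM_GPUS
bandwidth_per_gpu_tbs = TOTAL_BANDWIDTH_TBS / NUM_GPUS

print(f"HBM3E per GPU:     ~{memory_per_gpu_gb:.0f} GB")       # ~188 GB
print(f"Bandwidth per GPU: ~{bandwidth_per_gpu_tbs:.0f} TB/s")  # ~8 TB/s
```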
Unmatched Support and Services
HPE doesn’t just stop at delivering hardware; they provide comprehensive support tailored to the needs of their customers. With a proven ability to manage massive, custom AI clusters, HPE offers superior serviceability that includes on-site support, customized services, and sustainability initiatives. Their HPC & AI Custom Support Services are designed to adapt to various customer requirements.
Some of the standout services include:
- Onsite engineering resources: Highly trained engineers work alongside your IT team to ensure optimal system performance.
- Performance and benchmarking engagements: HPE’s expert team fine-tunes solutions throughout the system’s lifespan.
- Sustainability services: Energy and emissions reporting, sustainability workshops, and resource monitoring help mitigate environmental impact.
The Future of AI Computing
The newly shipped NVIDIA GB200 NVL72 by HPE is just one piece of a much larger puzzle in the realm of high-performance computing and supercomputing systems. It’s designed to address a wide array of use cases, from generative AI to scientific discovery and other compute-intensive workloads.
Curious to learn more about what HPE has to offer? Dive into their NVIDIA AI Computing portfolio and discover a world of solutions tailored to meet the demands of today’s tech landscape. The future of AI computing is here, and it’s looking brighter than ever.

Background Information
About NVIDIA:
NVIDIA has firmly established itself as a leader in the realm of client computing, continuously pushing the boundaries of innovation in graphics and AI technologies. With a deep commitment to enhancing user experiences, NVIDIA's client computing business focuses on delivering solutions that power everything from gaming and creative workloads to enterprise applications. Known for its GeForce graphics cards, the company has redefined high-performance gaming, setting industry standards for realistic visuals, fluid frame rates, and immersive experiences. Complementing its gaming expertise, NVIDIA's Quadro and NVIDIA RTX graphics cards cater to professionals in design, content creation, and scientific fields, enabling real-time ray tracing and AI-driven workflows that elevate productivity and creativity to unprecedented heights. By seamlessly integrating graphics, AI, and software, NVIDIA continues to shape the landscape of client computing, fostering innovation and immersive interactions in a rapidly evolving digital world.
Technology Explained
Blackwell: Blackwell is an AI computing architecture from NVIDIA designed to supercharge tasks like training large language models. These powerful GPUs boast features such as a next-generation Transformer Engine and support for lower-precision calculations, enabling them to handle complex AI workloads significantly faster and more efficiently than before. While aimed at data centers, the innovations within Blackwell are expected to influence consumer graphics cards as well.
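As a generic illustration of the lower-precision trade-off described above (and not Blackwell-specific code), this small NumPy sketch compares the memory footprint and rounding error of the same matrix stored in 32-bit and 16-bit floating point.

```python
import numpy as np

# Generic illustration of low-precision arithmetic: same data, half the
# memory, at the cost of some rounding error. Not Blackwell-specific code.

rng = np.random.default_rng(0)
x32 = rng.standard_normal((1024, 1024), dtype=np.float32)
x16 = x32.astype(np.float16)

print(f"float32 size: {x32.nbytes / 1e6:.1f} MB")   # ~4.2 MB
print(f"float16 size: {x16.nbytes / 1e6:.1f} MB")   # ~2.1 MB

# Rounding error introduced by the narrower format
err = np.abs(x32 - x16.astype(np.float32)).max()
print(f"max rounding error: {err:.2e}")
```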
HBM3E: HBM3E is the latest generation of high-bandwidth memory (HBM), a type of stacked DRAM designed for artificial intelligence (AI) applications. It offers faster data transfer rates, higher density, and lower power consumption than previous HBM versions. Developed by SK Hynix, a South Korean chipmaker, HBM3E entered mass production in 2024 and can reach a speed of 1.15 TB/s and a capacity of 64 GB per stack, making it well suited to AI systems that process large amounts of data, such as deep learning, machine learning, and computer vision workloads.
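Taking the per-stack figures quoted above (1.15 TB/s and 64 GB), the tiny calculation below estimates how long a full read of one stack would take. It is illustrative arithmetic on those published numbers, not a benchmark.

```python
# Illustrative arithmetic on the HBM3E figures quoted above; not a benchmark.

STACK_CAPACITY_GB = 64        # quoted capacity per HBM3E stack
STACK_BANDWIDTH_TBS = 1.15    # quoted bandwidth per stack, TB/sec

sweep_s = (STACK_CAPACITY_GB / 1000) / STACK_BANDWIDTH_TBS
print(f"Reading a full {STACK_CAPACITY_GB} GB stack takes ~{sweep_s * 1000:.0f} ms")
# Roughly 56 ms for one full sweep of a stack
```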
HPC: HPC, or High Performance Computing, is a type of technology that allows computers to perform complex calculations and process large amounts of data at incredibly high speeds. This is achieved through the use of specialized hardware and software, such as supercomputers and parallel processing techniques. In the computer industry, HPC has a wide range of applications, from weather forecasting and scientific research to financial modeling and artificial intelligence. It enables researchers and businesses to tackle complex problems and analyze vast amounts of data in a fraction of the time it would take with traditional computing methods. HPC has revolutionized the way we approach data analysis and has opened up new possibilities for innovation and discovery in various fields.
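To make the idea of parallel processing concrete, here is a minimal, generic Python sketch that splits a CPU-bound computation across worker processes. It is a toy example of the concept, not HPE or supercomputer software.

```python
from multiprocessing import Pool

# Toy illustration of parallel processing: split a CPU-bound job across
# worker processes. A generic example of the concept, not HPC-specific code.

def partial_sum(bounds):
    start, stop = bounds
    return sum(i * i for i in range(start, stop))

if __name__ == "__main__":
    # Four chunks covering the range 0..10,000,000
    chunks = [(i * 2_500_000, (i + 1) * 2_500_000) for i in range(4)]
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(f"Sum of squares below 10,000,000: {total}")
```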
Latency: Technology latency is the time it takes for a computer system to respond to a request. It is an important factor in the performance of computer systems, as it affects the speed and efficiency of data processing. In the computer industry, latency is a major factor in the performance of computer networks, storage systems, and other computer systems. Low latency is essential for applications that require fast response times, such as online gaming, streaming media, and real-time data processing. High latency can cause delays in data processing, resulting in slow response times and poor performance. To reduce latency, computer systems use various techniques such as caching, load balancing, and parallel processing. By reducing latency, computer systems can provide faster response times and improved performance.
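As a small, generic illustration of the caching technique mentioned above, the sketch below memoizes a slow lookup so that repeat requests are answered with far lower latency. It is a toy example, not production networking code.

```python
import time
from functools import lru_cache

# Toy illustration of reducing latency with caching: the first call pays the
# full cost, repeat calls are served from memory almost instantly.

@lru_cache(maxsize=None)
def slow_lookup(key: str) -> str:
    time.sleep(0.2)          # simulate a slow disk or network round trip
    return key.upper()

for attempt in range(2):
    start = time.perf_counter()
    slow_lookup("gb200")
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"Attempt {attempt + 1}: {elapsed_ms:.1f} ms")
# Typical output: ~200 ms on the first call, well under 1 ms on the second
```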
Liquid Cooling: Liquid cooling is a technology used to cool computer components that generate a lot of heat, such as processors and graphics cards. It works by circulating a liquid coolant, such as water or a specialized fluid, through a series of pipes and radiators; the liquid absorbs heat from the components and then dissipates it into the air. The technology is becoming increasingly popular in the computer industry because it cools more efficiently than traditional air cooling. Liquid cooling also supports overclocking, allowing components to run faster than their rated speeds, which has made it popular with gamers looking to get the most out of their hardware.
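To show why liquid is such an effective heat carrier, the sketch below applies the standard Q = m * c * ΔT heat-transfer relation to water. The flow rate and temperature rise are illustrative assumptions, not HPE cooling specifications.

```python
# Standard heat-transfer relation Q = m_dot * c_p * delta_T applied to water.
# Flow rate and temperature rise are illustrative assumptions, not HPE specs.

SPECIFIC_HEAT_WATER = 4186   # J/(kg*K), specific heat capacity of water
flow_rate_kg_s = 1.0         # assumed coolant flow of about 1 liter (~1 kg) per second
delta_t_k = 10.0             # assumed 10 K rise from loop inlet to outlet

heat_removed_kw = SPECIFIC_HEAT_WATER * flow_rate_kg_s * delta_t_k / 1000
print(f"~{heat_removed_kw:.0f} kW of heat carried away per loop")  # ~42 kW
```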