NVIDIA plans to launch the B100 and B200 GPUs for CSP customers in the second half of 2024, with a scaled-down B200A version for enterprise clients, and the Blackwell series taking the lead in high-end GPU shipments by 2025.
- NVIDIA will prioritize the B100 and B200 for CSP customers with higher demand, as production capacity for CoWoS-L packaging remains tight.
- The B200A, a scaled-down version of the B200 aimed at enterprise clients, targets edge AI applications with CoWoS-S packaging and a lower TDP.
- The Blackwell series is predicted to drive the majority of NVIDIA's high-end GPU shipments, with an impressive annual growth rate of 55%.
In the world of tech, rumors can spread like wildfire. And recently, there have been whispers about NVIDIA canceling the B100 in favor of the B200A. But fear not, because according to TrendForce, NVIDIA is still planning to launch both the B100 and B200 in the second half of 2024, with their sights set on CSP customers.
But what about the B200A? Well, it seems that NVIDIA has something special in store for their enterprise clients. They’re planning a scaled-down version of the B200, specifically designed for edge AI applications. This move shows NVIDIA’s commitment to meeting the diverse needs of their customers.
According to TrendForce, NVIDIA will be prioritizing the B100 and B200 for CSP customers with higher demand. This is due to the tight production capacity of CoWoS-L, a packaging technology used in these GPUs. Shipments are expected to start rolling out after the third quarter of 2024.
But wait, there’s more! NVIDIA is also planning the B200A for other enterprise clients. This version will utilize CoWoS-S packaging technology and is expected to have a lower thermal design power (TDP) than the B200, allowing GB rack configurations to use air-cooling solutions and helping mitigate delays caused by the complexities of liquid-cooling designs in 2025. The B200A will come equipped with four HBM3E 12-Hi memory stacks, offering a total capacity of 144 GB. OEMs can expect to receive these chips after the first half of 2025, ensuring a smooth transition and broader market adoption.
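For context, the reported 144 GB figure lines up with standard HBM3E 12-Hi stacks built from 24 Gb (3 GB) DRAM dies. Here is a minimal back-of-the-envelope sketch of that arithmetic; the per-die capacity is an assumption based on typical HBM3E parts, not a figure from the report:

```python
# Back-of-the-envelope check of the B200A's reported 144 GB HBM capacity.
# Assumption: each HBM3E 12-Hi stack is built from 24 Gb (3 GB) DRAM dies,
# which is typical for current HBM3E parts; the report only states 4 stacks / 144 GB.
DIE_CAPACITY_GB = 3      # 24 Gb per DRAM die
DIES_PER_STACK = 12      # "12-Hi" = 12 DRAM dies stacked per HBM module
STACKS = 4               # HBM3E stacks on the B200A, per TrendForce

per_stack_gb = DIE_CAPACITY_GB * DIES_PER_STACK   # 36 GB per stack
total_gb = per_stack_gb * STACKS                  # 144 GB total

print(f"{STACKS} stacks x {per_stack_gb} GB = {total_gb} GB")
```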
Now, let’s talk about the Blackwell series. TrendForce predicts that this will be the driving force behind NVIDIA’s high-end GPU shipments in the coming years. In 2024, the Hopper platform will take the lead, with models like the H100 and H200 being shipped to North American CSPs and OEMs. Chinese customers, on the other hand, will primarily receive AI servers equipped with the H20.
But by 2025, Blackwell will take center stage. The high-performance B200 and GB200 rack will meet the demands of CSPs and OEMs for high-end AI servers. The B100, which serves as a transitional product with lower power consumption, will gradually be replaced by the B200, B200A, and GB200 rack after fulfilling existing CSP orders.
TrendForce estimates that by 2025, the Blackwell platform will account for over 80% of NVIDIA’s high-end GPU shipments. This will drive the annual growth rate of NVIDIA’s high-end GPU series shipments to an impressive 55%.
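To make those percentages concrete, here is a small illustrative sketch; the 2024 baseline below is a made-up placeholder rather than a TrendForce figure, and only the 55% growth rate and roughly 80% Blackwell share come from the report:

```python
# Illustrative only: shows how the projected 55% growth and >80% Blackwell share
# would translate into unit counts. The 2024 baseline is a hypothetical
# placeholder, not a published shipment figure.
shipments_2024 = 1_000_000          # hypothetical 2024 high-end GPU shipments
growth_rate = 0.55                  # TrendForce's projected annual growth
blackwell_share_2025 = 0.80         # Blackwell's projected share of 2025 shipments

shipments_2025 = shipments_2024 * (1 + growth_rate)
blackwell_2025 = shipments_2025 * blackwell_share_2025

print(f"2025 shipments: {shipments_2025:,.0f}")
print(f"of which Blackwell: {blackwell_2025:,.0f}")
```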
So, there you have it. NVIDIA is staying true to their roadmap and has some exciting things in store for their customers. With the B100, B200, and B200A on the horizon, as well as the Blackwell series taking the lead, NVIDIA is set to make a big splash in the world of high-end GPUs. Stay tuned for more updates as we approach the launch dates.
Background Information
About NVIDIA:
NVIDIA has firmly established itself as a leader in the realm of client computing, continuously pushing the boundaries of innovation in graphics and AI technologies. With a deep commitment to enhancing user experiences, NVIDIA's client computing business focuses on delivering solutions that power everything from gaming and creative workloads to enterprise applications. Known for its GeForce graphics cards, the company has redefined high-performance gaming, setting industry standards for realistic visuals, fluid frame rates, and immersive experiences. Complementing its gaming expertise, NVIDIA's Quadro and NVIDIA RTX graphics cards cater to professionals in design, content creation, and scientific fields, enabling real-time ray tracing and AI-driven workflows that elevate productivity and creativity. By seamlessly integrating graphics, AI, and software, NVIDIA continues to shape the landscape of client computing, fostering innovation and immersive interactions in a rapidly evolving digital world.
Technology Explained
Blackwell: Blackwell is an AI computing architecture designed to supercharge tasks like training large language models. These powerful GPUs boast features like a next-gen Transformer Engine and support for lower-precision calculations, enabling them to handle complex AI workloads significantly faster and more efficiently than before. While aimed at data centers, the innovations within Blackwell are expected to influence consumer graphics cards as well.
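As a rough illustration of the precision trade-off behind those lower-precision formats, the sketch below uses float16 in NumPy purely as a stand-in; Blackwell's Transformer Engine targets even narrower formats (FP8/FP4) that NumPy does not expose natively:

```python
import numpy as np

# Illustration of the precision/storage trade-off behind low-precision AI math.
# float16 is used as a stand-in here; Blackwell targets narrower formats (FP8/FP4).
weights = np.random.randn(1024).astype(np.float32)

low_precision = weights.astype(np.float16)        # halves the storage per value
error = np.abs(weights - low_precision.astype(np.float32))

print(f"float32 size: {weights.nbytes} bytes, float16 size: {low_precision.nbytes} bytes")
print(f"max rounding error: {error.max():.2e}")
```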
CoWoS: CoWoS, or Chip-on-Wafer-on-Substrate, is a recent advancement in chip packaging that allows for more powerful processors in a compact size. This technology stacks multiple chips on a silicon interposer, enabling denser connections and improved performance. Developed for high-performance computing, CoWoS promises faster processing, lower power consumption, and the ability to pack more processing power into smaller devices.
GPU: GPU stands for Graphics Processing Unit, a specialized processor designed to handle graphics-intensive tasks. In the computer industry it is used to render images, videos, and 3D graphics, powering gaming consoles, PCs, and mobile devices to provide smooth, immersive experiences. GPUs are also used in the medical field to create 3D models of organs and tissues, in the automotive industry to build virtual prototypes of cars, and in artificial intelligence to process large amounts of data and train complex models. They are becoming increasingly important across the computer industry because they can process large amounts of data quickly and efficiently.
HBM3E: HBM3E is the latest generation of high-bandwidth memory (HBM), a type of stacked DRAM designed for artificial intelligence (AI) and high-performance computing applications. HBM3E offers faster data transfer rates, higher density, and lower power consumption than previous HBM versions. It is produced by memory makers including SK Hynix, Samsung, and Micron, with mass production beginning in 2024. HBM3E can achieve a bandwidth of around 1.15 TB/s per stack and a capacity of up to 36 GB per 12-Hi stack. HBM3E is suited to AI systems that require large amounts of data processing, such as deep learning, machine learning, and computer vision.
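The per-stack bandwidth figure follows from the HBM interface width and per-pin data rate; a minimal sketch, assuming the 1024-bit interface standard for HBM and the roughly 9 Gb/s per-pin rate quoted for early HBM3E parts:

```python
# How the ~1.15 TB/s per-stack figure for HBM3E is derived.
# Assumptions: 1024-bit interface per stack (standard for HBM) and a
# 9.0 Gb/s per-pin data rate, as quoted for early HBM3E parts.
INTERFACE_WIDTH_BITS = 1024
PIN_RATE_GBPS = 9.0

bandwidth_gb_per_s = INTERFACE_WIDTH_BITS * PIN_RATE_GBPS / 8  # bits -> bytes
print(f"Per-stack bandwidth: {bandwidth_gb_per_s:.0f} GB/s (~{bandwidth_gb_per_s / 1000:.2f} TB/s)")
```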