NVIDIA introduces Blackwell Architecture and B200/B100 Accelerators: Revolutionizing Data Efficiency


March 19, 2024 by our News Team

NVIDIA is set to launch its next-generation AI accelerator, Blackwell, which aims to enhance performance, flexibility, and transistor count to meet growing AI demands.

  • Next-generation architecture with improved performance, flexibility, and transistor count
  • Named after mathematician and statistician Dr. David Harold Blackwell
  • Utilizes chiplet design for increased transistor count and improved performance


NVIDIA, the leading player in the generative AI accelerator market, is gearing up to launch its next-generation accelerator architecture, Blackwell. Building on the success of its H100/H200/GH200 series, Blackwell aims to further enhance performance, flexibility, and transistor count to meet the growing demands of AI applications.

Named after Dr. David Harold Blackwell, a mathematician and statistician, Blackwell represents NVIDIA’s commitment to pushing the boundaries of architectural design. The company’s strategy of identifying industry trends and customer needs, investing in high-performance hardware, and optimizing every aspect of chip design has proven successful with previous architectures like Hopper and Ampere.

While specific details about Blackwell are still under wraps, NVIDIA has provided some insights into its key features. The Blackwell GPU will be a large chip, featuring two reticle-sized dies on a single package. This chiplet design allows for increased transistor count and improved performance. Rather than moving to a newer node such as TSMC's 3nm-class, NVIDIA aims to achieve its efficiency gains through architectural enhancements and by scaling out.

Memory capacity has been a limiting factor for AI accelerators, and Blackwell addresses this by incorporating four stacks of HBM3E memory per die — eight stacks in total — for a combined memory bus width of 8,192 bits. With 192GB of HBM3E memory and an aggregate memory bandwidth of 8TB/second, Blackwell offers significant improvements over its predecessors.
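The quoted figures are easy to sanity-check. A quick back-of-the-envelope sketch (assuming the standard 1,024-bit HBM interface per stack; the per-pin data rate is inferred here, not confirmed by NVIDIA):

```python
# Back-of-the-envelope check of the quoted Blackwell memory figures.
# Assumption: each HBM stack exposes a 1024-bit interface (the HBM norm).

STACKS = 8             # four HBM3E stacks per die, two dies
BITS_PER_STACK = 1024  # assumed interface width per stack
CAPACITY_GB = 192      # quoted total capacity
AGG_BW_TBPS = 8.0      # quoted aggregate bandwidth, TB/s

bus_width = STACKS * BITS_PER_STACK
print(bus_width)                 # 8192 bits, matching the quoted figure

per_stack_gb = CAPACITY_GB / STACKS
print(per_stack_gb)              # 24.0 GB per stack

# Implied per-pin data rate, in Gbit/s:
pin_rate_gbps = AGG_BW_TBPS * 1e12 * 8 / bus_width / 1e9
print(round(pin_rate_gbps, 1))   # ~7.8 Gbit/s per pin
```

The implied ~7.8 Gbit/s per pin sits comfortably within what HBM3E-class memory is specified to deliver.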

In terms of performance, NVIDIA is targeting a 4x increase in training performance and a massive 30x increase in inference performance compared to the H100. The company also aims to achieve these gains while delivering 25x greater energy efficiency. To accomplish this, Blackwell will feature a second-generation Transformer Engine that supports lower precision formats like FP4 for inference and FP8 for training.
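To give a feel for just how coarse FP4 is, here is a small illustrative sketch (not NVIDIA's implementation) that rounds values to the nearest representable number of the commonly used E2M1 layout, whose magnitudes are {0, 0.5, 1, 1.5, 2, 3, 4, 6}:

```python
# Illustrative only: round-to-nearest quantization into FP4 (E2M1).
# The representable magnitudes below follow the E2M1 layout
# (1 sign bit, 2 exponent bits, 1 mantissa bit).

FP4_MAGNITUDES = [0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0]
FP4_VALUES = sorted({s * m for m in FP4_MAGNITUDES for s in (1.0, -1.0)})

def quantize_fp4(x: float) -> float:
    """Round x to the nearest FP4 E2M1 value (saturating at +/-6)."""
    return min(FP4_VALUES, key=lambda v: abs(v - x))

for x in (0.3, 1.2, 2.4, 5.1, 7.5):
    print(x, "->", quantize_fp4(x))
# 0.3 -> 0.5, 1.2 -> 1.0, 2.4 -> 2.0, 5.1 -> 6.0, 7.5 -> 6.0
```

With only 16 representable values, FP4 is viable for inference mainly because techniques such as per-block scaling keep tensor values within this narrow range — which is exactly the kind of bookkeeping a hardware Transformer Engine automates.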

While specific details about FP64 tensor performance and TDP are yet to be revealed, NVIDIA’s focus on low precision AI suggests a strong emphasis on inference workloads. The company is expected to unveil more information during its keynote address.

Overall, Blackwell represents NVIDIA’s continued commitment to innovation in the AI accelerator market. By leveraging its expertise in architectural design and pushing the boundaries of performance, NVIDIA aims to maintain its position as a leader in the industry.

About Our Team

Our team comprises industry insiders with extensive experience in computers, semiconductors, games, and consumer electronics. With decades of collective experience, we’re committed to delivering timely, accurate, and engaging news content to our readers.

Background Information


About NVIDIA: NVIDIA has firmly established itself as a leader in the realm of client computing, continuously pushing the boundaries of innovation in graphics and AI technologies. With a deep commitment to enhancing user experiences, NVIDIA's client computing business focuses on delivering solutions that power everything from gaming and creative workloads to enterprise applications. Known for its GeForce graphics cards, the company has redefined high-performance gaming, setting industry standards for realistic visuals, fluid frame rates, and immersive experiences. Complementing its gaming expertise, NVIDIA's Quadro and NVIDIA RTX graphics cards cater to professionals in design, content creation, and scientific fields, enabling real-time ray tracing and AI-driven workflows that elevate productivity and creativity. By seamlessly integrating graphics, AI, and software, NVIDIA continues to shape the landscape of client computing, fostering innovation and immersive interactions in a rapidly evolving digital world.


About TSMC: TSMC, or Taiwan Semiconductor Manufacturing Company, is a semiconductor foundry based in Taiwan. Established in 1987, TSMC is an important player in the global semiconductor industry, specializing in the manufacturing of semiconductor wafers for a wide range of clients, including technology companies and chip designers. The company is known for its advanced fabrication processes and plays a critical role in advancing semiconductor technology worldwide.


Technology Explained


chiplet: Chiplets are small, modular silicon dies that are combined within a single package to form a larger processor. Instead of fabricating one huge monolithic die, designers can mix and match chiplets — compute, memory, and I/O blocks — which improves manufacturing yields and lowers cost while still allowing very large aggregate transistor counts. Chiplet designs are used in high-end CPUs, GPUs, workstations, supercomputers, and AI accelerators, and they make it possible to scale a product beyond the size limits of a single die.


GPU: GPU stands for Graphics Processing Unit, a specialized processor designed for highly parallel workloads. GPUs were originally built to render images, video, and 3D graphics, and they power gaming consoles, PCs, and mobile devices. Their parallelism also makes them useful well beyond graphics: in medicine for 3D models of organs and tissues, in automotive design for virtual prototyping, and increasingly in artificial intelligence, where they process large amounts of data to train and run complex models quickly and efficiently.


HBM3E: HBM3E is the latest generation of high-bandwidth memory (HBM), a type of stacked DRAM designed for bandwidth-hungry workloads such as AI accelerators. HBM3E offers faster data-transfer rates, higher density, and better power efficiency than previous HBM versions. It is produced by memory makers including SK Hynix, Micron, and Samsung, with mass production ramping in 2024. A single HBM3E stack can deliver more than 1.15 TB/s of bandwidth and capacities of 24 GB or more. HBM3E is suited to AI systems that require large amounts of data processing, such as deep learning, machine learning, and computer vision.




