- The integration of the Grace ARM CPU in NVIDIA's Blackwell platform is expected to reshape the high-end GPU market.
- The adoption of HBM3e by both NVIDIA and AMD is expected to significantly enhance computational efficiency and system bandwidth in AI servers.
- Growing HBM capacities and the move to 12Hi stack configurations should deliver a significant performance uplift in upcoming GPUs.
NVIDIA's upcoming Blackwell platform is set to revolutionize the high-end GPU market with its integration of the Grace ARM CPU. The GB200 model, in particular, is generating considerable excitement in the supply chain, with projections suggesting it could account for a significant portion of NVIDIA's high-end GPU shipments by 2025.
However, the launch of products like the GB200 and B100 will face some challenges. The adoption of more complex wafer packaging technology and the optimization of AI server systems will require time-consuming validation and testing processes. As a result, significant production volumes of these products are not expected until late 2024 or early 2025.
The inclusion of the GB200, B100, and B200 in NVIDIA's B-series lineup will also drive demand for CoWoS capacity. TSMC, the world's leading contract chipmaker, will need to increase its total CoWoS capacity by nearly 150% year-over-year by the end of 2024 to meet this demand. By 2025, that capacity could nearly double, with NVIDIA accounting for more than half of it.
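To put those growth figures in context, here is a minimal sketch of the compounding arithmetic. The 2023 baseline wafer capacity is a purely hypothetical placeholder, since the article quotes only relative growth rates:

```python
# Hypothetical illustration of the CoWoS capacity growth described above.
# The 2023 baseline is an assumed placeholder, not a figure from the article or TSMC.
baseline_2023 = 15_000                      # assumed wafers per month at end of 2023

capacity_2024 = baseline_2023 * (1 + 1.50)  # "nearly 150%" year-over-year increase
capacity_2025 = capacity_2024 * 2           # capacity "could nearly double" by 2025
nvidia_share = capacity_2025 * 0.5          # NVIDIA said to account for more than half

print(f"End of 2024: ~{capacity_2024:,.0f} wafers/month")
print(f"2025:        ~{capacity_2025:,.0f} wafers/month")
print(f"NVIDIA share in 2025: more than ~{nvidia_share:,.0f} wafers/month")
```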
While NVIDIA remains a dominant player in the GPU market, other advanced packaging providers such as Amkor and Intel are focusing on different packaging technologies and primarily targeting orders for NVIDIA's H-series. Unless these providers secure additional orders beyond NVIDIA, their expansion plans will likely remain conservative.
Looking ahead, TrendForce predicts that HBM3e will become the mainstream memory technology in the GPU market by the second half of this year, with both NVIDIA and AMD expected to transition from HBM3 to HBM3e in their primary GPU products. This shift will enhance computational efficiency and system bandwidth in AI servers.
Additionally, HBM capacity will increase across both NVIDIA and AMD GPUs. The current 80 GB standard is expected to rise to between 192 GB and 288 GB by the end of 2024, while AMD's lineup, starting at 128 GB with the MI300A, is also expected to reach up to 288 GB.
Furthermore, GPUs equipped with HBM3e will evolve from 8Hi to 12Hi stack configurations. NVIDIA's B100 and GB200 currently use 8Hi HBM3e for a total of 192 GB, while the B200 planned for 2025 will move to 12Hi HBM3e, reaching 288 GB. AMD's MI350 and MI375 series are also expected to adopt the 12Hi configuration, offering the same capacity.
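Those capacity figures are consistent with eight HBM3e stacks per GPU and 3 GB (24 Gb) per DRAM die. A minimal back-of-the-envelope sketch, where the stack count and per-die capacity are assumptions based on typical HBM3e configurations rather than figures from the article:

```python
# Back-of-the-envelope HBM3e capacity check.
# Assumptions (not stated in the article): 8 HBM3e stacks per GPU,
# and 3 GB (24 Gb) per DRAM die in each stack.
GB_PER_DIE = 3        # 24 Gb HBM3e die
STACKS_PER_GPU = 8    # assumed stack count for B100/B200-class GPUs

def total_hbm_capacity_gb(dies_per_stack: int) -> int:
    """Total HBM capacity in GB for the assumed stack layout."""
    return STACKS_PER_GPU * dies_per_stack * GB_PER_DIE

print(total_hbm_capacity_gb(8))   # 8Hi stacks  -> 192 GB (B100 / GB200)
print(total_hbm_capacity_gb(12))  # 12Hi stacks -> 288 GB (planned B200)
```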
In summary, NVIDIA’s Blackwell platform and the adoption of HBM3e by both NVIDIA and AMD are set to reshape the high-end GPU market. These advancements in technology and memory capacity will significantly enhance AI server performance and computational efficiency in the coming years.
Background Information
About AMD:
AMD, a major player in the semiconductor industry, is known for its powerful processors and graphics solutions, and it has consistently pushed the boundaries of performance, efficiency, and user experience. With a customer-centric approach, the company has cultivated a reputation for delivering high-performance solutions that cater to the needs of gamers, professionals, and general users. AMD's Ryzen series of processors has redefined the landscape of desktop and laptop computing, offering impressive multi-core performance and competitive pricing that has challenged the dominance of its competitors. Complementing its processor expertise, AMD's Radeon graphics cards have earned accolades for their efficiency and exceptional graphical capabilities, making them a favored choice among gamers and content creators. The company's commitment to innovation continues to shape the client computing landscape, providing users with powerful tools to fuel their digital endeavors.
About ARM:
ARM, originally known as Acorn RISC Machine, is a British semiconductor and software design company that specializes in creating energy-efficient microprocessors, system-on-chip (SoC) designs, and related technologies. Founded in 1990, ARM has become an important player in the global semiconductor industry and is widely recognized for its contributions to mobile computing, embedded systems, and Internet of Things (IoT) devices. ARM's microprocessor designs are based on the Reduced Instruction Set Computing (RISC) architecture, which prioritizes simplicity and efficiency in instruction execution. This approach has enabled ARM to produce highly efficient, power-saving processors used in a vast array of devices, ranging from smartphones and tablets to IoT devices, smart TVs, and more. The company does not manufacture its own chips but licenses its processor designs and intellectual property to a wide range of manufacturers, including Qualcomm, Apple, Samsung, and NVIDIA, who then integrate ARM's technology into their own SoCs. This licensing model has contributed to ARM's widespread adoption and influence across various industries.
About Intel:
Intel Corporation, a global technology leader, is known for its semiconductor innovations that power computing and communication devices worldwide. As a pioneer in microprocessor technology, Intel has left an indelible mark on the evolution of computing with processors that drive everything from PCs to data centers and beyond. With a history of advancements, Intel's relentless pursuit of innovation continues to shape the digital landscape, offering solutions that empower businesses and individuals to achieve new levels of productivity and connectivity.
About NVIDIA:
NVIDIA has firmly established itself as a leader in the realm of client computing, continuously pushing the boundaries of innovation in graphics and AI technologies. With a deep commitment to enhancing user experiences, NVIDIA's client computing business focuses on delivering solutions that power everything from gaming and creative workloads to enterprise applications. Known for its GeForce graphics cards, the company has redefined high-performance gaming, setting industry standards for realistic visuals, fluid frame rates, and immersive experiences. Complementing its gaming expertise, NVIDIA's Quadro and NVIDIA RTX graphics cards cater to professionals in design, content creation, and scientific fields, enabling real-time ray tracing and AI-driven workflows that elevate productivity and creativity to unprecedented heights. By seamlessly integrating graphics, AI, and software, NVIDIA continues to shape the landscape of client computing, fostering innovation and immersive interactions in a rapidly evolving digital world.
About TSMC:
TSMC, or Taiwan Semiconductor Manufacturing Company, is a semiconductor foundry based in Taiwan. Established in 1987, TSMC is an important player in the global semiconductor industry, specializing in the manufacturing of semiconductor wafers for a wide range of clients, including technology companies and chip designers. The company is known for its advanced fabrication processes and plays a critical role in advancing semiconductor technology worldwide.
Technology Explained
Blackwell: Blackwell is an AI computing architecture designed to supercharge tasks like training large language models. These powerful GPUs boast features like a next-gen Transformer Engine and support for lower-precision calculations, enabling them to handle complex AI workloads significantly faster and more efficiently than before. While aimed at data centers, the innovations within Blackwell are expected to influence consumer graphics cards as well.
CoWoS: CoWoS, or Chip-on-Wafer-on-Substrate, is a recent advancement in chip packaging that allows for more powerful processors in a compact size. This technology stacks multiple chips on a silicon interposer, enabling denser connections and improved performance. Developed for high-performance computing, CoWoS promises faster processing, lower power consumption, and the ability to pack more processing power into smaller devices.
CPU: The Central Processing Unit (CPU) is the brain of a computer, responsible for executing instructions, performing calculations, and controlling all other components of the system. CPUs are found in a wide range of devices, from desktop computers and mobile devices to gaming consoles and supercomputers. They process data, control the flow of information within the system, and manage the input, output, storage, and retrieval of data, making them essential to the functioning of any computer system.
GPU: GPU stands for Graphics Processing Unit, a specialized type of processor designed to handle graphics-intensive tasks such as rendering images, videos, and 3D graphics. GPUs are used in gaming consoles, PCs, and mobile devices to provide smooth and immersive gaming experiences, in the medical field to create 3D models of organs and tissues, and in the automotive industry to build virtual prototypes of cars. They are also central to artificial intelligence, where their ability to process large amounts of data quickly and efficiently makes them increasingly important across the computer industry.
HBM3E: HBM3E is the latest generation of high-bandwidth memory (HBM), a type of stacked DRAM designed for data-intensive workloads such as artificial intelligence (AI). HBM3E offers faster data transfer rates, higher density, and lower power consumption than previous HBM versions. It is being developed by SK Hynix and other memory makers and is expected to enter mass production in 2024. HBM3E can deliver per-stack bandwidth of around 1.15 TB/s and capacities of up to 64 GB per stack, making it well suited to AI systems that require large amounts of data processing, such as deep learning, machine learning, and computer vision.
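The quoted per-stack bandwidth follows from the interface width and the per-pin data rate. A minimal sketch of that arithmetic, assuming a 1,024-bit interface and roughly 9 Gb/s per pin (typical published HBM3e figures, not values stated in this article):

```python
# HBM3e per-stack bandwidth estimate.
# Assumptions: 1,024-bit interface per stack, ~9 Gb/s per data pin.
interface_width_bits = 1024
pin_rate_gbps = 9.0            # assumed per-pin data rate in Gb/s

bandwidth_gb_per_s = interface_width_bits * pin_rate_gbps / 8   # convert bits to bytes
print(f"~{bandwidth_gb_per_s:.0f} GB/s per stack")               # ~1152 GB/s, i.e. ~1.15 TB/s
```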