Micron’s HBM3E Supply for 2024 Completely Sold Out, Majority of 2025 Already Allocated


March 22, 2024 by our News Team

Micron has sold out its supply of HBM3E memory for 2024 and most of 2025, solidifying its position as a key supplier for NVIDIA's H200/GH200 accelerators.

  • Micron has completely sold out its 2024 supply of HBM3E memory, indicating high demand for the product.
  • The majority of Micron’s 2025 production has already been allocated, pointing to sustained demand for its HBM3E memory.
  • The specifications of Micron’s HBM3E — 24 GB 8-Hi stacks running at 9.2 GT/s — have helped make the company a key supplier for NVIDIA’s H200/GH200 accelerators.


Micron, one of the leading memory manufacturers, has announced that it has completely sold out its supply of HBM3E memory for 2024, with most of its 2025 production already allocated. Micron’s HBM3E memory, also known as HBM3 Gen2, was one of the first to be qualified for NVIDIA’s updated H200/GH200 accelerators, solidifying its position as a key supplier to the GPU manufacturer.

Sanjay Mehrotra, Micron’s CEO, stated, “Our HBM is sold out for calendar 2024, and the overwhelming majority of our 2025 supply has already been allocated. We continue to expect HBM bit share equivalent to our overall DRAM bit share sometime in calendar 2025.”

Micron’s initial HBM3E product boasts impressive specifications: an 8-Hi 24 GB stack with a 1024-bit interface, a 9.2 GT/s data transfer rate, and a peak bandwidth of about 1.2 TB/s per stack. NVIDIA’s H200 accelerator will use six of these stacks, providing a substantial 141 GB of accessible high-bandwidth memory.
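The per-stack bandwidth follows directly from the interface width and data rate quoted above; a quick back-of-the-envelope check (all figures taken from the article — note that six 24 GB stacks give 144 GB of raw capacity, of which the H200 exposes 141 GB):

```python
# Sanity-check of Micron's quoted HBM3E per-stack figures.
interface_bits = 1024   # bits, per-stack interface width
data_rate_gtps = 9.2    # giga-transfers per second

# Bandwidth per stack: width (bits) x rate (GT/s) / 8 bits-per-byte -> GB/s
bandwidth_gbs = interface_bits * data_rate_gtps / 8
print(f"Per-stack bandwidth: {bandwidth_gbs:.1f} GB/s")  # 1177.6 GB/s, i.e. ~1.2 TB/s

# NVIDIA H200: six 8-Hi 24 GB stacks
stacks = 6
raw_capacity_gb = stacks * 24
print(f"Raw capacity: {raw_capacity_gb} GB (141 GB accessible on the H200)")
```

The shipping accelerator may run the stacks below their rated speed, so this per-stack peak should not be read as the H200’s aggregate memory bandwidth.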

Mehrotra further added, “We are on track to generate several hundred million dollars of revenue from HBM in fiscal 2024 and expect HBM revenues to be accretive to our DRAM and overall gross margins starting in the fiscal third quarter.”

In addition to its current offering, Micron has begun sampling its 12-Hi 36 GB stacks, which offer a 50% increase in capacity. These stacks will ramp in 2025 and cater to the next generation of AI products. However, it seems that NVIDIA’s B100 and B200 will not initially adopt the 36 GB HBM3E stacks.

The demand for artificial intelligence servers reached record levels last year and is expected to remain high in 2025. While NVIDIA faces increased competition from other players in the AI processor market, its H200 processor is still expected to be the preferred choice for AI training, particularly for major companies like Meta and Microsoft. This makes Micron’s position as the primary supplier of HBM3E for NVIDIA’s H200 a significant achievement, allowing the company to capture a substantial portion of the HBM market, which is currently dominated by SK hynix and Samsung.

However, the larger physical size of HBM stacks compared to regular DDR4 or DDR5 ICs will impact the supply of commodity DRAMs from Micron. Mehrotra explained, “The ramp of HBM production will constrain supply growth in non-HBM products. Industrywide, HBM3E consumes approximately three times the wafer supply as DDR5 to produce a given number of bits in the same technology node.”
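Mehrotra’s three-to-one figure implies a simple trade-off: every HBM3E bit shipped consumes the wafer capacity of roughly three DDR5 bits on the same node. A minimal illustration (the output volume below is a made-up example, not a Micron number):

```python
# Illustration of the ~3x HBM3E-vs-DDR5 wafer trade-off quoted by Micron's CEO.
HBM_TO_DDR5_WAFER_RATIO = 3.0  # HBM3E uses ~3x the wafer supply per bit

def ddr5_bits_forgone(hbm_bits: float) -> float:
    """DDR5 bits the same wafer capacity could have produced instead."""
    return hbm_bits * HBM_TO_DDR5_WAFER_RATIO

example_hbm_bits = 10e9  # hypothetical HBM3E output, for illustration only
print(f"{example_hbm_bits:.0e} HBM3E bits displace ~{ddr5_bits_forgone(example_hbm_bits):.0e} DDR5 bits")
```

This is why a fast HBM ramp constrains supply growth in commodity DRAM even when total wafer starts stay flat.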

About Our Team

Our team comprises industry insiders with extensive experience in computers, semiconductors, games, and consumer electronics. With decades of collective experience, we’re committed to delivering timely, accurate, and engaging news content to our readers.

Background Information


About Microsoft: Microsoft, founded by Bill Gates and Paul Allen in 1975 in Redmond, Washington, USA, is a technology giant known for its wide range of software products, including the Windows operating system, Office productivity suite, and cloud services like Azure. Microsoft also manufactures hardware, such as the Surface line of laptops and tablets, Xbox gaming consoles, and accessories.

Microsoft website  Microsoft LinkedIn

About nVidia: NVIDIA has firmly established itself as a leader in the realm of client computing, continuously pushing the boundaries of innovation in graphics and AI technologies. With a deep commitment to enhancing user experiences, NVIDIA's client computing business focuses on delivering solutions that power everything from gaming and creative workloads to enterprise applications. Known for its GeForce graphics cards, the company has redefined high-performance gaming, setting industry standards for realistic visuals, fluid frame rates, and immersive experiences. Complementing its gaming expertise, NVIDIA's Quadro and NVIDIA RTX graphics cards cater to professionals in design, content creation, and scientific fields, enabling real-time ray tracing and AI-driven workflows that elevate productivity and creativity. By integrating graphics, AI, and software, NVIDIA continues to shape the landscape of client computing.

nVidia website  nVidia LinkedIn

About Samsung: Samsung, a South Korean multinational conglomerate, has established itself as a global leader in various industries, including electronics, technology, and more. Founded in 1938, Samsung's influence spans from smartphones and consumer electronics to semiconductors and home appliances. With a commitment to innovation, Samsung has contributed products like the Galaxy series of smartphones, QLED TVs, and SSDs that have revolutionized the way we live and work.

Samsung website  Samsung LinkedIn

About SK hynix: SK hynix is an important South Korean semiconductor company known for its innovative contributions to the global technology landscape. Specializing in the production of memory solutions, SK hynix has played a vital role in shaping the semiconductor industry. With a commitment to research and development, the company has continuously pushed the boundaries of memory technology, resulting in products that power a wide range of devices and applications.

SK hynix website  SK hynix LinkedIn

Technology Explained


DDR4: DDR4 is a generation of Double Data Rate (DDR) dynamic random access memory (RAM) technology. It runs at higher clock frequencies and lower voltages than its predecessors, delivering faster data access with reduced power consumption. Its speed and efficiency benefit applications such as gaming, rendering, and machine learning. Designed for high-performance computing, DDR4 served for years as the mainstream system memory in most computers and is now being succeeded by DDR5.


DDR5: DDR5 (Double Data Rate 5) is the successor to DDR4 memory technology. It improves on earlier DDR generations with faster data rates, greater bandwidth, and higher per-module capacities. These gains benefit high-performance computing, gaming, and heavily multitasked workloads, where the added bandwidth helps keep many concurrent tasks fed with data. As adoption grows, DDR5 is becoming the standard system memory for new platforms.


GPU: GPU stands for Graphics Processing Unit and is a specialized type of processor designed to handle graphics-intensive tasks. It is used in the computer industry to render images, videos, and 3D graphics. GPUs are used in gaming consoles, PCs, and mobile devices to provide a smooth and immersive gaming experience. They are also used in the medical field to create 3D models of organs and tissues, and in the automotive industry to create virtual prototypes of cars. GPUs are also used in the field of artificial intelligence to process large amounts of data and create complex models. GPUs are becoming increasingly important in the computer industry as they are able to process large amounts of data quickly and efficiently.


HBM3E: HBM3E is the latest generation of high-bandwidth memory (HBM), a type of stacked DRAM designed for bandwidth-hungry workloads such as artificial intelligence (AI). HBM3E offers faster data transfer rates, higher density, and better power efficiency than previous HBM versions. It is produced by SK hynix, Micron, and Samsung, with mass production ramping in 2024. HBM3E delivers per-stack bandwidth in excess of 1.2 TB/s, with stack capacities of 24 GB (8-Hi) and 36 GB (12-Hi). It is suited to AI systems that process large amounts of data, such as deep learning training and inference.




