Samsung Introduces 12-Hi 36 GB HBM3E Memory Stacks, Delivering Lightning-Fast 10 GT/s Speeds


February 27, 2024 by our News Team

Samsung announces completion of 12-Hi 36 GB HBM3E memory stacks, offering increased capacity and bandwidth compared to predecessors.

  • Over 50% more peak bandwidth and capacity than previous-generation HBM3 (Icebolt) products
  • Utilizes advanced technologies such as EUV lithography and thermal compression non-conductive film
  • Potential for improved performance in AI training and inference services


Samsung has announced the completion of its 12-Hi 36 GB HBM3E memory stacks, just hours after Micron revealed its 8-Hi 24 GB HBM3E memory products. Codenamed Shinebolt, the new memory packages offer over 50% more peak bandwidth and capacity than their HBM3 predecessors, codenamed Icebolt, making them the fastest memory devices in the world.

Samsung’s Shinebolt 12-Hi 36 GB HBM3E stacks consist of twelve 24 Gb memory devices placed on top of a logic die with a 1024-bit interface. The new stacks support a data transfer rate of 10 GT/s, which yields a peak bandwidth of 1.28 TB/s per stack, the highest per-device memory bandwidth in the industry.
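
As a quick sanity check, the headline numbers follow directly from the stack geometry and interface width. The sketch below is plain arithmetic from the figures above, not Samsung code:

```python
# Sanity check of the headline figures; pure arithmetic from the
# paragraph above, not Samsung code.

dies_per_stack = 12
die_capacity_gbit = 24
transfer_rate_gts = 10          # 10 GT/s per pin
bus_width_bits = 1024

capacity_gb = dies_per_stack * die_capacity_gbit / 8               # 36 GB
peak_bw_tbs = transfer_rate_gts * 1e9 * bus_width_bits / 8 / 1e12  # 1.28 TB/s

print(f"Capacity per stack:       {capacity_gb:.0f} GB")
print(f"Peak bandwidth per stack: {peak_bw_tbs:.2f} TB/s")
```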

It’s worth noting, however, that developers of HBM-supporting processors tend to be conservative: they may run Samsung’s HBM3E at lower data transfer rates to curb power consumption and to ensure stability in artificial intelligence (AI) and high-performance computing (HPC) applications.

To build the Shinebolt 12-Hi 36 GB HBM3E stacks, Samsung drew on several advanced technologies. The DRAM devices themselves are manufactured on Samsung’s 4th generation 10nm-class (14nm) fabrication process, which employs extreme ultraviolet (EUV) lithography.

Additionally, Samsung employed its advanced thermal compression non-conductive film (TC NCF) so that the 12-Hi HBM3E stacks have the same z-height as its 8-Hi HBM3 products. This film allowed Samsung to achieve the industry’s smallest gap between memory devices, at seven micrometers (7 µm). Shrinking the gaps between DRAM dies increases vertical density and reduces die warpage. Samsung also placed bumps of various sizes between the DRAM ICs, with smaller bumps for signaling and larger ones for heat dissipation, improving thermal management.
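
To see why a 7 µm bond line matters, consider an illustrative height budget. In the sketch below, only the 7 µm gap comes from Samsung; the die thickness and the older bond-line gap are assumptions chosen purely for demonstration:

```python
# Illustrative z-height budget for an HBM stack. Only the 7 µm gap comes
# from the article; every other figure is an assumption for demonstration.

def stack_height_um(dies: int, die_um: float, gap_um: float) -> float:
    """DRAM region height: n dies plus (n - 1) bond-line gaps."""
    return dies * die_um + (dies - 1) * gap_um

GAP_12HI = 7.0    # µm, from the article
GAP_8HI = 25.0    # µm, assumed older bond-line thickness
DIE_8HI = 50.0    # µm, assumed die thickness in an 8-Hi stack

budget = stack_height_um(8, DIE_8HI, GAP_8HI)   # height the 8-Hi stack uses

# With 7 µm gaps, how thin must each die be for 12-Hi to fit the same budget?
die_12hi = (budget - 11 * GAP_12HI) / 12

print(f"8-Hi DRAM region:              {budget:.0f} µm")
print(f"Required 12-Hi die thickness:  {die_12hi:.1f} µm")
```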

Samsung estimates that its 12-Hi HBM3E 36 GB stacks can increase average AI training speed by 34% and support over 11.5 times more simultaneous users of inference services, although it has not disclosed the size of the LLM behind these estimates.
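
For a sense of scale, model weights alone dominate accelerator memory at LLM sizes. The sizing sketch below is hypothetical throughout: the model size, data type, and per-package stack count are assumptions, since the article does not say which LLM Samsung benchmarked:

```python
# Rough accelerator-memory sizing. The model size, data type, and stack
# count are hypothetical; the article does not disclose the LLM Samsung used.

params_billion = 70          # assumed model size (billions of parameters)
bytes_per_param = 2          # FP16/BF16 weights
stacks_per_package = 4       # assumed HBM stacks on one accelerator

# 1e9 params at 2 bytes each is 2 GB per billion parameters.
weights_gb = params_billion * bytes_per_param   # 140 GB of weights

for stack_gb in (24, 36):    # 8-Hi HBM3E vs Samsung's 12-Hi HBM3E
    total_gb = stacks_per_package * stack_gb
    verdict = "fits" if total_gb >= weights_gb else "does not fit"
    print(f"{stack_gb} GB stacks: {total_gb} GB per package "
          f"-> {weights_gb} GB of weights {verdict}")
```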

Samsung has already begun providing samples of the HBM3E 12H to customers, and mass production is set to commence in the first half of this year.

About Our Team

Our team comprises industry insiders with extensive experience in computers, semiconductors, games, and consumer electronics. With decades of collective experience, we’re committed to delivering timely, accurate, and engaging news content to our readers.

Background Information


About Samsung: Samsung, a South Korean multinational conglomerate, has established itself as a global leader in various industries, including electronics, technology, and more. Founded in 1938, Samsung's influence spans from smartphones and consumer electronics to semiconductors and home appliances. With a commitment to innovation, Samsung has contributed products like the Galaxy series of smartphones, QLED TVs, and SSDs that have revolutionized the way we live and work.

Samsung website  Samsung LinkedIn

Technology Explained


EUV: Extreme Ultraviolet Lithography (EUV or EUVL) is an advanced semiconductor manufacturing technique that employs extremely short wavelengths of light in the extreme ultraviolet spectrum to create intricate patterns on silicon wafers. Utilizing a wavelength around 13.5 nanometers, significantly shorter than traditional lithography methods, EUVL enables the production of smaller and more densely packed integrated circuits, enhancing the performance and efficiency of modern microprocessors and memory chips.
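
The resolution gain from EUV can be approximated with the Rayleigh criterion, CD = k1 × λ / NA. The comparison below uses typical published k1 and NA values rather than any tool-specific specification:

```python
# Rayleigh criterion: minimum printable feature ~ k1 * wavelength / NA.
# The k1 and NA values are typical published figures, not tool-specific specs.

def min_feature_nm(k1: float, wavelength_nm: float, na: float) -> float:
    return k1 * wavelength_nm / na

duv = min_feature_nm(k1=0.30, wavelength_nm=193.0, na=1.35)  # immersion ArF
euv = min_feature_nm(k1=0.30, wavelength_nm=13.5, na=0.33)   # current EUV tools

print(f"DUV (193 nm immersion): ~{duv:.0f} nm half-pitch")
print(f"EUV (13.5 nm):          ~{euv:.0f} nm half-pitch")
```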


HBM3E: HBM3E is the latest generation of high-bandwidth memory (HBM), a type of stacked DRAM designed for bandwidth-hungry workloads such as artificial intelligence (AI). HBM3E offers faster data transfer rates, higher density, and lower power consumption than previous HBM versions. It is produced by the major memory makers, Samsung, SK Hynix, and Micron, with mass production ramping in 2024. Current HBM3E stacks exceed 1.15 TB/s of bandwidth and reach 36 GB of capacity, with larger stacks on vendor roadmaps. HBM3E is suitable for AI systems that require large amounts of data processing, such as deep learning, machine learning, and computer vision.
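
For context, per-stack bandwidth across HBM generations is simply the per-pin rate multiplied by the 1024-bit interface, which has stayed constant. The per-pin rates below are approximate headline figures for each generation:

```python
# Approximate headline per-pin rates by HBM generation (GT/s); the 1024-bit
# interface width has been constant through HBM3E.

BUS_WIDTH_BITS = 1024
per_pin_gts = {
    "HBM2": 2.0,
    "HBM2E": 3.6,
    "HBM3": 6.4,
    "HBM3E (Samsung Shinebolt)": 10.0,
}

for gen, rate in per_pin_gts.items():
    bandwidth_gbs = rate * BUS_WIDTH_BITS / 8   # GB/s per stack
    print(f"{gen:>27}: {bandwidth_gbs:7.1f} GB/s per stack")
```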


HPC: HPC, or High Performance Computing, is a type of technology that allows computers to perform complex calculations and process large amounts of data at incredibly high speeds. This is achieved through the use of specialized hardware and software, such as supercomputers and parallel processing techniques. In the computer industry, HPC has a wide range of applications, from weather forecasting and scientific research to financial modeling and artificial intelligence. It enables researchers and businesses to tackle complex problems and analyze vast amounts of data in a fraction of the time it would take with traditional computing methods. HPC has revolutionized the way we approach data analysis and has opened up new possibilities for innovation and discovery in various fields.


LLM: A Large Language Model (LLM) is a highly advanced artificial intelligence system, typically built on transformer architectures (as in models such as GPT-3.5), designed to comprehend and produce human-like text on a massive scale. LLMs possess exceptional capabilities in various natural language understanding and generation tasks, including answering questions, generating creative content, and delivering context-aware responses to textual inputs. These models undergo extensive training on vast datasets to grasp the nuances of language, making them invaluable tools for applications like chatbots, content generation, and language translation.




