ASUS Showcases AI and Immersion-Cooling Innovations at SC23


November 13, 2023 by our News Team

At Supercomputing 2023 (SC23), ASUS will showcase its cutting-edge AI solutions and sustainability breakthroughs, including a powerful NVIDIA-qualified ESC N8A-E12 HGX H100 eight-GPU server and an Arm-based 2U4N server, aiming to revolutionize the world of supercomputing.

  • ASUS plans to showcase its latest AI solutions that are set to revolutionize the world of supercomputing and foster innovation.
  • ASUS plans to offer an update to its H100-based system in 2024, featuring an H200-based drop-in replacement with faster, higher-capacity HBM3E memory.
  • ASUS provides a comprehensive in-house AI software stack, offering a no-code AI platform that facilitates accelerated AI development.


ASUS, the renowned tech company, has announced its participation in Supercomputing 2023 (SC23), taking place in Denver, Colorado, from November 12 to 17, 2023. At this prestigious gathering, ASUS plans to showcase its latest AI solutions that are set to revolutionize the world of supercomputing and foster innovation.

One of the highlights of ASUS’s exhibit at SC23 will be the demonstration of its cutting-edge generative-AI solutions and sustainability breakthroughs developed in collaboration with Intel. Visitors can expect to see the latest hybrid immersion-cooling solutions, among other developments, at booth number 257.

ASUS aims to impress attendees with its powerful NVIDIA-qualified ESC N8A-E12 HGX H100 eight-GPU server. This server features dual-socket AMD EPYC 9004 processors and is designed specifically for enterprise-level generative AI applications. With its market-leading integrated capabilities, it promises to deliver exceptional performance and efficiency.

In line with NVIDIA’s recent announcement of the NVIDIA H200 Tensor Core GPU, ASUS plans to offer an update to its H100-based system in 2024. The update will feature an H200-based drop-in replacement with faster, higher-capacity HBM3E memory, further accelerating generative AI and large language model workloads.

Another impressive showcase at SC23 will be ASUS’s Arm-based 2U4N server, the RS720QN-E11. This server is built around the powerful NVIDIA Grace CPU Superchip and uses NVIDIA NVLink-C2C technology. With its power-efficient, dense infrastructure and direct-to-chip (D2C) cooling solutions, it offers a compelling option for those seeking high-performance computing capabilities.

ASUS is also set to launch a server powered by the latest NVIDIA GH200 Grace Hopper Superchip. This server aims to empower scientists and researchers to tackle the most complex AI and high-performance computing applications, capable of handling terabytes of data.

Beyond the hardware exhibition, ASUS will host a series of session talks featuring speakers from various leading industries and domains. These sessions will provide valuable insights into the latest trends and advancements in the AI and supercomputing fields. Additionally, attendees can expect engaging demos, related content, and more.

As an expert in the AI-supercomputing domain, ASUS offers optimized server designs and rack integration to meet the demanding requirements of AI and HPC workloads. Moreover, ASUS provides a comprehensive in-house AI software stack, offering a no-code AI platform that facilitates accelerated AI development. This platform streamlines LLM pre-training, fine-tuning, and inference processes, minimizing risks and enabling faster progress without the need to start from scratch.

Furthermore, ASUS brings valuable experience as a supercomputer operator in Taiwan. With expertise in both operations and business support systems (OSS and BSS), ASUS collaborates with customers to develop data center infrastructure strategies that optimize operating expenses (OpEx).

ASUS’s participation in Supercomputing 2023 promises to be an exciting and enlightening experience for attendees. With its latest AI solutions and commitment to innovation, ASUS is poised to push the boundaries of supercomputing and drive technological advancements in the industry.


About Our Team

Our team comprises industry insiders with extensive experience in computers, semiconductors, games, and consumer electronics. With decades of collective experience, we’re committed to delivering timely, accurate, and engaging news content to our readers.

Background Information


About AMD: AMD, a major player in the semiconductor industry, is known for its powerful processors and graphics solutions. AMD has consistently pushed the boundaries of performance, efficiency, and user experience. With a customer-centric approach, the company has cultivated a reputation for delivering high-performance solutions that cater to the needs of gamers, professionals, and general users. AMD's Ryzen series of processors has redefined the landscape of desktop and laptop computing, offering impressive multi-core performance and competitive pricing that has challenged the dominance of its competitors. Complementing its processor prowess, AMD's Radeon graphics cards have earned accolades for their efficiency and exceptional graphical capabilities, making them a favored choice among gamers and content creators. The company's commitment to innovation and cutting-edge technology continues to shape the client computing landscape, providing users with powerful tools to fuel their digital endeavors.


About ARM: ARM, whose name originally stood for Acorn RISC Machine and later Advanced RISC Machines, is a British semiconductor and software design company that specializes in creating energy-efficient microprocessors, system-on-chip (SoC) designs, and related technologies. Founded in 1990, ARM has become a prominent player in the global semiconductor industry and is widely recognized for its contributions to mobile computing, embedded systems, and Internet of Things (IoT) devices. ARM's microprocessor designs are based on the Reduced Instruction Set Computing (RISC) architecture, which prioritizes simplicity and efficiency in instruction execution. This approach has enabled ARM to produce highly efficient, power-saving processors used in a vast array of devices, ranging from smartphones and tablets to IoT devices, smart TVs, and more. The company does not manufacture its own chips but licenses its processor designs and intellectual property to a wide range of manufacturers, including Qualcomm, Apple, Samsung, and NVIDIA, who then integrate ARM's technology into their own SoCs. This licensing model has contributed to ARM's widespread adoption and influence across various industries.


About ASUS: ASUS, founded in 1989 by Ted Hsu, M.T. Liao, Wayne Hsieh, and T.H. Tung, has become a multinational tech giant known for its diverse hardware products. Spanning laptops, motherboards, graphics cards, and more, ASUS has gained recognition for its innovation and commitment to high-performance computing solutions. The company has a significant presence in gaming technology, producing popular products that cater to enthusiasts and professionals alike. With a focus on delivering cutting-edge and reliable technology, ASUS maintains its position as a prominent player in the industry.


About Intel: Intel Corporation, a global technology leader, is renowned for its semiconductor innovations that power computing and communication devices worldwide. As a pioneer in microprocessor technology, Intel has left an indelible mark on the evolution of computing with its processors that drive everything from PCs to data centers and beyond. With a history of groundbreaking advancements, Intel's relentless pursuit of innovation continues to shape the digital landscape, offering solutions that empower businesses and individuals to achieve new levels of productivity and connectivity.


About NVIDIA: NVIDIA has firmly established itself as a leader in the realm of client computing, continuously pushing the boundaries of innovation in graphics and AI technologies. With a deep commitment to enhancing user experiences, NVIDIA's client computing business focuses on delivering cutting-edge solutions that power everything from gaming and creative workloads to enterprise applications. Renowned for its GeForce graphics cards, the company has redefined high-performance gaming, setting industry standards for realistic visuals, fluid frame rates, and immersive experiences. Complementing its gaming prowess, NVIDIA's Quadro and NVIDIA RTX graphics cards cater to professionals in design, content creation, and scientific fields, enabling real-time ray tracing and AI-driven workflows that elevate productivity and creativity to unprecedented heights. By seamlessly integrating graphics, AI, and software, NVIDIA continues to shape the landscape of client computing, fostering innovation and immersive interactions in a rapidly evolving digital world.


Technology Explained


CPU: The Central Processing Unit (CPU) is the brain of a computer, responsible for executing instructions and performing calculations. It is the most important component of a computer system, as it controls all other components. CPUs are used in a wide range of applications, from desktop computers to mobile devices, gaming consoles, and even supercomputers. They process data, execute instructions, and control the flow of information within a computer system, including the input and output of data and the storage and retrieval of data from memory. CPUs are essential to the functioning of any computer system, and their applications across the computer industry are vast.


EPYC: EPYC is a processor technology designed by chip manufacturer AMD for the server and data center industry. It was introduced in June 2017 with a design focused on performance and power efficiency. The first-generation EPYC processors were built on a 14nm architecture with up to 32 high-performance cores in a single socket; later generations, such as the EPYC 9004 series used in ASUS's ESC N8A-E12 server, offer far higher core counts. This allows for more efficient processing power, increased memory bandwidth, and greater compute density. EPYC is now widely used in the data center and cloud computing industry and provides benefits such as greater scalability, increased resource efficiency, and advanced virtualization capabilities. Additionally, EPYC technology is used in data-intensive servers such as server farms, gaming back ends, and virtualization platforms. Even in large multi-processor deployments, EPYC is designed to balance power consumption and performance for maximum efficiency.


GPU: GPU stands for Graphics Processing Unit, a specialized type of processor designed to handle graphics-intensive tasks. In the computer industry it is used to render images, video, and 3D graphics, powering smooth, immersive gaming experiences on consoles, PCs, and mobile devices. GPUs are also used in the medical field to create 3D models of organs and tissues, in the automotive industry to create virtual prototypes of cars, and in artificial intelligence to process large amounts of data and build complex models. GPUs are becoming increasingly important in the computer industry because they can process large amounts of data quickly and efficiently.


HBM3E: HBM3E is the latest generation of high-bandwidth memory (HBM), a type of stacked DRAM widely used in accelerators for artificial intelligence (AI) applications. HBM3E offers faster data transfer rates, higher density, and lower power consumption than previous HBM versions. SK Hynix, a South Korean chipmaker, was among the first to announce HBM3E, with mass production expected in 2024. HBM3E delivers per-stack bandwidth of around 1.15 TB/s along with higher per-stack capacities than HBM3. HBM3E is suitable for AI systems that require large amounts of data processing, such as deep learning, machine learning, and computer vision.


LLM: A Large Language Model (LLM) is a highly advanced artificial intelligence system, typically built on transformer architectures such as the one behind GPT-3.5, designed to comprehend and produce human-like text on a massive scale. LLMs possess exceptional capabilities in various natural language understanding and generation tasks, including answering questions, generating creative content, and delivering context-aware responses to textual inputs. These models undergo extensive training on vast datasets to grasp the nuances of language, making them invaluable tools for applications like chatbots, content generation, and language translation.




