ASUS showcases its AI infrastructure solutions at GTC 2025, including the AI POD powered by NVIDIA GB300 NVL72 and a range of servers and supercomputers designed for heavy lifting and AI inference.
- ASUS is a Diamond sponsor at GTC 2025, showcasing its AI POD powered by the NVIDIA GB300 NVL72 platform.
- ASUS has already secured a significant number of AI POD orders, marking a major milestone for the company.
- ASUS offers a comprehensive range of AI servers and infrastructure solutions that work seamlessly with NVIDIA's AI Enterprise and Omniverse platforms.
ASUS Takes Center Stage at GTC 2025
Today, ASUS made a splash at GTC 2025 as a Diamond sponsor, unveiling its AI POD powered by the NVIDIA GB300 NVL72 platform. But that's not all: ASUS is thrilled to share that it has already secured a significant number of orders, marking a major milestone for the company. With AI innovation at its core, ASUS is also showcasing its latest lineup of AI servers from the Blackwell and HGX families.
Imagine this: the ASUS XA NB3I-E12, fueled by the NVIDIA B300 NVL16, or the powerhouse ASUS ESC NB8-E11 featuring the NVIDIA DGX B200 with 8 GPUs. There’s also the ASUS ESC N8-E11V, which boasts the NVIDIA HGX H200, and the ESC8000A-E13P/ESC8000-E12P, ready to support the NVIDIA RTX PRO 6000 Blackwell Server Edition. With such a robust focus on advancing AI adoption across various industries, ASUS is well-positioned to deliver comprehensive infrastructure solutions that work seamlessly with NVIDIA’s AI Enterprise and Omniverse platforms. This partnership is all about empowering businesses to speed up their time to market.
Unleashing the Power of AI
So, what makes the ASUS AI POD so special? By integrating the formidable capabilities of the NVIDIA GB300 NVL72 server platform, this setup offers extraordinary processing power, enabling enterprises to tackle even the most daunting AI challenges with confidence. Picture this: 72 NVIDIA Blackwell Ultra GPUs and 36 Grace CPUs in a rack-scale design that delivers increased AI FLOPS and up to 40 TB of high-speed memory per rack.
But wait, there's more! The system also features NVIDIA Quantum-X800 InfiniBand and Spectrum-X Ethernet networking for high-throughput, low-latency scale-out connectivity. It's not just about raw power, either: the 100% liquid-cooled design keeps the rack efficient while supporting trillion-parameter LLM inference and training. This launch is a game-changer in AI infrastructure, offering customers a reliable and scalable solution to meet their evolving demands.
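To put those rack-level numbers in perspective, here is a rough back-of-the-envelope sketch (simple arithmetic, not an ASUS or NVIDIA sizing tool) showing how much memory the weights of a trillion-parameter model occupy at a few common precisions, and how much of a 40 TB rack would remain for KV caches and activations. The precision values are illustrative assumptions.

```python
# Illustrative arithmetic only: weight footprint of a trillion-parameter LLM
# versus the ~40 TB of high-speed memory quoted per GB300 NVL72 rack.
# Precision choices and the KV-cache framing are assumptions for illustration,
# not ASUS or NVIDIA figures.

PARAMS = 1_000_000_000_000            # one trillion parameters
RACK_MEMORY_TB = 40                   # high-speed memory per AI POD rack
GPUS_PER_RACK = 72                    # Blackwell Ultra GPUs per NVL72 rack

BYTES_PER_PARAM = {"FP16": 2.0, "FP8": 1.0, "FP4": 0.5}

for precision, nbytes in BYTES_PER_PARAM.items():
    weights_tb = PARAMS * nbytes / 1e12            # total weight footprint
    per_gpu_gb = PARAMS * nbytes / GPUS_PER_RACK / 1e9
    headroom_tb = RACK_MEMORY_TB - weights_tb      # left for KV cache, activations
    print(f"{precision}: weights = {weights_tb:.1f} TB "
          f"(about {per_gpu_gb:.0f} GB per GPU), "
          f"leaving roughly {headroom_tb:.1f} TB of headroom in a {RACK_MEMORY_TB} TB rack")
```

Even at FP16, the weights of a trillion-parameter model take up only a couple of terabytes, so the bulk of the rack's 40 TB remains available for long-context KV caches, activations, and serving many requests concurrently.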
Expertise That Counts
ASUS has proven its expertise in building NVIDIA GB200 NVL72 infrastructure from the ground up. To maximize computing efficiency, it is also showcasing the ASUS RS501A-E12-RS12U, a powerful software-defined storage (SDS) server that significantly reduces latency in data training and inference, complementing the NVIDIA GB200 NVL72 beautifully. ASUS provides a full spectrum of services, from hardware to cloud-based applications, covering everything from architecture design and advanced cooling solutions to large-scale validation and deployment.
Kaustubh Sanghani, NVIDIA’s Vice President of GPU Products, said it best: “NVIDIA is working with ASUS to drive the next wave of innovation in data centers.” This collaboration is set to accelerate training and inference, opening up new possibilities in AI reasoning and agentic AI.
GPU Servers Designed for Heavy Lifting
At GTC 2025, ASUS is also rolling out a series of NVIDIA-certified servers tailored for heavy generative AI workloads. These versatile models are designed to handle applications and workflows built with NVIDIA’s AI Enterprise and Omniverse platforms, making data processing a breeze.
Take the ASUS 10U ESC NB8-E11, for example. It’s outfitted with the NVIDIA Blackwell HGX B200 with 8 GPUs, delivering unmatched AI performance. Then there’s the ASUS XA NB3I-E12, featuring the HGX B300 NVL16, which ramps up AI FLOPS and includes a staggering 2.3 TB of HBM3E memory. With such powerful specs, these servers are built to meet the increasingly complex needs of modern data centers.
And let's not forget the 7U ASUS ESC N8-E11V, which is powered by eight NVIDIA H200 GPUs. This beauty supports both air and liquid cooling options, ensuring thermal efficiency and scalability without compromising performance.
Scalable Solutions for AI Inference
ASUS is also stepping up its game with server and edge AI options specifically designed for AI inference. The ASUS ESC8000 series, embedded with the latest NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, is a sight to behold. The ESC8000-E12P is a high-density 4U server that can accommodate eight of these dual-slot GPUs, ensuring seamless integration and scalability for modern data centers.
Additionally, the ASUS ESC8000A-E13P offers the same support for eight dual-slot NVIDIA RTX PRO 6000 Blackwell Server Edition GPUs, making it a powerhouse for large-scale deployments.
Introducing the ASUS Ascent GX10 Supercomputer
In another exciting reveal, ASUS has launched its AI supercomputer, the ASUS Ascent GX10. This compact marvel is powered by the NVIDIA GB10 Grace Blackwell Superchip, delivering a jaw-dropping 1,000 AI TOPS of performance. It's perfect for demanding workloads and comes equipped with a Blackwell GPU, a 20-core Arm CPU, and 128 GB of memory. Imagine having the capabilities of a petaflop-scale AI supercomputer right on your desk!
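As a quick illustration (simple arithmetic under stated assumptions, not an ASUS benchmark), the sketch below estimates the largest model whose weights alone would fit in the GX10's 128 GB of memory at a few common quantization levels. It ignores KV cache, activations, and system overhead, so real-world limits are lower.

```python
# Illustrative arithmetic only: upper bound on model size (weights alone) that
# fits in the Ascent GX10's 128 GB of memory at common quantization levels.
# KV cache, activations, and OS overhead are ignored, so these are optimistic
# bounds rather than ASUS or NVIDIA guidance.

MEMORY_GB = 128
BYTES_PER_PARAM = {"FP16": 2.0, "FP8": 1.0, "INT4": 0.5}

for precision, nbytes in BYTES_PER_PARAM.items():
    max_params_billions = MEMORY_GB / nbytes   # GB / (bytes per param) = billions of params
    print(f"{precision}: weights for up to ~{max_params_billions:.0f}B "
          f"parameters fit in {MEMORY_GB} GB")
```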
ASUS IoT is also showcasing edge AI computers at GTC, highlighting advanced capabilities for AI inference at the edge. The ASUS IoT PE2100N edge AI computer, powered by the NVIDIA Jetson AGX Orin, delivers up to 275 TOPS, making it ideal for generative AI applications and natural language interactions.
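For a sense of what deploying a model on this kind of edge box can look like in practice, below is a minimal sketch using ONNX Runtime with its CUDA execution provider, which NVIDIA supports on Jetson-class devices. The model file name, input name, and token IDs are placeholder assumptions for illustration, not part of the ASUS or NVIDIA announcement; production deployments on Jetson often go further with TensorRT optimization.

```python
# Minimal sketch: running an exported ONNX model on a Jetson-class edge device.
# "chat_model.onnx" and the "input_ids" input name are hypothetical placeholders;
# the surrounding tokenizer and decoding loop are omitted for brevity.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "chat_model.onnx",                      # hypothetical pre-exported model
    providers=[
        "CUDAExecutionProvider",            # run on the Orin GPU when available
        "CPUExecutionProvider",             # fall back to the Arm CPU cores
    ],
)

# Placeholder batch of token IDs; a real application would use a tokenizer.
input_ids = np.array([[101, 2023, 2003, 1037, 3231, 102]], dtype=np.int64)

outputs = session.run(None, {"input_ids": input_ids})
print("output shape:", outputs[0].shape)
```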
Commitment to Excellence
ASUS is committed to delivering excellence through its innovative AI infrastructure solutions, addressing the rigorous demands of high-performance computing (HPC) and AI workloads. Their meticulously designed servers and storage systems enhance performance, reliability, and scalability to meet diverse enterprise needs.
With over 30 years of experience in the server industry, ASUS is ready to help clients navigate the competitive landscape of AI. By leveraging its extensive technological capabilities and industry-leading expertise, ASUS is dedicated to supporting organizations in tackling the complexities of AI infrastructure while driving innovation and sustainability.
In a world where efficiency and cost-effectiveness are paramount, choosing an ASUS server is more than just acquiring hardware; it’s about embracing a holistic approach to AI and HPC that minimizes operational costs and environmental impact. Are you ready to join the AI revolution?

Background Information
About ARM:
ARM, originally known as Acorn RISC Machine, is a British semiconductor and software design company that specializes in creating energy-efficient microprocessors, system-on-chip (SoC) designs, and related technologies. Founded in 1990, ARM has become an important player in the global semiconductor industry and is widely recognized for its contributions to mobile computing, embedded systems, and Internet of Things (IoT) devices. ARM's microprocessor designs are based on the Reduced Instruction Set Computing (RISC) architecture, which prioritizes simplicity and efficiency in instruction execution. This approach has enabled ARM to produce highly efficient and power-saving processors that are used in a vast array of devices, ranging from smartphones and tablets to IoT devices, smart TVs, and more. The company does not manufacture its own chips but licenses its processor designs and intellectual property to a wide range of manufacturers, including Qualcomm, Apple, Samsung, and NVIDIA, who then integrate ARM's technology into their own SoCs. This licensing model has contributed to ARM's widespread adoption and influence across various industries.
About ASUS:
ASUS, founded in 1989 by Ted Hsu, M.T. Liao, Wayne Hsieh, and T.H. Tung, has become a multinational tech giant known for its diverse hardware products. Spanning laptops, motherboards, graphics cards, and more, ASUS has gained recognition for its innovation and commitment to high-performance computing solutions. The company has a significant presence in gaming technology, producing popular products that cater to enthusiasts and professionals alike. With a focus on delivering innovative and reliable technology, ASUS maintains its position as an important player in the industry.
About NVIDIA:
NVIDIA has firmly established itself as a leader in the realm of client computing, continuously pushing the boundaries of innovation in graphics and AI technologies. With a deep commitment to enhancing user experiences, NVIDIA's client computing business focuses on delivering solutions that power everything from gaming and creative workloads to enterprise applications. Best known for its GeForce graphics cards, the company has redefined high-performance gaming, setting industry standards for realistic visuals, fluid frame rates, and immersive experiences. Complementing its gaming expertise, NVIDIA's Quadro and NVIDIA RTX graphics cards cater to professionals in design, content creation, and scientific fields, enabling real-time ray tracing and AI-driven workflows that elevate productivity and creativity to unprecedented heights. By seamlessly integrating graphics, AI, and software, NVIDIA continues to shape the landscape of client computing, fostering innovation and immersive interactions in a rapidly evolving digital world.
Technology Explained
Blackwell: Blackwell is an AI computing architecture designed to supercharge tasks like training large language models. These powerful GPUs boast features like a next-gen Transformer Engine and support for lower-precision calculations, enabling them to handle complex AI workloads significantly faster and more efficiently than before. While aimed at data centers, the innovations within Blackwell are expected to influence consumer graphics cards as well.
CPU: The Central Processing Unit (CPU) is the brain of a computer, responsible for executing instructions and performing calculations. It is the most important component of a computer system, as it is responsible for controlling all other components. CPUs are used in a wide range of applications, from desktop computers to mobile devices, gaming consoles, and even supercomputers. CPUs are used to process data, execute instructions, and control the flow of information within a computer system. They are also used to control the input and output of data, as well as to store and retrieve data from memory. CPUs are essential for the functioning of any computer system, and their applications in the computer industry are vast.
GPU: GPU stands for Graphics Processing Unit and is a specialized type of processor designed to handle graphics-intensive tasks. It is used in the computer industry to render images, videos, and 3D graphics. GPUs are used in gaming consoles, PCs, and mobile devices to provide a smooth and immersive gaming experience. They are also used in the medical field to create 3D models of organs and tissues, and in the automotive industry to create virtual prototypes of cars. GPUs are also used in the field of artificial intelligence to process large amounts of data and create complex models. GPUs are becoming increasingly important in the computer industry as they are able to process large amounts of data quickly and efficiently.
Grace Blackwell: Grace Blackwell is NVIDIA's data-center superchip platform that pairs the company's Arm-based Grace CPU with Blackwell-generation GPUs on a single module, connected by a high-bandwidth, cache-coherent NVLink chip-to-chip interconnect. This pairing gives the GPUs fast access to a large pool of CPU-attached memory, which is particularly valuable when training and serving very large AI models. Grace Blackwell superchips such as the GB200 and GB300 serve as the building blocks of NVIDIA's NVL72 rack-scale systems, where dozens of superchips are linked with NVLink so the rack behaves like one large accelerator for trillion-parameter-class workloads.
HBM3E: HBM3E is the latest generation of high-bandwidth memory (HBM), a type of stacked DRAM used in GPUs and AI accelerators. HBM3E offers faster data transfer rates, higher density, and lower power consumption than previous HBM versions. It is produced by memory makers including SK Hynix, Micron, and Samsung, delivering more than 1 TB/s of bandwidth per stack and capacities of up to 36 GB per stack. HBM3E is suited to AI systems that require large amounts of data processing, such as deep learning, machine learning, and computer vision.
HPC: HPC, or High Performance Computing, is a type of technology that allows computers to perform complex calculations and process large amounts of data at incredibly high speeds. This is achieved through the use of specialized hardware and software, such as supercomputers and parallel processing techniques. In the computer industry, HPC has a wide range of applications, from weather forecasting and scientific research to financial modeling and artificial intelligence. It enables researchers and businesses to tackle complex problems and analyze vast amounts of data in a fraction of the time it would take with traditional computing methods. HPC has revolutionized the way we approach data analysis and has opened up new possibilities for innovation and discovery in various fields.
Latency: Technology latency is the time it takes for a computer system to respond to a request. It is an important factor in the performance of computer systems, as it affects the speed and efficiency of data processing. In the computer industry, latency is a major factor in the performance of computer networks, storage systems, and other computer systems. Low latency is essential for applications that require fast response times, such as online gaming, streaming media, and real-time data processing. High latency can cause delays in data processing, resulting in slow response times and poor performance. To reduce latency, computer systems use various techniques such as caching, load balancing, and parallel processing. By reducing latency, computer systems can provide faster response times and improved performance.
Liquid Cooling: Liquid cooling is a technology used to cool down computer components, such as processors, graphics cards, and other components that generate a lot of heat. It works by circulating a liquid coolant, such as water or a special coolant, through a series of pipes and radiators. The liquid absorbs the heat from the components and then dissipates it into the air. This technology is becoming increasingly popular in the computer industry due to its ability to provide more efficient cooling than traditional air cooling methods. Liquid cooling can also be used to overclock components, allowing them to run at higher speeds than their rated speeds. This technology is becoming increasingly popular in the gaming industry, as it allows gamers to get the most out of their hardware.
LLM: A Large Language Model (LLM) is a highly advanced artificial intelligence system, typically built on the transformer architecture that underlies models such as GPT, designed to comprehend and produce human-like text on a massive scale. LLMs possess exceptional capabilities in various natural language understanding and generation tasks, including answering questions, generating creative content, and delivering context-aware responses to textual inputs. These models undergo extensive training on vast datasets to grasp the nuances of language, making them invaluable tools for applications like chatbots, content generation, and language translation.