AMD’s Pollara 400 AI NIC Begins Shipping to Customers


April 10, 2025 by our News Team

AMD's new Pensando Pollara 400 AI NIC offers a flexible and high-performing solution for overcoming network bottlenecks and accelerating AI workloads in an open ecosystem.

  • Scalable solutions that fit seamlessly into an open ecosystem
  • High-performing network with intelligent load balancing, congestion management, and rapid failover and loss recovery
  • Vendor-agnostic compatibility for building an AI infrastructure that can meet current and future demands


Building the Future of AI Infrastructure

When it comes to training and deploying generative AI and large language models, having the right infrastructure is everything. It’s not just about raw power; it’s about creating a parallel computing environment that can handle the ever-growing demands of AI and machine learning workloads. Think of it as setting the stage for a grand performance: everything needs to work in harmony. One crucial element? The ability to scale out the inter-node GPU-to-GPU communication network within data centers.

At AMD, we’re all about giving our customers choices. We’re committed to providing scalable solutions that fit seamlessly into an open ecosystem. This approach not only helps reduce the total cost of ownership but also ensures that you don’t have to compromise on performance. Last October, we teased the launch of our new AMD Pensando Pollara 400 AI NIC, and today, we’re thrilled to announce that this technology is officially available for purchase. So, what makes the Pensando Pollara 400 AI NIC a game-changer for accelerating AI workloads?

Overcoming Network Bottlenecks

As cloud service providers, hyperscalers, and enterprises push the limits of their AI clusters, they often hit a wall: network bottlenecks. Many organizations report that their GPU utilization suffers due to inadequate networking capabilities. After all, data transfer speeds are only as good as the network infrastructure that supports them. With AI workloads skyrocketing, it’s critical to make the most of both networking and compute resources.

What does a high-performing network look like? It excels in three key areas: intelligent load balancing, congestion management, and rapid failover and loss recovery. A network that delivers on these fronts ensures increased uptime, faster job completion times, and overall reliability—all essential for scaling AI operations effectively.
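As a toy illustration of the first of those three areas, here is a minimal Python sketch of congestion-aware load balancing: steering traffic onto the least-loaded of several equal-cost paths. The path names and queue-depth numbers are invented for the example; a real NIC makes this decision in hardware at line rate.

```python
# Toy sketch of congestion-aware load balancing (illustrative only,
# not AMD's implementation). Each candidate path carries a hypothetical
# queue-depth estimate; traffic is steered onto the least-loaded path.

def pick_path(path_loads: dict[str, int]) -> str:
    """Return the name of the currently least-congested path."""
    return min(path_loads, key=path_loads.get)

# Invented queue depths for three equal-cost paths through the fabric:
paths = {"path-A": 12, "path-B": 3, "path-C": 7}
print(pick_path(paths))  # prints "path-B", the least-loaded path
```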

Programmability for a Flexible Future

One of the standout features of the Pensando Pollara 400 AI NIC is its hardware programmability, powered by our P4 architecture. This means customers can customize their networking capabilities, adding new features as they emerge without waiting for the next generation of hardware. Whether it’s adopting new standards from the Ultra Ethernet Consortium (UEC) or developing custom transport protocols, this NIC is built for flexibility.

Let’s break down some of the innovative features that make this possible:

  • Transport Protocol of Choice: Choose from RoCEv2, UEC RDMA, or any Ethernet protocol that suits your needs.
  • Intelligent Packet Spray: Enhances network bandwidth utilization with advanced adaptive packet spraying, crucial for managing the high bandwidth and low latency that large AI models demand.
  • Out-of-Order Packet Handling: Designed to tackle the common headaches of multipathing and packet spraying, this feature intelligently manages packet arrivals to minimize errors and boost efficiency during AI training and inference.
  • Selective Retransmission: Why resend everything when you can just focus on the lost or corrupted packets? Only the necessary data is resent, improving overall network performance.
  • Path-Aware Congestion Control: Automatically avoids congested paths and maintains near wire-rate performance, even during temporary congestion.
  • Rapid Fault Detection: Time is of the essence in AI workloads. The NIC can detect issues in milliseconds, enabling near-instantaneous failover and minimizing GPU idle time.
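To make the selective-retransmission and out-of-order-handling ideas above concrete, here is a hedged Python sketch of the general technique, not AMD's firmware: the receiver records which sequence numbers arrived (in any order), the gaps are computed, and the sender resends only those packets.

```python
# Illustrative sketch (not AMD's implementation) of selective retransmission.
# The receiver tracks which packet sequence numbers arrived, possibly out of
# order (e.g. due to packet spraying across multiple paths), and reports only
# the gaps, so the sender resends just the missing packets.

def find_missing(received: set[int], highest_sent: int) -> list[int]:
    """Return sequence numbers in [0, highest_sent] that never arrived."""
    return [seq for seq in range(highest_sent + 1) if seq not in received]

def selective_retransmit(sent: dict[int, bytes], received: set[int]) -> dict[int, bytes]:
    """Resend only lost packets instead of everything after the first loss."""
    return {seq: sent[seq] for seq in find_missing(received, max(sent))}

# Example: 8 packets sprayed across paths; packets 2 and 5 were lost.
sent = {seq: f"payload-{seq}".encode() for seq in range(8)}
received = {0, 1, 3, 4, 6, 7}   # arrived out of order; 2 and 5 missing
to_resend = selective_retransmit(sent, received)
print(sorted(to_resend))  # only the lost packets: [2, 5]
```

In a real transport this feedback would ride in SACK-style acknowledgements; the point is simply that losing one packet does not force retransmission of everything sent after it.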

The Open Ecosystem Advantage

One of the biggest perks of the Pensando Pollara 400 AI NIC is its vendor-agnostic compatibility. This means organizations can build an AI infrastructure that not only meets their current needs but also scales easily for future demands. With this open ecosystem approach, you can reduce capital expenditures without sacrificing performance or relying on costly, large-buffer switching fabrics.

Proven Performance in Major Data Centers

Last but certainly not least, the Pensando Pollara 400 AI NIC is already making waves in some of the largest scale-out data centers across the globe. Our first customer shipments have gone to leading cloud service providers, who chose this NIC for its unique programmability, high bandwidth, low-latency performance, and rich feature set. It’s clear that the future of AI infrastructure is not just about keeping up; it’s about leading the charge.

So, whether you’re a cloud service provider or an enterprise looking to ramp up your AI capabilities, the Pensando Pollara 400 AI NIC is here to help you break through barriers and redefine what’s possible. Are you ready to take your AI infrastructure to the next level?






