Intel's Gaudi 2 AI accelerators, paired with community-based software and industry-standard Ethernet networking, simplify generative AI development and scale flexibly, as shown by strong results in the MLPerf Training v4.0 benchmark. The upcoming Gaudi 3 accelerator promises even greater performance and accessibility.
1. Intel's Gaudi 2 AI accelerators showcased impressive performance and scalability in the MLPerf Training v4.0 benchmark.
2. The Gaudi 2 system trained in Intel Tiber Developer Cloud, featuring 1,024 accelerators, demonstrated the power and cloud capacity of Intel's AI products.
3. Intel's Gaudi 2 and upcoming Gaudi 3 accelerators offer cost-efficient and accessible solutions for enterprises looking to adopt generative AI technology.
MLCommons recently published the results of its industry AI performance benchmark, MLPerf Training v4.0. Among the standout performers was Intel, showcasing the power and versatility of its Gaudi 2 AI accelerators. The combination of community-based software and industry-standard Ethernet networking has simplified the development of generative AI (GenAI) and enabled flexible scalability of AI systems.
For the first time, Intel submitted results on a large Gaudi 2 system trained in Intel Tiber Developer Cloud, featuring a whopping 1,024 Gaudi 2 accelerators. This demonstration aimed to highlight the performance, scalability, and cloud capacity of Intel’s Gaudi 2 system for training MLPerf’s GPT-3 175B-parameter benchmark model.
Zane Ball, Intel’s corporate vice president and general manager of DCAI Product Management, emphasized the significance of Intel Gaudi in addressing the gaps in today’s generative AI enterprise products. He stated, “The latest MLPerf results published by MLCommons illustrate the unique value Intel Gaudi brings to market as enterprises and customers seek more cost-efficient, scalable systems with standard networking and open software, making GenAI more accessible to more customers.”
The demand for GenAI is on the rise, but many businesses face challenges related to cost, scale, and development requirements. Last year, only 10% of enterprises successfully moved GenAI projects into production. Intel’s AI products aim to tackle these challenges head-on. The Intel Gaudi 2 accelerator presents an accessible and scalable solution that has already proven its ability to train large language models (LLMs) ranging from 70 billion to 175 billion parameters. And there’s more to come with the upcoming release of the Intel Gaudi 3 accelerator, which promises even greater performance, openness, and choice for enterprise GenAI.
The MLPerf results confirm that Gaudi 2 remains the only MLPerf-benchmarked alternative to NVIDIA’s H100 for AI compute. Intel’s GPT-3 results, achieved on the Tiber Developer Cloud, showcased a time-to-train (TTT) of 66.9 minutes on a system equipped with 1,024 Gaudi 2 accelerators. This impressive result demonstrates Gaudi 2’s scaling performance when dealing with ultra-large LLMs within a developer cloud environment.
The benchmark suite also introduced a new measurement: fine-tuning the Llama 2 70B parameter model using low-rank adaptation (LoRA). Fine-tuning LLMs is a common task for many customers and AI practitioners, making it a relevant benchmark for everyday applications. Intel’s submission achieved a time-to-train of 78.1 minutes on eight Gaudi 2 accelerators. To optimize memory efficiency and scaling during large model training, Intel leveraged open-source software from Optimum Habana, including DeepSpeed ZeRO-3 for memory-efficient sharding and Flash-Attention-2 to accelerate attention mechanisms. The benchmark task force, led by engineering teams from Intel’s Habana Labs and Hugging Face, played a crucial role in developing the reference code and benchmark rules.
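LoRA makes fine-tuning cheap because the pretrained weight matrix W stays frozen and only a low-rank correction, the product of two thin matrices B and A, is trained. A minimal NumPy sketch of that idea (hypothetical layer sizes and rank, not the Optimum Habana or PEFT API):

```python
import numpy as np

d_in, d_out, r = 512, 512, 8  # hypothetical layer width and LoRA rank

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01  # trainable down-projection
B = np.zeros((d_out, r))                   # trainable up-projection, starts at zero

def lora_forward(x):
    # Adapted layer: base output plus the low-rank correction B @ (A @ x)
    return W @ x + B @ (A @ x)

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapter is a no-op before training begins
assert np.allclose(lora_forward(x), W @ x)

# Trainable parameters shrink from d_out*d_in to r*(d_in + d_out)
full, lora = d_out * d_in, r * (d_in + d_out)
print(f"full: {full:,}  lora: {lora:,}  reduction: {full / lora:.0f}x")
```

At these illustrative sizes the adapter trains 8,192 parameters instead of 262,144, a 32x reduction; on a 70B-parameter model the same structure is what lets eight accelerators handle the fine-tuning task.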
Intel Gaudi offers customers significant value in the field of AI. Historically, high costs have deterred many enterprises from entering the market. However, Gaudi is starting to change that narrative. At Computex, Intel announced that a standard AI kit comprising eight Intel Gaudi 2 accelerators with a universal baseboard (UBB) would be available to system providers for $65,000. This price is estimated to be one-third the cost of comparable competitive platforms. Furthermore, Intel plans to release a kit containing eight Intel Gaudi 3 accelerators with a UBB for $125,000, which is roughly two-thirds the cost of comparable alternatives.
The momentum behind Intel Gaudi is growing, as more and more customers recognize the value it brings in terms of price-performance advantages and accessibility. For example, Naver, a South Korean cloud service provider and leading search engine, is leveraging Gaudi to build a new AI ecosystem that lowers barriers to wide-scale LLM adoption, reducing development costs and project timelines for its customers. Similarly, AI Sweden, an alliance between the Swedish government and private business, utilizes Gaudi for fine-tuning with domain-specific municipal content, improving operational efficiencies and enhancing public services.
To support customers in accessing Gaudi, Intel offers the Tiber Developer Cloud, a unique, managed, and cost-efficient platform for developing and deploying AI models, applications, and solutions. This platform provides increased access to Gaudi for various AI compute needs. Seekr, an Intel customer, recently launched SeekrFlow, an AI development platform for trusted AI, using Intel’s developer cloud. Seekr reported cost savings ranging from 40% up to 400% compared to on-premise systems with another vendor’s GPUs and another cloud service provider. SeekrFlow also demonstrated 20% faster AI training and 50% faster AI inference than on-premise alternatives.
Looking ahead, Intel plans to submit MLPerf results based on the Intel Gaudi 3 AI accelerator in the upcoming inference benchmark. The Gaudi 3 accelerators are expected to deliver a significant performance leap for AI training and inference on popular LLMs and multimodal models. These accelerators will be available from original equipment manufacturers in the fall of 2024. With Intel’s ongoing commitment to advancing AI technology, the future looks promising for both enterprises and customers seeking innovative and accessible AI solutions.
Background Information
About Intel:
Intel Corporation, a global technology leader, is known for its semiconductor innovations that power computing and communication devices worldwide. As a pioneer in microprocessor technology, Intel has left an indelible mark on the evolution of computing with its processors that drive everything from PCs to data centers and beyond. With a history of advancements, Intel's relentless pursuit of innovation continues to shape the digital landscape, offering solutions that empower businesses and individuals to achieve new levels of productivity and connectivity.
About NVIDIA:
NVIDIA has firmly established itself as a leader in the realm of client computing, continuously pushing the boundaries of innovation in graphics and AI technologies. With a deep commitment to enhancing user experiences, NVIDIA's client computing business focuses on delivering solutions that power everything from gaming and creative workloads to enterprise applications. Known for its GeForce graphics cards, the company has redefined high-performance gaming, setting industry standards for realistic visuals, fluid frame rates, and immersive experiences. Complementing its gaming expertise, NVIDIA's Quadro and NVIDIA RTX graphics cards cater to professionals in design, content creation, and scientific fields, enabling real-time ray tracing and AI-driven workflows that elevate productivity and creativity to unprecedented heights. By seamlessly integrating graphics, AI, and software, NVIDIA continues to shape the landscape of client computing, fostering innovation and immersive interactions in a rapidly evolving digital world.
Event Info
About Computex:
Computex, held annually in Taipei, Taiwan, stands as one of the world's leading technology trade shows, showcasing cutting-edge innovations in computing hardware, software, and emerging technologies. With a focus on industry trends and product launches, it serves as a pivotal platform for tech giants and startups alike to unveil their latest advancements and forge key partnerships, attracting a global audience of industry professionals, enthusiasts, and media representatives.
Technology Explained
LLM: A Large Language Model (LLM) is a highly advanced artificial intelligence system, often based on complex architectures like GPT-3.5, designed to comprehend and produce human-like text on a massive scale. LLMs possess exceptional capabilities in various natural language understanding and generation tasks, including answering questions, generating creative content, and delivering context-aware responses to textual inputs. These models undergo extensive training on vast datasets to grasp the nuances of language, making them invaluable tools for applications like chatbots, content generation, and language translation.