Ampere Expands AmpereOne Product Line with 256-Core Scale-up


May 16, 2024 by our News Team

Ampere Computing has released its annual update on upcoming products and milestones, highlighting its commitment to innovation and efficiency through partnerships and technologies aimed at sustainable, power-efficient computing for the Cloud and AI.

  • Strategic partnership with Qualcomm Technologies to develop joint solution for AI inferencing
  • Focus on performance and efficiency, shattering the misconception that low power equates to low performance
  • Commitment to sustainability and energy efficiency in data centers


Ampere Computing, a leading provider of sustainable and power-efficient computing solutions for the Cloud and AI, has released its annual update on upcoming products and milestones. The company’s commitment to innovation and efficiency is evident in its partnership with Qualcomm Technologies, Inc., where the two companies are collaborating on a joint AI inferencing solution that pairs Qualcomm’s Cloud AI 100 inference accelerators with Ampere CPUs.

Renee James, CEO of Ampere Computing, emphasized the increasing power requirements and energy challenges posed by AI, which have brought the company’s focus on performance and efficiency to the forefront. Ampere’s silicon design approach has shattered the misconception that low power equates to low performance. Over the past six years, Ampere has pushed the boundaries of efficiency in computing and delivered performance that surpasses legacy CPUs while maintaining energy efficiency.

James also highlighted the pressing issue of energy consumption in the rapid advancement of AI. She stressed the importance of retrofitting existing air-cooled environments with upgraded compute capabilities and building environmentally sustainable new data centers that align with available power on the grid. Ampere is dedicated to enabling this transformation.

Jeff Wittich, Chief Product Officer of Ampere Computing, outlined the company’s vision for “AI Compute,” which encompasses a wide range of workloads, from traditional cloud native applications to AI-related tasks. This includes integrating AI with cloud native applications such as data processing, web serving, and media delivery.

Ampere also announced an expansion of its AmpereOne platform: a 12-channel, 256-core CPU built on the N3 process node and ready for deployment. The platform promises exceptional performance without requiring complex platform designs.

The update included several notable news highlights:

1. Collaboration with Qualcomm Technologies to develop a joint solution featuring Ampere CPUs and Qualcomm Cloud AI 100 Ultra. This solution aims to address LLM inferencing on large generative AI models.

2. Expansion of Ampere’s 12-channel platform with the upcoming 256-core AmpereOne CPU. Ampere says this CPU will deliver over 40% more performance than any other CPU on the market today, without the need for specialized platform designs. The 192-core, 12-channel memory platform remains on track for release later this year.

3. Meta’s Llama 3 is now running on Ampere CPUs at Oracle Cloud, delivering performance comparable to an NVIDIA A10 GPU paired with an x86 CPU at only a third of the power consumption (a sketch of CPU-only LLM inference appears after this list).

4. Formation of a UCIe working group as part of the AI Platform Alliance, demonstrating Ampere’s commitment to open interface technology and its ability to incorporate customer IP into future CPUs.

5. New details on AmpereOne performance and OEM and ODM platforms, showcasing its industry-leading performance per watt. AmpereOne outperforms AMD Genoa by 50% and Bergamo by 15%, making it an ideal choice for data centers looking to refresh and consolidate infrastructure while maximizing performance per rack.

6. Announcement of the imminent shipping of new AmpereOne OEM and ODM platforms within the next few months.

7. Collaboration with NETINT on a joint solution that pairs Quadra T1U video processing units with Ampere CPUs. The solution enables simultaneous transcoding of 360 live channels and real-time subtitling of 40 streams across multiple languages using OpenAI’s Whisper model (a Whisper transcription sketch appears after this list).
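
The update does not describe how Llama 3 is actually served on Ampere CPUs at Oracle Cloud, so the following is only a minimal sketch of CPU-only LLM inference using the open-source llama-cpp-python bindings with a quantized Llama 3 model. The model file, context size, and thread count are illustrative assumptions, not figures from Ampere or Oracle.

```python
# Minimal sketch of CPU-only LLM inference with llama-cpp-python.
# Assumptions (not from the article): a locally downloaded, quantized
# Llama 3 8B GGUF file and a 64-core CPU instance; adjust path and threads.
from llama_cpp import Llama

llm = Llama(
    model_path="models/Meta-Llama-3-8B-Instruct.Q4_K_M.gguf",  # hypothetical local path
    n_ctx=4096,    # context window
    n_threads=64,  # match the CPU cores available on the instance
)

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain why CPU-only inference can be power-efficient."}],
    max_tokens=128,
)
print(output["choices"][0]["message"]["content"])
```

In a real serving setup, throughput per watt would depend on batching, quantization level, and how many worker processes share a socket; the sketch only shows the basic API flow.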
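
Likewise, the NETINT announcement does not detail the subtitling pipeline itself, but the speech-to-text step can be illustrated with the open-source openai-whisper package. The model size and audio file below are placeholders; a production system would process short chunks of a live stream rather than a saved file.

```python
# Minimal sketch of Whisper speech-to-text, the kind of step behind
# real-time subtitling. Assumptions (not from the article): a saved
# audio segment and the "small" multilingual model.
import whisper

model = whisper.load_model("small")        # multilingual Whisper model
result = model.transcribe("segment.wav")   # hypothetical audio chunk
print(result["language"], result["text"])  # detected language and transcript
```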

In addition to existing features such as Memory Tagging, QoS Enforcement, and Mesh Congestion Management, Ampere revealed a new FlexSKU feature, which lets customers use the same SKU for both scale-out and scale-up use cases, providing greater flexibility and ease of use.

Ampere Computing’s annual update showcases its commitment to delivering sustainable, power-efficient computing solutions for the Cloud and AI. Through strategic partnerships, technologies, and a focus on performance and efficiency, Ampere is poised to shape the future of computing.



Background Information


About AMD: AMD, a major player in the semiconductor industry, is known for its powerful processors and graphics solutions. The company has consistently pushed the boundaries of performance, efficiency, and user experience. With a customer-centric approach, it has cultivated a reputation for delivering high-performance solutions that cater to the needs of gamers, professionals, and general users. AMD's Ryzen series of processors has redefined desktop and laptop computing, offering impressive multi-core performance and competitive pricing that has challenged the dominance of its competitors. Complementing its processor expertise, AMD's Radeon graphics cards have earned accolades for their efficiency and exceptional graphical capabilities, making them a favored choice among gamers and content creators. The company's continued commitment to innovation shapes the client computing landscape, providing users with powerful tools for their digital endeavors.


About nVidia: NVIDIA has firmly established itself as a leader in the realm of client computing, continuously pushing the boundaries of innovation in graphics and AI technologies. With a deep commitment to enhancing user experiences, NVIDIA's client computing business focuses on delivering solutions that power everything from gaming and creative workloads to enterprise applications. Best known for its GeForce graphics cards, the company has redefined high-performance gaming, setting industry standards for realistic visuals, fluid frame rates, and immersive experiences. Complementing its gaming expertise, NVIDIA's Quadro and NVIDIA RTX graphics cards cater to professionals in design, content creation, and scientific fields, enabling real-time ray tracing and AI-driven workflows that elevate productivity and creativity. By integrating graphics, AI, and software, NVIDIA continues to shape the landscape of client computing, fostering innovation and immersive interactions in a rapidly evolving digital world.


About Oracle: Oracle Corporation is a major American multinational technology company founded in 1977 and headquartered in Austin, Texas. It is one of the world's largest software and cloud computing companies, known for its enterprise software products and services. Oracle specializes in developing and providing database management systems, cloud solutions, software applications, and hardware infrastructure. Its flagship product, the Oracle Database, is widely used by businesses and organizations worldwide. Oracle also offers a range of cloud services, including Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).


About Qualcomm: Qualcomm, a leading American global semiconductor and telecommunications equipment company, has played a pivotal role in shaping the modern technology landscape. Founded in 1985, Qualcomm has been at the forefront of innovation, particularly in the realm of mobile communications and wireless technology. The company's advancements have been instrumental in the evolution of smartphones, powering devices with their Snapdragon processors that deliver exceptional performance and efficiency. With a strong commitment to research and development, Qualcomm has enabled the growth of 5G technology, paving the way for faster and more connected experiences.


Technology Explained


CPU: The Central Processing Unit (CPU) is the brain of a computer, responsible for executing instructions and performing calculations. As the component that controls all others, it appears in everything from desktop computers and mobile devices to gaming consoles and supercomputers. CPUs process data, execute instructions, control the flow of information within a system, manage input and output, and store and retrieve data from memory, making them essential to the functioning of any computer system.


GPU: GPU stands for Graphics Processing Unit, a specialized processor designed to handle graphics-intensive tasks. GPUs render images, video, and 3D graphics, and they power gaming consoles, PCs, and mobile devices to deliver smooth, immersive experiences. Beyond graphics, they are used in medicine to create 3D models of organs and tissues, in the automotive industry to build virtual prototypes of cars, and in artificial intelligence to process large amounts of data and train complex models. Their ability to process large volumes of data quickly and efficiently makes them increasingly important across the computer industry.


LLM: A Large Language Model (LLM) is an advanced artificial intelligence system, typically built on transformer architectures such as those behind GPT-3.5, designed to comprehend and produce human-like text at scale. LLMs handle a wide range of natural language understanding and generation tasks, including answering questions, generating creative content, and delivering context-aware responses to textual inputs. They are trained on vast datasets to grasp the nuances of language, making them valuable tools for applications such as chatbots, content generation, and language translation.




