Intel joins MLCommons AI Safety Working Group, advancing AI safety efforts.


October 26, 2023 by our News Team

Intel has joined the MLCommons-organized AIS working group to collaborate with industry peers in developing a platform and pool of tests for AI safety benchmarks to ensure responsible AI deployment.

  • Intel is committed to advancing AI responsibly
  • Intel is contributing its expertise and knowledge to the development of AI safety benchmarks
  • Intel is collaborating with industry peers to establish a common set of best practices and benchmarks for evaluating the safe use of LLM-based generative AI tools


Intel has made an exciting announcement today, revealing its participation in the new MLCommons AI Safety (AIS) working group. This group consists of industry experts and academics in the field of artificial intelligence, and Intel is proud to be a founding member. The aim of this collaboration is to create a flexible platform for benchmarks that measure the safety and risk factors of AI tools and models.

As AI technology continues to advance, it is crucial to have standardized benchmarks that can evaluate the safety of these powerful tools. The development of AI safety benchmarks by the AIS working group will play a vital role in ensuring responsible AI deployment in our society. Intel, being committed to advancing AI responsibly, will contribute its expertise and knowledge to this important endeavor.

Deepak Patil, Intel’s corporate vice president and general manager of Data Center AI Solutions, emphasizes the company’s holistic approach to addressing safety concerns in AI development. Intel recognizes the significance of responsible training and deployment of large language models (LLMs) and tools to mitigate potential risks. By joining the AIS working group, Intel aims to collaborate with industry peers in defining new processes, methods, and benchmarks that improve AI safety across the board.

The MLCommons-organized AIS working group consists of a diverse group of AI experts who will work together to develop a platform and pool of tests for AI safety benchmarks. Intel plans to share its own AI safety findings, best practices, and processes for responsible development. This includes practices like red-teaming and safety tests, which are integral to ensuring the safe development and deployment of generative AI tools that leverage LLMs.

The initial focus of the AIS working group will be on developing safety benchmarks specifically for LLMs. This builds upon the groundwork laid by researchers at Stanford University’s Center for Research on Foundation Models and its Holistic Evaluation of Language Models (HELM). By sharing its rigorous review processes used internally for AI model development, Intel aims to establish a common set of best practices and benchmarks for evaluating the safe use of LLM-based generative AI tools.

Intel’s participation in the AIS working group is a testament to its ongoing commitment to responsible AI advancement. By collaborating with industry peers and contributing its expertise, Intel aims to drive the development of AI safety standards and ensure that AI technologies are developed and deployed in a manner that prioritizes ethical considerations and human rights implications.

Intel joins MLCommons AI Safety Working Group, advancing AI safety efforts.

About Our Team

Our team comprises industry insiders with extensive experience in computers, semiconductors, games, and consumer electronics. With decades of collective experience, we’re committed to delivering timely, accurate, and engaging news content to our readers.

Background Information


About Intel:

Intel Corporation, a global technology leader, is for its semiconductor innovations that power computing and communication devices worldwide. As a pioneer in microprocessor technology, Intel has left an indelible mark on the evolution of computing with its processors that drive everything from PCs to data centers and beyond. With a history of advancements, Intel's relentless pursuit of innovation continues to shape the digital landscape, offering solutions that empower businesses and individuals to achieve new levels of productivity and connectivity.

Intel website  Intel LinkedIn
Latest Articles about Intel

Technology Explained


LLM: A Large Language Model (LLM) is a highly advanced artificial intelligence system, often based on complex architectures like GPT-3.5, designed to comprehend and produce human-like text on a massive scale. LLMs possess exceptional capabilities in various natural language understanding and generation tasks, including answering questions, generating creative content, and delivering context-aware responses to textual inputs. These models undergo extensive training on vast datasets to grasp the nuances of language, making them invaluable tools for applications like chatbots, content generation, and language translation.

Latest Articles about LLM




Leave a Reply