Google changes principles on use of AI for military purposes


February 5, 2025 by our News Team

Google updates its principles for the use of AI, emphasizing responsible development and compliance with international laws and human rights, in light of its expanding AI ambitions and past controversies.

  • Google has updated its principles for the use of AI, showing a commitment to responsible development.
  • The updated principles emphasize human evaluations and due diligence to ensure compliance with international law and human rights.
  • Google's revisions serve as a reminder for companies to prioritize ethical considerations in AI development and usage.


Google has made a significant update to its principles for the use of AI, marking one of the biggest changes since their initial publication in 2018. These principles serve as a guide for how the company intends to develop its AI technologies. While the document has undergone several revisions in the past, the most recent update caught the attention of The Washington Post.

Previously, Google’s principles explicitly stated that the company would not develop AI technology for use in weapons or surveillance. However, the updated version now emphasizes “responsible use and development,” highlighting the company’s commitment to implementing human evaluations and due diligence to ensure compliance with international law and human rights.

At first glance, this change may seem minor, but it carries significant weight in the overall context of the company’s principles. Previous versions explicitly prohibited the use of Google’s AI technology in weapons, in international conflicts, and in surveillance tools that could be exploited by authoritarian regimes or for abusive purposes.

It’s worth noting that these updates may be linked to Google’s expanding AI ambitions as the company seeks to apply its technology across various markets. The principles were initially established in 2018 following the “Project Maven” controversy, in which Google had planned to collaborate with the US Department of Defense, using AI technology to analyze drone footage, much of which was captured in war zones.

By revising its principles, Google aims to demonstrate its commitment to responsible AI development and usage. The company acknowledges the importance of upholding international laws and human rights, ensuring that its AI technologies are deployed ethically.

This update serves as a reminder that even tech giants like Google are constantly reevaluating and refining their approaches to AI. As the field continues to evolve, it is crucial for companies to prioritize ethical considerations and remain accountable for the impact of their technologies.

So, what does this mean for the future of AI at Google? Will other tech companies follow suit and revise their own AI principles? Only time will tell. But for now, it’s encouraging to see Google taking steps towards responsible AI development and setting a precedent for the industry as a whole.

About Our Team

Our team comprises industry insiders with extensive experience in computers, semiconductors, games, and consumer electronics. With decades of collective experience, we’re committed to delivering timely, accurate, and engaging news content to our readers.

Background Information


About Google:

Google, founded by Larry Page and Sergey Brin in 1998, is a multinational technology company known for its internet-related services and products. Initially known for its search engine, Google has since expanded into various domains including online advertising, cloud computing, software development, and hardware devices. With its innovative approach, Google has introduced influential products such as Google Search, Android OS, Google Maps, and Google Drive. The company's commitment to research and development has led to advancements in artificial intelligence and machine learning.
