Google removes pledge to not use AI for weapons, surveillance
Google has removed a pledge to abstain from using AI for potentially harmful applications, such as weapons and surveillance, according to the company's updated "AI Principles."
A prior version of the principles said Google would not pursue "weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people," or "technologies that gather or use information for surveillance violating internationally accepted norms."
Those objectives are no longer displayed on its AI Principles website.
"There's a global competition taking place for AI leadership within an increasingly complex geopolitical landscape," reads a Tuesday blog post co-written by Demis Hassabis, CEO of Google DeepMind. "We believe democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights."
The updated principles reflect Google's growing ambitions to offer its AI technology and services to more users and clients, including governments. The change is also in line with increasing rhetoric from Silicon Valley leaders about a winner-take-all AI race between the U.S. and China, with Palantir CTO Shyam Sankar saying Monday that "it's going to be a whole-of-nation effort that extends well beyond the DoD in order for us as a nation to win."
The previous version of the company's AI principles said Google would "take into account a broad range of social and economic factors." The new AI principles state Google will "proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides."
In its Tuesday blog post, Google said it will "stay consistent with widely accepted principles of international law and human rights."