Google, Meta, OpenAI pledge safe development of AI at Seoul summit
SEOUL (AP) -- The world's leading artificial intelligence companies pledged at the start of a mini-summit on AI to develop the technology safely, including pulling the plug if they can't rein in the most extreme risks.
World leaders were expected to hammer out further agreements on artificial intelligence as they gathered virtually Tuesday to discuss AI's potential risks, as well as ways to promote its benefits and innovation.
The AI Seoul Summit is a low-key follow-up to November's high-profile AI Safety Summit at Bletchley Park in the United Kingdom, where participating countries agreed to work together to contain the potentially "catastrophic" risks posed by breakneck advances in AI.
The two-day meeting -- co-hosted by the South Korean and U.K. governments -- also comes as major tech companies like Meta, OpenAI and Google roll out the latest versions of their AI models.
They're among 16 AI companies that made voluntary commitments to AI safety as the talks got underway, according to a British government announcement. The companies, which also include Amazon, Microsoft, France's Mistral AI, China's Zhipu.ai and G42 of the United Arab Emirates, vowed to ensure the safety of their most cutting-edge AI models with promises of accountable governance and public transparency.
The pledge includes publishing safety frameworks that set out how the companies will measure the risks of these models. In extreme cases where risks are severe and "intolerable," the companies will have to hit the kill switch, halting development or deployment of their models and systems if the risks can't be mitigated.
Since the U.K. meeting last year, the AI industry has "increasingly focused on the most pressing concerns, including mis- and disinformation, data security, bias and