China’s Zhipu AI joins 15 tech firms including Google, Microsoft in committing to develop tech safely at Seoul summit
Sixteen companies at the forefront of developing artificial intelligence pledged on Tuesday at a global meeting to develop the technology safely at a time when regulators are scrambling to keep up with rapid innovation and emerging risks.
They were backed by a broader declaration from the Group of Seven (G7) major economies, the EU, Singapore, Australia and South Korea at a virtual meeting hosted by British Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol.
South Korea’s presidential office said nations had agreed to prioritise AI safety, innovation and inclusivity.
“We must ensure the safety of AI to … protect the well-being and democracy of our society,” Yoon said, noting concerns over risks such as deepfakes.
Participants noted the importance of interoperability between governance frameworks, plans for a network of safety institutes, and engagement with international bodies, building on the agreements reached at the first meeting to better address risks.
They committed to publishing safety frameworks for measuring risks, to avoiding models whose risks could not be sufficiently mitigated, and to ensuring governance and transparency.
“It’s vital to get international agreement on the ‘red lines’ where AI development would become unacceptably dangerous to public safety,” said Beth Barnes, founder of METR, a group promoting AI model safety, in response to the declaration.
Computer scientist Yoshua Bengio, known as a “godfather of AI”, welcomed the commitments but noted that the voluntary pledges would have to be accompanied by regulation.
Since November, discussion on AI regulation has shifted from longer-term doomsday scenarios to more practical