Major AI companies form group to research, keep control of AI


[Image: logos of the four companies. Caption: The four companies say they launched the Frontier Model Forum to ensure “the safe and responsible development of frontier AI models.” Credit: Financial Times]


Four of the world’s most advanced artificial intelligence companies have formed a group to research increasingly powerful AI and establish best practices for controlling it, as public anxiety and regulatory scrutiny over the technology’s impact increase.

On Wednesday, Anthropic, Google, Microsoft, and OpenAI launched the Frontier Model Forum, with the aim of “ensuring the safe and responsible development of frontier AI models.”

In recent months, the US companies have rolled out increasingly powerful AI tools that produce original content in image, text, or video form by drawing on a bank of existing material. The developments have raised concerns about copyright infringement and privacy breaches, as well as fears that AI could ultimately replace humans in a range of jobs.

“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Brad Smith, vice-chair and president of Microsoft. “This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.”

Membership of the forum is limited to the handful of companies building “large-scale machine-learning models that exceed the capabilities currently present in the most advanced existing models,” according to its founders.

That suggests its work will center on the potential risks stemming from considerably more powerful AI, as opposed to answering questions around copyright, data protection, and privacy that are pertinent to regulators today.

© 2023 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.
