Over 1,000 experts call for halt to ‘out-of-control’ AI development



Elon Musk, Steve Wozniak, and over 1,000 other experts have called for a halt to “out-of-control” AI development.

In an open letter, the experts call for a six-month halt on the development of AI technology more powerful than OpenAI’s GPT-4 over the “profound risks to society and humanity”.

The 600-word letter is aimed at AI developers and justifies the call for a pause because “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one — not even their creators — can understand, predict, or reliably control.”

Musk was a co-founder of OpenAI, which was originally created as a nonprofit with the mission of ensuring that AI benefits humanity. Musk resigned from OpenAI’s board in 2018.

OpenAI is now firmly a for-profit company, and many believe it has strayed from its original mission. A deep partnership with Microsoft, which has invested tens of billions of dollars in the company, appears to have led OpenAI to take more risks.

Musk has publicly questioned OpenAI’s transformation.

Last week, Mozilla announced a new startup – Mozilla.ai – that aims to create an independent and open-source AI ecosystem that addresses society’s most pressing concerns about the rapidly advancing technology.

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts,” the open letter continues.

Earlier today, the UK Government unveiled its whitepaper detailing a so-called “pro-innovation” approach to AI regulation. The framework introduces measures aimed at improving safety and accountability but stops short of creating a dedicated AI regulator like the EU’s approach.

Tim Wright, Partner and specialist tech and AI regulation lawyer at law firm Fladgate, commented on the UK’s whitepaper:

“The regulatory principles set out in the whitepaper simply confirm the Government’s preferred approach which they say will encourage innovation in the space without imposing an undue burden on businesses developing and adopting AI while encouraging fair and ethical use and protecting individuals.

“Time will tell if this sector-by-sector approach has the desired effect. What it does do is put the UK on a completely different approach from the EU, which is pushing through a detailed rulebook backed up by a new liability regime and overseen by a single super AI regulator.”

As of writing, the open letter calling for a pause on potentially reckless AI developments has 1,123 signatories.

You can find the open letter here.

