Elon Musk joins other tech titans to call for pause on training AI exceeding GPT-4

Elon Musk, AI experts, and industry leaders have signed an open letter calling for a six-month pause on the training of artificial intelligence systems more powerful than OpenAI's GPT-4, citing potential risks to society and humanity.

Apart from Musk, other leaders in technology and AI added their signatures to the letter, including Stability AI CEO Emad Mostaque, DeepMind researchers, and AI pioneers Stuart Russell and Yoshua Bengio. Apple co-founder Steve Wozniak signed as well. OpenAI CEO Sam Altman, however, has not signed the letter, according to a Future of Life Institute spokesperson.

The document highlights potential disruptions to politics and the economy caused by human-competitive AI systems. It also calls for collaboration between developers, policymakers, and regulatory authorities.

“Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable. This confidence must be well justified and increase with the magnitude of a system’s potential effects. 

“OpenAI’s recent statement regarding artificial general intelligence, states that ‘At some point, it may be important to get independent review before starting to train future systems, and for the most advanced efforts to agree to limit the rate of growth of compute used for creating new models.’ We agree. That point is now.

“Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

“AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities,” the letter read. 

New York University professor Gary Marcus, a signatory of the letter, shared his thoughts on the matter.

“The letter isn’t perfect, but the spirit is right: we need to slow down until we better understand the ramifications. They can cause serious harm… the big players are becoming increasingly secretive about what they are doing, which makes it hard for society to defend against whatever harms may materialize,” he said. 

The full open letter is available on the Future of Life Institute website.
