More AI experts warn about human extinction
Artificial intelligence experts and other tech industry leaders have signed another open letter warning that unbridled AI development poses a risk of human extinction.
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” read the short statement.
Signatories include the “Godfather of AI” Geoffrey Hinton, OpenAI CEO Sam Altman, OpenAI CTO Mira Murati, Google DeepMind CEO Demis Hassabis, Google DeepMind Chief AGI Scientist and Co-Founder Shane Legg, Microsoft CTO Kevin Scott, Microsoft Chief Scientific Officer Eric Horvitz, and dozens of other AI scientists and executives, along with engineers and academic researchers. The list also includes Grimes, the celebrated musician and artist and paramour of billionaire Elon Musk.
Musk, along with other tech titans such as Apple co-founder Steve Wozniak, was a signatory to a longer but similar letter in March. That letter, which raised the alarm about the power of AI, called for a six-month moratorium on AI development as “a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.”
“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control,” hundreds of tech leaders wrote in the letter.
AI expert Eliezer Yudkowsky, who has spent decades researching the alignment of artificial intelligence with human values and founded the Machine Intelligence Research Institute (MIRI), responded to the letter by urging tech industry captains to “shut it all down” and end AI development indefinitely.
“If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter,” wrote Yudkowsky.
Last month, “Godfather of AI” Geoffrey Hinton resigned from his position at Google because of the threat AI now poses, adding that a part of him regrets his work in the field.
In a poll of more than 700 AI experts conducted last year, about half said there is at least a 10% chance that AI will lead to human extinction.
In another poll, of 44 people working in AI safety, respondents warned there is a strong chance (30% or greater) that the overall value of the future will be “drastically less” than it could have been as a result of inadequate AI safety research.
The late scientist Stephen Hawking warned in 2014 that “[t]he development of full artificial intelligence could spell the end of the human race.”
However, many AI experts are calling for the government to regulate the technology, a prospect that comes with its own concerns. Legislation currently making its way through the Senate would create a federal agency of “experts” with the power to govern artificial intelligence platforms down to their algorithms.
“We need to empower an agency that issues a license and can take it away,” said Senator Lindsey Graham (R-SC) last month. “Wouldn’t that be some incentive to do it right if you could actually be taken out of business?”