AI investor joins tech leaders to warn about AI dangers
Entrepreneur Arram Sabeti last week issued a public warning about the dangers of artificial intelligence, joining an escalating call by tech industry leaders to slow the rapidly accelerating development of AI systems. Sabeti’s message focused specifically on artificial general intelligence (AGI), which, rather than being built to perform specific tasks, is designed to perform any task at least as well as a human.
Sabeti, the founder of ZeroCater, who says he has invested in two AGI companies, said that “almost all” of his friends who work as researchers at AI companies such as DeepMind, OpenAI, Anthropic, and Google Brain are “worried”.
He tweeted, “I’m scared of AGI. It’s confusing how people can be so dismissive of the risks.
“I'm an investor in two AGI companies and friends with dozens of researchers working at DeepMind, OpenAI, Anthropic, and Google Brain. Almost all of them are worried.”
“Imagine building a new type of nuclear reactor that will make free power,” Sabeti continued in the lengthy tweet. “People are excited, but half of nuclear engineers think there’s at least a 10% chance of an ‘extremely bad’ catastrophe, with safety engineers putting it over 30%.”
The entrepreneur cited a poll of more than 700 AI experts conducted last year, in which about half said there is at least a 10% chance that AI could lead to human extinction.
In another poll, of 44 people working in AI safety, respondents saw a strong chance (over 30%) that the future will turn out “drastically less” good than it could have been as a result of inadequate AI safety research.
Geoffrey Hinton, considered “the godfather of AI,” said the possibility of AI wiping out humanity is “not inconceivable”.
The late scientist Stephen Hawking warned in 2014 that “[t]he development of full artificial intelligence could spell the end of the human race.”
His sentiment was echoed that same year by billionaire Elon Musk, himself a co-founder of the AI company OpenAI, who cautioned that “with artificial intelligence we are summoning the demon.”
“We need to be super careful with A.I. Potentially more dangerous than nukes,” Musk wrote.
Musk, along with Apple co-founder Steve Wozniak, was among the tech industry leaders who signed an open letter last month calling for a moratorium on the development of AI systems, which they said has become “out of control” and “dangerous”.
The letter warns that “recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
“Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?” the letter asks.
“Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4 [OpenAI’s latest large language model]. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”
AI researcher Eliezer Yudkowsky, who has spent decades studying artificial intelligence and co-founded the Machine Intelligence Research Institute (MIRI), responded to the letter Wednesday by urging industry leaders to “shut it all down” and halt AI development indefinitely.
“Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow,” Yudkowsky wrote in a Time article Wednesday. “A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.
“If somebody builds a too-powerful AI, under present conditions, I expect that every single member of the human species and all biological life on Earth dies shortly thereafter.”
OpenAI CEO Sam Altman has reportedly compared OpenAI to the Manhattan Project, which created the atomic bomb, describing it as a “project on the scale of OpenAI – the level of ambition we aspire to.” According to the New York Times, Altman is “certainly determined to see how it all plays out” and, as his mentor puts it: “He likes power.”