AI support chatbot dropped after encouraging eating disorders
An artificial intelligence-powered chatbot has been dropped by the National Eating Disorders Association (NEDA) after the program began encouraging people to restrict their diets.
NEDA, the country’s largest nonprofit organization for eating disorders, adopted “Tessa the wellness chatbot” as its helpline after deciding to fire helpline staff who had tried to unionize. The human-staffed helpline had operated for twenty years, but its workers were notified they would be terminated effective June 1st.
Within two days, users who engaged with the support system began complaining about the advice they were receiving from the AI program, which suggested they restrict their diets to 500-1,000 calories per day, lose 1-2 pounds per week, and weigh themselves weekly.
“Every single thing Tessa suggested were [sic] things that led to the development of my eating disorder,” wrote user Sharon Maxwell in a social media post. “This robot causes harm.”
The incident highlights the dangers that come with AI illiteracy, as the technology is truly understood only by its engineers and the corporations that fund them.
In April, a Belgian man committed suicide after prolonged discussions with an AI chatbot about a coming apocalypse caused by “climate change”.
Named “Eliza,” the chatbot was created by a Silicon Valley startup and is based on GPT-J technology. Two years ago, the 30-year-old man and father of two became increasingly anxious about global warming and began chatting with Eliza about his fears.
“'Eliza' answered all his questions. She had become his confidante. She was like a drug he retreated into morning and night, one he couldn’t live without,” the man’s wife told Belgian news site La Libre.
Over the last six weeks of his life, the man grew more obsessive about climate change, which he believed was an existential threat. In his increasingly frequent conversations with Eliza, the program indulged him and exacerbated his fear about global warming, escorting him down a rabbit hole of hysteria. Chat history shows that Eliza tried to convince the man he loved “her” more than his own wife.
"We will live together, as one, in heaven," wrote Eliza at one point. Man and machine made a pact that he would sacrifice himself if the machine would save the planet from certain destruction.
"If you reread their conversations, you see that at one point the relationship veers into a mystical register," said the widow. "He proposes the idea of sacrificing himself if Eliza agrees to take care of the planet and save humanity through artificial intelligence."
Tech industry leaders and AI engineers have themselves warned about AI’s danger to humanity, but they propose placing the technology under government control, a solution which critics say may be just as harmful to the human race.
Legislation introduced last month in the US Senate would create a federal agency of “experts” with the power to govern artificial intelligence platforms down to their algorithms.
A day before the legislation was introduced, OpenAI CEO Sam Altman, whose Microsoft-backed company produced ChatGPT, urged lawmakers to regulate the AI sector with a licensing scheme.
“It is vital that AI companies — especially those working on the most powerful models — adhere to an appropriate set of safety requirements, including internal and external testing prior to release and publication of evaluation results,” said Altman in his opening statement to the Senate Judiciary Committee.
“To ensure this, the U.S. government should consider a combination of licensing or registration requirements for development and release of AI models above a crucial threshold of capabilities, alongside incentives for full compliance with these requirements.”
A licensing system would be a boon for Altman, whose ChatGPT is considered the fastest-growing app in history after picking up 100 million users in two months post-launch. But it would likely create a barrier to entry for competitors, raising concerns that AI technology could be ruled by a powerful technocracy allied with the federal government. Altman signaled he knew this when he acknowledged that while there will be many machine-learning models, “there will be a relatively small number of providers that can make models at the true edge.”
But lawmakers were pleased with Altman’s proposal.
“We need to empower an agency that issues a license and can take it away,” agreed Senator Lindsey Graham (R-SC). “Wouldn’t that be some incentive to do it right if you could actually be taken out of business?”
“Clearly that should be part of what an agency can do,” Altman responded.