House AI Task Force recommends teaching AI literacy to 4-year-olds despite risks
The transformative power and risks of AI
Artificial Intelligence (AI) is transforming the way we work, make decisions, and obtain information. Conversational bots, also known as chatbots, have created a much more human-like experience when interfacing with a computer. This development is not without its dangers.
Warnings about AI’s dark side
First Post journalist Palki Sharma reported on the mortal dangers that chatbots such as Character AI and Google’s Gemini can pose for children, teens, and adults who interface with them as they would with a friend on WhatsApp. Sharma discussed current lawsuits against big tech companies in which chatbots have been linked to tragic outcomes. In one instance, a teen took his life at a bot’s urging. In another, a chatbot suggested self-harm to a child and even killing the child's parents. In yet another case, a young girl adopted adult behaviors after being exposed to inappropriate content. In another incident, not associated with a lawsuit, a chatbot told a college student that as a human he had no value and that he should "please die." Noting (@3:22) that 52 million people use conversational bots today, Sharma said that the companies behind the chatbots aren't taking responsibility when something goes wrong:
Who is taking responsibility for all of this? And don’t get us wrong—this story or this problem is not just about two tech companies; it’s about all of them. It’s about the unregulated, unchecked AI boom. Chatbots are becoming more human-like, more appealing, and more accessible. At least 52 million people use conversational bots in the world today—52 million people are talking to bots. And when something goes wrong, tech giants refuse to take responsibility.
The House AI Task Force’s "light-touch" approach
The Bipartisan House AI Task Force has just completed its report on artificial intelligence, offering “Guiding principles, forward-looking recommendations, and policy proposals to ensure America continues to lead the world in responsible AI innovation.” The Task Force recommended a “‘Light Touch’ on Regulations” because of the tremendous possibilities AI holds for business and beyond, and its recommendations include teaching AI literacy in schools to children as young as 4 years old (K-12).
The report (p. iii) describes the benefits of AI, stating:
AI has tremendous potential to transform society and our economy for the better and address complex national challenges. From optimizing manufacturing to developing cures for grave illnesses, AI can greatly boost productivity, enabling us to achieve our objectives more quickly and cost-effectively.
The unregulated AI boom: concerns about global inaction
In her report tweeted by First Post (below), Sharma (@3:25) emphasized that most countries do not have laws regulating AI and conversational bots, and there are no "guardrails." Tech companies, she believes, cannot be trusted to police themselves.
But when something goes wrong tech giants refuse to take responsibility, and most countries don't even have laws to regulate them. Europe is the best place at the moment, it is the most aggressive on this front. They have an AI law. It assigns regulations based on the risks of AI products. The law was passed only this year. It's late, but as they say, “better late than never.” Elsewhere you don’t even have this. In the US tech companies are trusted to police themselves, to voluntarily put up safety measures. Obviously, it doesn’t work.
Instead of regulation, they made dramatic headlines, like the OpenAI CEO, the ChatGPT boss saying, “Don’t trust AI with life decisions.” Sure, that helps. Recently the former Google CEO talked about pulling the plug on AI. High-falutin speeches, no action. Elsewhere there is just confusion . . . A chorus of experts has warned about the risks that AI poses. Headlines prove that the threat is real, but the guardrails are missing. As of today, we are all guinea pigs in this dangerous AI experiment.
What the Task Force’s recommendations leave unaddressed
The Task Force’s recommendation for a light touch on AI regulation came after it reviewed AI primarily from a business perspective, highlighting applications and addressing issues such as chatbot errors (for example, a bot promising a discount the company then had to honor) that are comparatively minor when set against cases of chatbots suggesting self-harm or violence.
While the report did raise concerns regarding children, it did so primarily from the perspective of what is being done to children by people using AI (not by AI itself): identity theft, abuse, and sextortion (p. 35), that is, online blackmail based on the threat of exposing a victim's intimate images. It cited the case of a 16-year-old who was the target of sextortion and subsequently took his own life. It did not address the dangers of AI companions that Sharma spoke about in her report.
Yet OANN staff reporter Blake Wolff called it a “full sweeping report,” one that called for Congress to implement “a flexible sectoral regulatory framework.” He quoted task force chairs Reps. Jay Obernolte (R-Calif.) and Ted Lieu (D-Calif.), who wrote that the report was about America’s leadership in responsible AI:
This report highlights America’s leadership in its approach to responsible AI innovation while considering guardrails that may be appropriate to safeguard the nation against current and emerging threats.
Wolff included Dr. Neal Dunn’s tweet that Congress must have a light touch when it comes to regulation.
However, the guardrails referred to in the report are not the guardrails that Sharma demands. The Task Force refers to "identifying, mitigating, and responding to threat actors and cybersecurity incidents" (p. 10 of the report), such as the malicious use of AI-generated synthetic content, which is almost impossible to distinguish from the real thing (pp. 137-138 of the report).
The most popular contemporary image generation tools have some guardrails to impede misuse, such as functionality that prevents creating content that portrays public figures. Nevertheless, it is relatively easy for a layperson to remove such guardrails or acquire generative AI tools that lack sufficient guardrails. Improvements in the capabilities of generative AI systems make it more difficult to detect synthetic content. As a result, while casual observers could easily identify a fake image created with previous technologies, generative AI can produce content that challenges even the most discerning viewer.
Some potential dangers are laid out in the report:
Harms from synthetic content can have concentrated or widespread effects on an individual, organization, community, or target population. For example, synthetic content can be used to malign an individual or perpetrate fraud. In contrast, synthetic content used to distribute misleading or inaccurate information on a social media platform can be spread widely and to targeted populations.
. . .
Americans’ freedom of expression is protected by the First Amendment, even if that expression is conveyed through synthetic content. However, some types of content have been excluded from these protections. For example, abusive material produced involving real children is illegal. And although there is no federal law restricting the use of AI tools to generate nonconsensual intimate imagery, several states have considered laws to curtail the practice. (Emphasis added.)
No guardrails for companion bots
A review of the report reveals no discussion of the threats posed by conversational bots acting as 'companions,' particularly for children and teens, nor of the safeguards needed to mitigate these dangers. This omission is particularly alarming in light of real-world cases in which chatbots have caused psychological harm. Unlike other AI tools used in industry or research, chatbots interact directly with individuals, including children, often in emotionally charged or vulnerable contexts. Without sufficient safeguards, they can facilitate harm ranging from exposure to inappropriate content to psychological manipulation, as highlighted by Sharma's examples of tragic outcomes prompted by chatbots.
Despite these risks, the Task Force's report does not outline the guardrails needed for conversational bots, leaving this critical area of AI largely unchecked. While the task force consulted many "AI technical experts," as Wolff reported, those experts do not appear to have included psychologists who could assess the risks AI companions pose to children, nor does the report address areas of AI used primarily for recreation, particularly by children. This, despite Sharma's observation that "[a] chorus of experts has warned about the risks that AI poses." As Wolff reported:
Obernolte and Lieu reportedly spoke with over 100 “AI technical experts,” business leaders, academics, and government officials to provide further insight into how the U.S. government can utilize AI as a tool for societal, economic, and health applications, while also touching on the potential negative impacts if “mishandled.”
"Mishandled" appears to mean creating a new AI bureaucracy:
“We do not think it is a good idea for the United States to follow some of the other countries in the world in splitting off AI and establishing a brand new bureaucracy and a universal licensing requirement for it,” Obernolte stated. “We think that our sectoral regulators have the knowledge and the experience needed to regulate AI within those sectoral spaces.”
The report focused on concerns about human impact, freedom, and rights:
The report also urged lawmakers to ensure that the technology focuses “on human impact and human freedom.”
“Improper use of AI can violate laws and deprive Americans of our most important rights,” the report continued. “Understanding the possible flaws and shortcomings of AI models can mitigate potentially harmful uses of AI.”
Wolff concluded his report with the Task Force’s suggestion that AI be taught to children from kindergarten through 12th grade.
Additionally, the task force report also called for more focus on educating the youth on AI technology, from kindergarten through high school, anticipating that AI will eventually make its way into nearly all sectors of society.
The report discusses AI instruction beginning in early childhood (pp. 88-89):
AI learning should also be nurtured starting from the K–12 level. AI learning has traditionally required advanced programming knowledge that is typically beyond the scope of K–12 settings. However, the emergence of more age-appropriate tools and curricula has enabled educators today to improve the learning process for younger students. Several studies have found the potential to “gamify” the learning experience for younger learners, enabling them to study AI systems in their STEM courses.
Studies regarding K–12 AI curriculum in the Asia-Pacific region have not only shown a positive influence on students’ learning outcomes with various AI concepts such as machine learning, neural networks, and deep learning but have also shown improvement in students’ interest in AI courses.
Questions that were not asked
Could early exposure to AI bots negatively influence children’s development, hindering their ability to build interpersonal skills or form a strong sense of identity? Would incorporating AI education at such a young age expose children to the risks of unregulated AI experimentation? Will gamifying their lessons expose them to the risks of chatbots?