WEF concerned about effects of AI on children's development
WEF addresses AI pros and cons for kids
While the U.S. House AI Task Force recently proposed teaching AI literacy to children as early as kindergarten, it largely sidestepped the broader risks generative AI poses to children. In contrast, the World Economic Forum (WEF) has tackled these concerns head-on, highlighting both the opportunities and threats AI presents to young users. In its article, "How will generative AI affect children? The need for answers has never been more urgent," the WEF emphasizes the pressing need for safeguards to address AI's profound implications for children's development, safety, and well-being.
What is generative AI?
Generative AI, the forum explains, is,
a machine-learning subset of AI [that] learns from vast amounts of data to discover patterns and generate new, similar data. It’s often used to produce content mimicking human output – be it text, images or even computer code – but it can also complete complex planning tasks, support the development of new medicines and boost the way robots perform in unprecedented ways.
Children online without supervision cause for concern
Chatbots are a form of generative AI and, as the WEF notes, children are online more than any other age group and often hide their use of generative AI from their teachers and parents.
We don’t know how many children use generative AI but initial surveys suggest it is more than adults. One small poll in the United States revealed that while only 30% of parents had used ChatGPT, 58% of their 12-18-year-old children had done so and hid it from parents and teachers. . . .
AI is already part of children’s lives in the form of recommendation algorithms or automated decision-making systems and the industry embrace of generative AI indicates that it could quickly become a key feature of children’s digital environment. It is embedded in various ways, including via digital and personal assistants and search engine helpers.
Platforms popular with children, like Snapchat, have already integrated AI chatbots. At the same time, Meta plans to add AI agents into its product range used by over 3 billion people daily, including Instagram and WhatsApp.
Benefits of AI for children
Among the benefits of AI for children are,
. . . potential opportunities, such as homework assistance, easy-to-understand explanations of difficult concepts, and personalized learning experiences that can adapt to a child’s learning style and speed. Children can use AI to create art, compose music and write stories and software (with no or low coding skills), fostering creativity.
AI can also help disabled children to "interface and co-create with digital systems in new ways through text, speech or images." It may also "help detect health and developmental issues early."
Synthetic technology poses risks
However, the WEF also notes the issues that must be addressed. One problem is that it is often impossible to distinguish between real faces and AI-generated ones, as the WEF's collage of real and synthetic faces illustrates. This can provide an opportunity for bad actors to harm children. Children, the report notes, are also vulnerable to the risks of AI-generated mis/disinformation while their cognitive abilities are still developing.
But generative AI could also be used by bad actors or inadvertently cause harm or society-wide disruptions at the cost of children’s prospects and well-being. Generative AI has been shown to instantly create text-based disinformation indistinguishable from, and more persuasive than, human-generated content. AI-generated images are impossible to tell apart from – and, in some cases, perceived as more trustworthy than – real faces (see Figure 1). These abilities could increase the scale and lower the cost of influence operations. Children are particularly vulnerable to the risks of mis/disinformation as their cognitive capacities are still developing. (Emphasis added.)
Effect on children's development
The WEF also raised concerns about the effect long-term use of AI may have on children's development.
Longer-term usage raises questions for children. For instance, given the human-like tone of chatbots, how can interaction with these systems impact children’s development? Early studies indicate children’s perceptions and attributions of intelligence, cognitive development and social behaviour may be influenced. (Emphasis added.)
The WEF asks how inherent biases in AI may shape a child's worldview and raises concerns about protecting the safety and privacy of data that may be shared.
Also, given the inherent biases in many AI systems, how might these shape a child’s worldview? Experts warn that chatbots claiming to be safe for children may need more rigorous testing. And then, as children interact with generative AI systems and share their personal data in conversation and interactions, what does this mean for children’s privacy and data protection?
While the opportunities and risks reach much further, these examples illustrate the wide-ranging implications of AI. As children will engage with AI systems throughout their lives, the interactions during their formative years could have lasting consequences, underscoring the need for a forward-thinking approach from policymakers, regulatory bodies, AI developers and other stakeholders.
Urgent call for research and protective policies
The forum underscored the urgent need to conduct research and implement protective policies.
Policymakers, tech companies and others working to protect children and future generations need to act urgently. They should support research on the impacts of generative AI and engage in foresight – including with children – for better anticipatory governance responses. There needs to be greater transparency, responsible development from generative AI providers . . .
WEF concerned for children's safety?
Although the WEF and UN Secretary-General António Guterres advocate for urgent global action, there may be differing opinions on whether these organizations should spearhead the effort or if others such as national and local governments should be the ones to develop laws and policies to protect their citizens.
The globalist agenda of both the WEF and UN raises concerns that the actions they advocate may restrict the freedom of expression of citizens around the world under the guise of protecting children from AI. Since AI may eventually become part of many social media posts and websites, restrictions on AI-generated messages or media that may reach children could be used to ban, shadow ban or demonetize users presenting information at odds with government narratives on public health and other issues.
Related articles:
- House AI Task Force recommends teaching AI to 4-year-olds despite risks
- Teenager takes his life after being urged to do so by a chatbot - the risks of AI 'companions'
- iPads and Chromebooks linked to ADHD and Autism-like symptoms in children
- Mental illness and suicide in children and adolescents linked to smartphones