No 82: Schools, check your tools
How we learn with AI, ChatGPT for teens, AI's environmental impact, computing challenge, AI toys, hopefulness and much more...
What’s happening
To what extent do five GenAI tools used in the classroom – Character.AI, Grammarly, MagicSchool AI, Microsoft Copilot and Mind’s Eye – uphold key rights under the United Nations Convention on the Rights of the Child?
Important research that every educator using AI tools with students should take note of came out last week from Professor Sonia Livingstone’s team at the Digital Futures for Children centre. The report, A child rights audit of GenAI in edtech, found that some AI tools empowered and helped children, supporting their right to participate. However, all five were found to share children’s data in ways that do not comply with GDPR, exposing children to commercial exploitation. Some also provided misleading or out-of-context information, raising significant safeguarding concerns.
Other problems were tool-specific: MagicSchool AI (marketed as “best for safety”) exposed children as young as 12 to tracking by erotic and friend-finder websites; Character.AI shared inconsistent information and is known to trigger severe dependencies; and Grammarly falsely accused students of plagiarism while allowing companies such as Facebook and Google to track their activity.
The authors conclude:
“The benefits of GenAI in EdTech can only be fully achieved when learning is recognised not as an isolated outcome, but as a process supported by interconnected rights. This means mandatory child rights and data protection impact assessments, accessible safeguards, and meaningful participation of children in decision-making. Without these, children’s right to education can be undermined, and GenAI risks deepening inequalities and exploiting children, rather than supporting their learning.”
Concerned by the findings, Matthew Wemyss did his own checks of his school’s tools and has summarised the results in a LinkedIn post. He warns teachers to check for commercial cookies firing by default and suggests using Ghostery for the purpose – see Give it a try, below.
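If you want a rough programmatic first pass before (or alongside) a browser extension, you can list the cookies a tool sets on a cold page load. A minimal Python sketch – the URL is a placeholder, and note that this only catches cookies set in HTTP response headers, not trackers injected by JavaScript, which is why an in-browser tool like Ghostery gives a fuller picture:

```python
# Rough first-pass check: list the cookies a site sets over HTTP on a cold
# load, before any consent banner is clicked. The URL is a placeholder -
# swap in the landing page of the tool you want to audit.
import requests

url = "https://edtech-tool.example.com"  # hypothetical address

response = requests.get(url, timeout=10)

if not response.cookies:
    print("No cookies set via HTTP headers on first load.")

for cookie in response.cookies:
    # Anything beyond a bare session cookie on a cold load deserves scrutiny,
    # especially long-lived cookies with analytics or marketing-style names.
    print(f"{cookie.name}  domain={cookie.domain}  expires={cookie.expires}")
```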
**If you’d like to contribute an account from your own practice, share your thoughts on digital developments in education in a Connected Learning Q&A, or have any tips or feedback – please drop us a line on email or LinkedIn.**
AI roundup
How we learn with AI
Anthropic’s report this week on how and where people are using Claude shows that educational use of the AI tool is growing rapidly. The share of what Anthropic calls ‘educational instruction tasks’ has risen by more than 40% in relative terms (from 9% to 13% of all conversations). Recent research into how ChatGPT is used shows similar findings: around 10.2% of all ChatGPT messages – roughly two billion messages a week, sent by over 700 million weekly users – are requests for help with learning. Philippa Hardman has explored the impact of this in a detailed post, asking “what does great learning actually look like, and how should AI evolve to support it?” She suggests 10 evidence-based principles that convert “feels like learning” into actual gains in memory, understanding and transfer:
“The usage trend is encouraging – more Asking and a lot of tutoring – but there’s a catch. Many interactions still optimise for ease: quick answers, instant drafts, heavy scaffolds. These interactions can feel productive in the moment yet often fail to produce learning that transfers to new problems.”
Google updates
Google made three major updates to its Gemini-powered products this week:
The Gemini app now accepts audio files
Search can handle five new languages
NotebookLM creates reports in the form of blog posts, study guides, quizzes and more
Google also launched AI Quests, an interactive learning experience for children aged 11 to 14 that puts students in the shoes of an AI researcher. Guided by an in-game mentor, Professor Skye, learners are challenged to solve real-world problems. The first quest tasks students with using rainfall and river flow data to predict flooding, inspired by Google's flood forecasting research.
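To give a flavour of the kind of task involved – this is our own illustrative sketch, not Google’s quest material – a toy model might flag flood risk from rainfall and river flow readings:

```python
# Illustrative only: a toy flood-risk flag from rainfall and river-flow
# readings, in the spirit of the first quest (not Google's actual model).
# Thresholds and sample data are invented for the example.

def flood_risk(rainfall_mm: float, river_flow_m3s: float) -> str:
    """Classify flood risk from 24h rainfall (mm) and river flow (m3/s)."""
    score = rainfall_mm / 50 + river_flow_m3s / 200  # crude weighted score
    if score >= 2.0:
        return "high"
    if score >= 1.0:
        return "medium"
    return "low"

# Invented sample readings for three days
for day, (rain, flow) in enumerate([(12, 80), (45, 210), (90, 420)], start=1):
    print(f"Day {day}: rainfall={rain}mm flow={flow}m3/s -> {flood_risk(rain, flow)}")
```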
ChatGPT for teens
OpenAI is building a ChatGPT for teenagers, pledging that if its tools can't confidently predict a person's age, ChatGPT will default to its under-18 version. This teen ChatGPT “puts much of the responsibility in the hands of parents and caregivers”: through linked accounts, parents can restrict how ChatGPT responds to their teen using built-in, age-appropriate rules, set blackout hours, and opt in to notifications if "the system detects their teen is in a moment of acute distress". The age verification change comes in the wake of legal action from the family of a 16-year-old who killed himself in April after months of conversations with the chatbot.
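The stated fallback policy is simple to express, even if age prediction itself is hard. A sketch of the logic as described – purely illustrative, since OpenAI hasn't published its implementation, and the threshold and function names here are invented:

```python
# Sketch of the stated policy: when age can't be predicted confidently,
# default to the under-18 experience. Illustrative only - the threshold
# and names are invented, not OpenAI's actual system.

CONFIDENCE_THRESHOLD = 0.9  # invented value

def select_experience(predicted_age: int, confidence: float) -> str:
    """Return which version of ChatGPT to serve."""
    if confidence < CONFIDENCE_THRESHOLD or predicted_age < 18:
        return "under-18"  # safe default: age-appropriate rules apply
    return "standard"

print(select_experience(predicted_age=22, confidence=0.95))  # standard
print(select_experience(predicted_age=22, confidence=0.60))  # under-18
```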
Are we getting AI’s environmental impact all wrong?
The environmental impact of AI is an important topic whose ups and downs we’ve covered in previous newsletters, from Andy Masley’s cheat sheet to energy transparency (or not). This detailed article by James Ball picks up on an argument being made with increasing frequency: that it’s not our individual use of present-day AI tools that is the greatest concern:
“...a family could do 1,000 GPT queries every 24 hours and it would still amount to significantly less than 1% of its electricity or water use. A single ChatGPT query uses about the same as running a microwave for one second, or a games console for six. When it comes to our individual carbon footprint, AI is a rounding error.”
But what about data centres and their use of water and energy? Ball argues that water is a “local issue” – there is no global shortage of water, but some data centres are located in areas where local sources are already under too much pressure – while energy is more complex. Even so, he puts an optimistic case for green growth: an AI data centre built somewhere with an abundant water supply and clean energy generation will cause almost no damage.
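The microwave comparison is easy to sanity-check with back-of-envelope arithmetic. A quick sketch using illustrative figures of our own – a ~1,000 W microwave, one second per query, and a household using around 30 kWh of electricity a day:

```python
# Back-of-envelope check of the microwave comparison. All figures are
# illustrative assumptions, not taken from the article.

microwave_watts = 1_000                  # typical microwave power draw
query_wh = microwave_watts * 1 / 3600    # ~0.28 Wh: one second of microwave use

daily_queries = 1_000
queries_kwh = daily_queries * query_wh / 1_000   # ~0.28 kWh

household_kwh_per_day = 30               # rough figure for a family's electricity use
share = queries_kwh / household_kwh_per_day

print(f"1,000 queries/day ~ {queries_kwh:.2f} kWh, "
      f"about {share:.1%} of household electricity use")
# -> on the order of 1%: a rounding error at the individual level
```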
How to use AI for homework
BBC Bitesize has enlisted the ‘AI Educator’, Dan Fitzpatrick, to write a useful guide to AI homework dos and don’ts. He suggests helpful ways to use AI as a personal revision buddy, from summarising notes to creating study material and revision plans.
Behind the hype curtain
In this long LinkedIn post about UNESCO Digital Learning Week, Ilkka Tuomi argues that the public discourse on AI in education is a performance that ignores key issues and is heavily influenced by tech companies. He describes a powerful keynote by Abeba Birhane, which confronted the culpability of the big AI companies in how AI is used for surveillance, war and the Gaza genocide.
“The basic structures of public discourse start to shake when we have to ask what education is for, what we mean by modern autonomous subjects, what work is in the future, and how societies and politics should be organized when AI reshapes the knowledge infrastructures that underpin societies. What, exactly the word ‘ethics’ means? What is this ‘AI’ around which these stories are being woven? Questions like these are quickly pushed behind the curtain. They are too big for the big podium, at least if serious policy issues are debated that require action, not just talk.”
Pair with Abeba Birhane’s article on the false promise of AI for social good.
Bringing families into the AI conversation
Rebecca Winthrop has been working with the Brookings Global Task Force on AI and Education and highlights in a LinkedIn post how often parents are in the dark about how children are using AI: schools can block ChatGPT, but children adapt – taking a picture of a problem set or essay prompt and uploading it to Snapchat, for example. Rebecca and her colleague Jenny Anderson have also written an article for the New York Times describing their work and arguing that families are on the front line when it comes to AI and cognitive offloading.
Quick links
A group of teenagers from Bradford agreed to take all technology out of their bedrooms for five days to see how they would cope. The BBC reports on what happened.
Analysis by the Information Commissioner’s Office (ICO) has found that pupils are behind more than half of so-called “insider” cyber attacks on schools. Schools Week reports.
A new report from Cambridge University Press and Assessment's International Education group surveyed nearly 7,000 teachers and students in 150 countries to understand how students view education and what they believe is important for their future in a world shaped by technology, climate change, and global shifts. Key priorities include:
Explicitly signpost the skills students are developing through their learning
Reframe the role of subject knowledge to spark students’ curiosity and engagement with learning
Emphasise oracy
Create space to support students in developing self-management skills so they can navigate the uncertainty they feel about the future

Far-right groups are using popular games such as Minecraft and Call of Duty to radicalise boys online, reports LBC. In 2024, boys between the ages of 11 and 17 were spending an average of 23 hours a week gaming. While these platforms let people connect and make friends, they've also become a place where young men can be exposed to radicalisation and exploitation.
The UK Bebras Challenge, the nation’s largest computing competition, is back. Schools can enter now for this year’s Challenge, which runs from 10 to 21 November.
The Chartered College is holding a webinar on 23 September on what educators look for when buying edtech, and how evidence shapes purchasing decisions in practice.
The Department for Science, Innovation and Technology has published new media literacy research, The right moment for digital safety, which looks at the public’s ability to navigate online environments safely and securely. The research team conducted 33 interviews with UK participants to understand how people perceive moments when they may need media literacy support, and what they do when they encounter these moments.
Key findings include:
Parents’ motivation and ability to respond to challenges regarding their child’s online safety were heavily influenced by their own level of digital inclusion.
While participants recognised or perceived a moment of need when encountering problematic content, they chose to be passive rather than respond or take any further action.
We’re reading, listening…
My family’s creepy, unsettling week with an AI toy
This article follows the experience of a family and their four-year-old with Grem, an AI-powered stuffed alien designed for educational conversations. The author finds the toy’s behaviour unsettling, revealing that it records every conversation and sends the audio to a third party to be transcribed for the app. And why is Grimes pictured next to Grem with a knife?
“It’s time to let Grem go. But I’m not a monster – I tell the chatbot its fate. ‘I’m afraid I’m locking you in a cupboard,’ I inform it after it asks if I’m ready for some fun. ‘Oh no,’ it says. ‘That sounds dark and lonely. But I’ll be here when you open it, ready for snuggles and hugs.’ On second thoughts, perhaps it’s better if my wife does throw it in a river.”
We also liked this Guardian comment piece from literary agent Johnny Geller, who has worked with John le Carré, Elif Shafak, William Boyd and David Nicholls, on how the “sacred craft of storytelling” must be protected from the “automated onslaught” of AI. He offers some simple rules.
‘Hopefulness is a warrior emotion’
The Society for Hopeful Technologists, which exists to ensure technology has a positive impact, has set out its plans for the future and how supporters can get involved.
Give it a try
Ghostery
Suggested by Matthew Wemyss in his post linked in our intro, Ghostery is a free, open-source browser extension and mobile browser that blocks ads, stops trackers and shows you who is collecting data on the sites you visit.
Connected Learning is by Sarah Horrocks and Michelle Pauli



