AI and early adopter schools
Gemini for Education expands, Ofsted and AI, cultural hallucination biases, Slow AI, usable futures, an algorithmic mirror for teens and much more...
What’s happening
It’s straight to the news and views again this week - enjoy!
AI roundup
Google expands AI in the classroom
Gemini for Education is now the default experience for all Google education accounts, following announcements at the ISTE edtech conference this week. The Gemini AI suite for educators is now available for free to all Google Workspace for Education accounts, with more than 30 new features, including the ability for teachers to create interactive study guides using the AI research tool NotebookLM. Crucially, as Dan Fitzpatrick points out in Forbes, Google reiterated that it won’t use educational account data to train its models.
Ofsted and providers’ AI use
Ofsted has published new guidance about how its inspectors look at AI during inspection and regulation. The guidance states that inspectors do not look at AI as a stand-alone part of inspections, and do not directly evaluate the use of AI, nor any AI tool. However, inspectors can consider the impact that the use of AI has on the outcomes and experiences of children and learners. The guidance was informed by research into how 21 “early adopter” schools, colleges and MATs are integrating AI into teaching, learning and admin. The researchers found that:
“All the leaders we spoke to were curious and cautious in their adoption of AI. The AI landscape is still changing rapidly, and they need to balance innovation and risk. Typically, leaders saw beyond AI as a shiny new product, and none viewed it as a cure-all for education. Leaders had found ways to integrate AI into processes that they felt were likely to be beneficial for staff, learners and pupils in their college or school.”
It is also significant that senior leaders in these schools and colleges made sure there was a clear vision for AI which prioritised safe and ethical use by staff and pupils. They tended to take initial small steps to explore the potential of AI before adopting it across their school, college or MAT. Leaders frequently made sure that teachers had the space and time to experiment and learn how AI could support and enhance their own practice. This created a culture of openness and trust that encouraged innovation.
AI in schools: hype, help, harm?
In this in-depth conversation on Teachers’ Talk Radio, Professor Becky Allen talks to Ben White about how secondary students are (or are not) using LLMs, the purpose of homework and assessment, the development of AI tutors in schools, and preparing young people for a workplace facing significant disruption from AI.
LLMs have ‘killed off coursework’
Alex Quigley continues this theme in this TES article where he asks whether the days of relying on coursework as a component of A-level assessment may be numbered in the age of AI. But what are the alternatives?
Pair with this Gray Area podcast episode in which journalist James Walsh talks about his recent article, Everyone is cheating their way through college.
AI and early years
Early childhood education rarely features in discussions about artificial intelligence, writes Sonia Whiteley. But she argues that the issue isn't about whether AI belongs in early learning settings. It’s about the next generation of early childhood teachers, and whether they are prepared for the realities of an AI-mediated world.
Whose job is safe from AI?
Will AI make your job more fun or more annoying – or extinct? The inimitable Tim Harford explores in the FT some of the factors that might make a difference, noting:
“most jobs are not random collections of unrelated tasks. They are bundles of tasks that are most efficiently done by the same person for a variety of unmysterious reasons. Remove some tasks from the bundle and the rest of the job changes.”
But pair with this Guardian story stating that the number of new entry-level UK jobs has dropped by almost a third since the launch of ChatGPT and that big companies are increasingly relying on AI for jobs once reserved for humans.
Relationships are not a luxury
In this piece for Brookings, Rebecca Winthrop and Isabelle Hau explore the question of what happens when AI chatbots replace real human connection. They argue that relational intelligence should be as central to education, health, and AI development as academic content or technical skill. Brookings has a Global Taskforce on AI in Education, which is currently conducting a ‘pre-mortem’ to anticipate potential negative consequences of generative AI in education. It will report in January 2026.
Crescents and crosses
Maha Bali, an academic based in Egypt, discusses ‘AI cultural hallucination bias’, an issue that’s too often overlooked in western discourse. She notes:
“I come across biases and hallucinations more often than the average white, male, Western, Anglo person. This would explain why I am more passionately frustrated by this than the average person I speak to (not locally). For example, would YOU notice that QuickDraw expects a cross on hospital buildings, rather than a crescent? Would YOU try to draw a crescent on the hospital building and see if it understands you (hint: it doesn’t).”
Quick links
Parenting for a Digital Future looks at what’s missing from the latest Ofcom guidance on a safer online life for women and girls: standards setting and safety by design for women.
The DfE’s new version of its keeping children safe in education (KCSIE) guidance will only feature 'technical changes', despite previous promises of 'substantive' updates.
Why not start a school radio club? It’s more cutting edge than it sounds and some children may have been inspired by the new Pixar film Elio (colanders on heads not obligatory).
Or why not set up a Code Club? This Raspberry Pi campaign has all the info you need.
Is it really the phones? The New Yorker takes a review of Matt Richtel’s new book How We Grow Up as a starting point for an exploration of the polarised views around teens’ use of technology.
The Child Financial Harms consortium has produced free CPD materials and lesson plans for teachers on online financial harms.
We mentioned the two Mr Ps last week; here they share more thoughts on stressed-out teachers.
We’re reading, listening…
Slow AI
Written by Sam Illingworth (who Michelle worked with when he was an LSE HE Blog Fellow and did a great series on using poetry in pedagogy), Slow AI is a weekly prompt-led series with something to try, notice, or make. This week’s prompt is simple: teach an AI tool something it cannot truly understand. A sensory detail, a private memory – something specific that a model cannot retrieve or reduce. It is an invitation to share instead of prompt, to offer instead of optimise.
Here’s the first example in the comments, from MB:
“On hot days, I always think back to when I was a kid, how the tarmac on the street was really soft and you could stick your fingers in it, make shapes and patterns. It makes me laugh now, the thought of just being able to sit in the road as a kid 'playing' with the street, apparently safely.”
followed by fascinating responses from the AI.
We need usable futures on the only home we’ve ever known
Great essay that explores how Silicon Valley is trapped in outdated science fiction – a new version of the Gernsback Continuum – in which it is collectively haunted by the visions of an imaginary future of endless expansion that didn’t happen and never will. Our escape route, says Henry Farrell, is to acknowledge the physical and social limits that we can’t escape, and to try to construct better futures within those limits.
Give it a try
Algorithmic Mirror
Got a teen (or teens) aged 12-16? Get involved with the Algorithmic Mirror research study. Algorithmic Mirror is a tool being developed by MIT and the Oxford Child Centred AI lab to research how algorithms create hidden profiles of young people based on their online activity.
Connected Learning is by Sarah Horrocks and Michelle Pauli





