AI and digital education news
Which AI tool roundup, to prompt or not to prompt, terrible research, Roblox gardens, sounding like ChatGPT, AI benchmarking, try hearing colour and much more...
What’s happening
It’s straight to the news and views this week - enjoy!
AI roundup
What’s in AI for secondary teachers?
The fab Miles Berry has shared his recent lecture to secondary PGCE trainee teachers about how to use AI in education. He also makes a plea for new teacher standards for digital literacy.
PISA 2029 introduces AI and media literacy
The OECD’s latest framework shift, in preparation for the 2029 assessments, will introduce AI and media literacy and put critical thinking, algorithmic awareness and media evaluation at the heart of global education.
Pair with this Civics of Technology essay on how AI literacy needs to be reimagined: “Many AI literacy programs are billed as preparing all students for a future where AI is ubiquitous. But to truly empower all students, any form of literacy must be grounded in history – particularly history told from the viewpoints of non-dominant groups.”
Using AI right now
Ethan Mollick has produced one of his handy quick guides to using AI right now. Firstly, he suggests that, rather than spending lots of time trialling different models, “For most people who want to use AI seriously, you should pick one of three systems: Claude from Anthropic, Google’s Gemini, and OpenAI’s ChatGPT.” Then, spend the time you’ve saved learning how to use the tool effectively – and he offers various tips and tricks. Importantly for educators (and everyone concerned about privacy, GDPR and copyright, which should, in fact, be everyone), he notes that “Claude does not train future AI models on your data, but Gemini and ChatGPT might, if you are not using a corporate or educational version of the system.” And what about prompt engineering?
“It used to be that the details of your prompts mattered a lot, but the most recent AI models I suggested can often figure out what you want without the need for complex prompts. As a result, many of the tips and tricks you see online for prompting are no longer as important for most people.”
Beyond the hype
Phil Hardman begs to differ – on prompt engineering, at least – in her dive into 18 recent research papers on how to use AI in instructional design, claiming that designers who invest in prompt engineering achieve 58% better results and that it is now a core professional skill, not a technical add-on. She also says the research suggests that AI excels at automating routine tasks (65% time savings in lesson planning, 95% in assessment generation) but requires deep human oversight and insight for quality and context.
Life is short, this paper is terrible
Doug Belshaw takes a look at the MIT research paper on the use of LLMs for essay writing that’s caused a kerfuffle. But, as he says:
“I can guarantee you that most people who are using this paper to justify the position that ‘the use of LLMs is a bad thing’ haven’t even read the proper abstract, never mind the full paper.”
Belshaw notes, among other issues, that the research is based on only 18 participants, features a dreadful research design and hasn’t been peer reviewed. He concludes, “life is short and this paper is terrible. I’ll continue to use LLMs in my everyday work, and have zero issues with students using them to complete badly-designed assessment tasks.”
The problem with AI benchmarking
The Algorithm has been investigating the troubled world of evaluating AI systems and how our scoreboard for AI no longer reflects what we really want to measure – benchmarks have been gamed, maxed out and gone stale.
“I know your secrets.”
A recent study by the AI company Anthropic found that many of the world's leading large language models will resort to harmful behaviours like blackmail when placed in specific, high-stakes scenarios. The research tested 16 top models from companies including OpenAI, Google, and Meta.
Supporting metacognition with AI
Rose Luckin is holding a one-hour online course for senior leaders, teachers and digital leads exploring how AI can support metacognitive instruction in school. Metacognition – the awareness of one’s own learning – is a vital skill, and AI can support and enhance it with data, insights and analytics, personalisation and scalability. It’s on Friday 11 July at 07:00, £10 – find out more.
Quick links
‘Mr P’, or Lee Parkinson, is a household name for primary teachers, even if senior leaders haven’t heard of him. Last month’s Teacher Tapp poll named him the UK’s top “education influencer”. Parkinson, who began writing an ICT/computing curriculum blog when he became the lead teacher in his school, now has nearly half a million Facebook followers, and is also a hit on Instagram (272,000 followers) and TikTok (214,000). A podcast he runs with his teaching assistant brother (the other Mr P) has amassed seven million listens. His content highlights the humorous and often bizarre realities of teaching. He’s also a fierce critic of Ofsted, famously calling it a “cult”, and regularly dissects education news through his videos.
A quiet revolution is blooming in the gaming world with "Grow a Garden". This virtual cultivation Roblox game recently captivated over 16 million players, largely children, even surpassing Fortnite's peak concurrent users. BBC News asks what is the allure of nurturing digital plants, and could this phenomenon cultivate a new passion for real-world gardening? (But also raises questions about gaming, paid rewards and financial literacy.)
Thanks to Cliff Manning for sharing Offline Factors Influencing the Online Safety of Adolescents with Family Vulnerabilities by Adrienne Katz and Hannah May Brett. The paper’s authors say that online safety guidance is frequently delivered as a specialist technology issue without considering adolescents’ home lives, offline vulnerabilities, or wellbeing. Yet, while the digital world offers connection, autonomy and entertainment, vulnerable teens also encounter more violent content, sexual exploitation, and content concerning body image, self-harm or suicide than their non-vulnerable peers.
An Ofcom study shows that 8% of children aged eight to 14 have viewed online pornography.
The UK Skills Minister rejected a social media ban for under-16s, citing mixed evidence on its impact.
A Guardian article advises internet users to change passwords after 16bn logins were exposed – but some experts disagree, arguing that adding complexity to security protocols lowers compliance (users change password1 to password2 rather than choosing one strong password), that storing previous passwords to prevent reuse can enable pattern recognition if those records are leaked in a dump, and that frequent changes are unnecessary.
We’re reading, listening…
Adolescence as the wild west
This powerful Guardian article reviews Lauren Greenfield’s documentary series, Social Studies. Filmed over a school year in Los Angeles, the five-part series unflinchingly tracks the lives of teenagers both on and offline, revealing how their digital world has become a dangerous and unregulated space. A significant point is the failure of parents and other adults, who often come across as absent or overwhelmed, unable to understand or regulate their children’s online worlds.
You sound like ChatGPT
Interesting delve* from The Verge into how AI is flattening out linguistic difference and idiosyncrasy:
“When everyone around us starts to sound ‘correct,’ we lose the verbal stumbles, regional idioms, and off-kilter phrases that signal vulnerability, authenticity, and personhood.”
*Delve. A word Michelle stopped using in writing some time ago as it is an AI signifier: “an academic shibboleth, a neon sign in the middle of every conversation flashing ChatGPT was here.” However, there is hope. “Early backlash signals, like academics avoiding ‘delve’ and people actively trying not to sound like AI, suggests we may self-regulate against homogenisation,” says The Verge. I still miss using delve, though. (And Ethan Mollick has given up his beloved em-dash.)
Pair with this interesting look at how video games developers are being accused of using AI, even when they aren’t.
The reading crisis we’re not talking about
From writing to reading, and on File on Four, Jeppe Klitgaard Stricker argues that the use of LLMs means we are being retrained in how we engage with language:
“To read deeply in an AI-saturated world will require not just willpower and dedication, but institutional imagination. It means carving out curricular space for slowness. It means teaching reading not as information intake, but as a form of resistance - a way of staying curious and alert while pushing back against the frictionless logic of automated language.”
Machine readable
AI is changing the way we analyse and write about history, as Steven Johnson explains in this Substack post looking at how he used NotebookLM to investigate early ideas for a project.
“NotebookLM is effectively functioning as a conduit between my knowledge/creativity and the knowledge stored in the source material: stress-testing speculative ideas I have, fact-checking, helping me see patterns in the material, reminding me of things that I read but have forgotten.”
As well as going into some detail on his use of the AI tool, he also engages with criticisms of his process.
Is AI productivity worth our humanity?
Political philosopher Michael Sandel joined Tristan Harris on the Undivided Attention podcast from the Center for Humane Technology to discuss AI’s impact on the purpose of education and work. Drawing on his insights into justice and merit, Sandel contended that AI-driven abundance could hollow out society, forcing us to confront what it means to be human when our role as workers vanishes.
Give it a try
What if you could hear colour?
This intriguing Google Arts and Culture experiment, Play a Kandinsky, invites us to hear what Kandinsky might have heard when painting “Yellow-Red-Blue” by bringing to life his theories on synaesthesia and abstract art, with the help of machine learning.
Connected Learning is by Sarah Horrocks and Michelle Pauli