As the market floods with AI “agents” that can do a plethora of jobs for humans, Ben Riley looks at some of the alarms being raised in education.
But there are less trivial and more worrying possibilities presented by AI “agents,” and once again education is squarely in the cross-hairs. A great deal of university coursework is now delivered and managed online through learning management systems, which creates fertile and obvious territory for AI “agents” to invade and co-opt. Indeed, Perplexity recently launched a new tool, “Comet AI,” that it is now explicitly marketing to students as a way to do their coursework for them. Searching for product-market fit, Perplexity has settled on helping kids cheat.
This has led to many AI-in-education commentators freaking out, and rightfully so:
- Marc Watkins is incensed that Perplexity is callously using “student-aged influencers to portray the most nihilistic depiction of how AI is unfolding in higher education,” and he’s calling for a boycott. I’m in, Marc! But leaving aside the low odds of this working, I suspect we’ll just be playing a game of ed-tech whac-a-mole—for every bad actor we knock down, another will pop up.
- Anna Mills echoes Marc’s concerns and issues a plea for AI companies to erect technical barriers between AI agents and education technology. But as Stephen Fitzpatrick notes in the comments, the incentives for tech companies to do this are “not aligned”; I’d say they’re nonexistent so long as institutions of higher education and others fall all over themselves to incorporate AI into the educational process. (If you peruse those comments, you’ll find Stephen pushing back on my quick comment suggesting we consider AI Abolition—I’ll save that idea for a future essay.)
- Josh Brake calls for valuing human relationships to mitigate the moral hazards of AI, which is a beautiful sentiment that unfortunately runs into the hard reality that many (most? all?) young students have grown up in a world where their community is largely mediated digitally. Every time I’ve attended a protest this year, I’ve been struck by how the vast majority of attendees are my age or older (I’ll be 49 next month). To many younger people, the idea of earnest discourse through human solidarity is “cringe.” So while I’m with Josh that we need to work to change these norms, time is not on our side.
- Tressie McMillan Cottom’s justly famous “AI is a mid technology” essay wasn’t about AI agents specifically, but it’s worth recalling her warning that AI acts parasitically on robust learning ecosystems and threatens to starve its host. As she shared, “Every day an Internet ad shows me a way that AI can predict my lecture, transcribe my lecture while a student presumably does something other than listen, annotate the lecture, anticipate essay prompts, research questions, test questions and then, finally, write an assigned paper.” This feels particularly apt with AI agents—perhaps we should call them AI parasites instead?
I will close on the same point I’ve been making since Cognitive Resonance launched last year: AI is a tool of cognitive automation. That’s it, that’s its central value proposition. Once we accept this, we can see that AI agents (or parasites) are simply part of the continuum of capabilities being built into these tools so that humans can avoid thinking. This, sadly, will be perpetually seductive to students, because the process of effortful thinking is often unpleasant.