Julian Vasquez Heilig reminds us that AI is not a neutral, objective force in education (or anywhere else).
Artificial Intelligence didn’t fall from the sky.
It wasn’t born in a vacuum, and it didn’t descend from some neutral cloud of innovation. It didn’t arrive pure and untainted, ready to solve all of humanity’s problems. No—AI was trained on us. On our failures. On our history. On our data. On our bias. On the systems we tolerate and the structures we’ve allowed to stand for far too long.
And that should terrify us.
Because when you train artificial intelligence on a world soaked in inequity, saturated with bias, and riddled with disinformation, you don’t get fairness. You get injustice at scale. You don’t get objectivity. You get bias with an interface. You don’t get solutions. You get systems that do harm faster, deeper, and with more plausible deniability than ever before.
Education has become the new testing ground for AI. These models are grading essays, generating lesson plans, designing curricula, screening applicants, analyzing behavior, flagging “at-risk” students, and more. Districts strapped for funding and time are turning to AI not as a tool, but as a replacement. They’re outsourcing humanity. And the consequences will be devastating.
AI doesn’t know your students. It doesn’t know who’s sleeping on a friend’s couch, who’s skipping meals, or who’s surviving domestic violence. It doesn’t know the kid working nights to help pay rent or the undocumented student terrified to go to school. It doesn’t know what it means to be brilliant and Black and told you’re “angry.” It doesn’t know what it means to be the only Indigenous student in a classroom that teaches your people only as a footnote.
But if we’re not careful, AI will end up deciding who’s ready for advanced coursework, who gets flagged for behavioral intervention, who qualifies for scholarships, who is deemed “college material”—and who gets erased. It will be the evolved cousin of everything problematic about high-stakes testing and academic tracking—on steroids.
We are told AI is objective because it’s data-driven. But data is not pure. It reflects our past decisions—our policies, our prejudices, our punishments. A biased world will always produce biased data. And AI, trained on that world, will reproduce its logic—over and over and over again—unless justice and humanity become the protocol.
It becomes even more dangerous when AI is used to generate curriculum. These models are trained on a poisoned well—Wikipedia entries altered by ideologues, corporate PR disguised as fact, TikTok conspiracies, Fox News scripts, and millions of webpages that blur the line between history and propaganda. Ask them to write a lesson on civil rights, and they might cite George Wallace. Ask them to explain slavery, and they may describe it as “unpaid labor” or reduce it to “Atlantic triangular trade.” Ask about Palestine or Indigenous sovereignty, and they default to the company’s preferred framing.