Our mission: To preserve, promote, improve and strengthen public schools for both current and future generations of students.

Sue Kingery Woltanski questions the push for getting AI into classrooms.

The excitement surrounding AI in education confuses me. We are just emerging from a nationwide experiment in virtual learning — the pandemic — which made it abundantly clear that most students thrive with teachers in classrooms, not on screens. We are also in the midst of a student mental health crisis in which digital social media is a major culprit. School districts are being urged to ban cell phones. Parents are begging for less screen time, concerned about its effects on learning, social development, health, and well-being.

So why does anyone believe that more screens — now with AI — will improve K–12 education?
AI will not revolutionize education

Last summer, a Have You Heard podcast episode called “Don’t Buy the AI Hype” introduced me to the work of Ben Riley, a cognitive scientist and founder of Cognitive Resonance. His organization helps people understand how generative AI works, what it can and cannot do, and how to make informed decisions about its use. Its tagline is fitting: “Building Human Knowledge to Halt AI Hype.”

Riley’s presentation at the 2025 ASU+GSV Summit — the largest gathering of ed-tech investment and AI hype in the world, the same event where Secretary of Education Linda McMahon repeatedly referred to AI as “A-1” — was titled “AI Will Not Revolutionize Education.” It is well worth watching. Riley explains what AI actually is and why claims of “transformation” far outpace the evidence. The current research base showing that AI improves learning outcomes at scale is weak. He warns that hallucinations make AI a risky tutoring tool and that rapid adoption could widen existing inequities.

Riley emphasizes that the pressure to “keep up” can push districts toward hasty adoption without sound pedagogy, infrastructure, or evidence. Instead of chasing a transformational narrative, he urges schools to prioritize teacher expertise, human relationships, and solid instruction — treating AI as a tool, not a replacement.

[The fear of being left behind appears to be driving FSBA’s “Future Focused” agenda.]

The same Have You Heard podcast reintroduced me to Audrey Watters, a writer who examines the intersection of education, technology, and politics. She notes that for generations, education technology has promised big and delivered little — and AI will be no different. She also highlights the connections between tech billionaires pushing AI and those pushing privatization and dismantling of public education.

Watters’s blog, “Second Breakfast,” and particularly the piece “LLM as MLM,” led me to read Emily Bender and Alex Hanna’s book The AI Con, which teaches readers to recognize AI hype and avoid falling for the con. They write:

“Artificial intelligence, if we’re being frank, is a con, a bill of goods you are being sold to line someone’s pockets. A few major well-placed players are poised to accumulate significant wealth by extracting value from other people’s creative work, personal data, or labor, and replacing quality services with artificial facsimiles.”

In education, the “productivity” pitch translates into fewer teachers, more automation, and algorithm-driven learning — not better outcomes. Public schools are the “quality service” that will be replaced with chatbots and other “artificial facsimiles.”

Attempting to hear “the other side”

Concerned I was only hearing one side, I enrolled in an eight-week certification course at the University of North Florida called “AI for Work and Life.” It promised foundational understanding and hands-on experience with AI. We learned how easy it is to generate low-quality (crappy) videos. One session discussed ethical concerns — privacy, equity, copyright — but deliberately avoided the environmental impacts of data centers, instead imagining that AI might solve those very problems. (They were also upbeat about AI’s ability to solve poverty.) For the record, I am now certified — but not convinced.

I also requested a demonstration of the chatbots being piloted in my district. I was shown how Khan Academy’s Khanmigo can help students prepare for the SAT. Before using it, students must acknowledge that the chatbot “doesn’t always tell you the truth.” I wondered aloud: Why would we hire a tutor that admits it lies? No one wants that.

In September, I attended a mini-conference for school board members and district staff, sponsored largely by ed-tech vendors. Speakers encouraged us to advocate for state and federal funding for the data centers needed to support the “AI revolution.” At one point, we were asked to brainstorm what budget cuts could fund these tools. Fewer books? Larger class sizes?

Instead of asking how to fund AI, we should be asking:

What are the benefits of not adopting AI in the classroom?
If money were no object, would we invest in books and teachers — or in AI-driven data collection and ed-tech products?

Meanwhile…

Meanwhile, Governor DeSantis has announced plans to regulate artificial intelligence, citing concern about its rapid expansion and potential impacts, particularly in education. The Florida Citizens Alliance — architects of Florida’s book ban laws — is preparing to push for guardrails including parental opt-ins, strict data controls, and policies requiring districts to contract only with AI providers who embrace a “Western civilization (biblical) worldview.” [Ah… Floriduh…]

At the same time, tech companies are pouring billions into AI chips and data centers while financial analysts warn of an AI bubble. The recent stock market rally has been fueled almost entirely by the biggest tech firms. If the market experiences a correction similar to the dot-com collapse, the consequences could be global.

So what is a Florida school board member to do?

First: stop listening to tech giants/broligarchs insisting that AI will revolutionize education. Remember that education is fundamentally a human endeavor. Focus on connecting students with great teachers. If AI truly is the future, districts can adopt it later — after solid evidence, proven pedagogy, and meaningful guardrails exist.

Policy making around responsible use of AI will certainly be necessary. But that is different from embracing AI as the next great educational transformation.

Above all: be a skeptic. Ask good questions. Recognize the hype.

As we navigate AI’s expansion, our charge is simple: protect the relationships that make learning possible. Every policy, every pilot, every tool we introduce should answer one question — does this help students and teachers connect more deeply, or does it get in the way? By keeping human connection at the center, we ensure that technology serves education, not the other way around.