Training Students Starts With Training Teachers
Preparing students for an AI-driven world starts with teachers and educational leaders.
If we expect educators to guide students through an AI-saturated world, we must give them time, training and trust. Teachers must understand how AI tools work, where they fall short, how bias can show up in outputs, and how hallucinations and misinformation affect confidence. Prompt engineering is not a parlor trick. It is a literacy skill. Yet today, too many teachers are expected to figure it out on their own on nights and weekends, while still being held accountable to outdated grading systems.
I know this because I live it. As a high school teacher, I’ve spent hundreds of hours educating myself on AI, helped to write district-level AI guidelines, built microcredentials for teachers and students, and served as a trainer at both the school and state level. I’ve previewed teacher-focused AI training with OpenAI and explored collaborative workspaces that let teachers design lessons, validate resources and share best practices. These tools are powerful, but tools alone don’t solve the real problem.
The real crisis is confidence.
Every day, I see students who can generate polished work in seconds increasingly doubt their own ability to think. AI can cognitively offload busywork, but without guidance, it also offloads struggle. And struggle is where learning lives. Students can appear academically successful while fundamentally misunderstanding both the content and the process that produced it. Even worse, many are never taught to question the accuracy, bias or missing nuance in AI outputs. Teachers are left grading what many now call “AI garbage,” unteaching misconceptions and watching motivation erode.
Classroom Assessments Need To Support Critical Thinking
The answer is not banning AI. It’s redesigning assessment.
Highly effective teachers are already doing this. They ask students to rewrite an AI-generated poem in a new voice and explain the choices they made. Teachers have AI create a science mind map, then task students with identifying weak links, false claims or missing evidence. In art and design, students are asked to critique why AI failed to match their vision, and what that failure reveals. These assessments reward discernment and balance.
We must favor unplugged assessments, debates, live reasoning and process-based work. We must assess empathy, creativity and the ability to reject flawed outputs, not just produce polished ones.
Ironically, ChatGPT and other generative tools approved by schools, districts or states can help teachers do this better. Educators can now create custom tutors, vetted agents and collaborative workspaces built from trusted materials. AI canvases and code tools allow teachers to build interactive sites, games, timelines and simulations. This saves time and money and expands access for under-resourced schools. When teachers control the inputs, AI becomes an amplifier of expertise, not a replacement for it.
Across the country, educators are already collaborating to create thoughtful guidelines that protect student privacy, empower teachers and include parents in the conversation. This work matters, and it must be supported, not sidelined.
