In Alexandria City Public Schools in Virginia, CIO Emily Dillard and her colleagues made a deliberate decision not to rush into a stand-alone AI policy. Instead, they developed a set of guiding principles for AI that sit alongside existing academic honesty and acceptable use policies. Those principles emphasize teaching and learning, human‑centered design, data privacy and security, transparency, respect, and continuous improvement. They also draw a clear line on AI’s role in assessments.
“Our parents want to know that we have professionals in each classroom who know their child,” Dillard said. “AI will never understand the nuances that a teacher does.”
AI may help teachers work more efficiently, but human educators are still making the decisions that matter most for students’ progress, she said.
“In 2023 we wrote the phrase, ‘Evaluate technologies with curiosity and skepticism,’ and for us, that’s really been our driving force for how we think about AI work,” Dillard said. “It’s really given us permission to go slow to go fast.”
What Responsible AI Looks Like in Practice
At Niles Township High School District 219, CTO Phil Hintz and his team adapted a stoplight framework to help teachers communicate expectations around AI for each assignment. Using Auguste Rodin’s The Thinker as a visual, they created three levels: red, yellow and green.
A red thinker means no AI use is allowed; using it is treated as cheating. Yellow means students may use AI but must cite it and share their prompts. Green means AI is required: the assignment is designed so it cannot be completed without some form of AI assistance.
The icons show up in the learning management system and on posters in every classroom, and departments across subjects have developed assignments at each level.
“We know students are going to use it anyway,” Hintz said. “So, we may as well have something in place that will help guide them.”
Shad McGaha, CTO for Belton Independent School District in Texas, said that this kind of granular guidance also changes how teachers evaluate student work.
“You have to train teachers to stop looking only at the final product,” he said. “If they go back and look at all the iterations, they can see whether students just copied everything in at once or if they’ve really been working on it.”
On the back end, some districts are turning to content filter platforms that now include AI chat features. In Alexandria, when students try to access open tools such as ChatGPT, their requests are redirected into Securly’s AI chat, which has been configured to refuse certain requests, such as full essay generation. Instead, it will offer suggestions to help students think through structure and brainstorming. Administrators have access to logs and transcripts to see how students are actually using AI during the school day.
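The redirect-and-refuse pattern described above can be sketched in a few lines. This is a hypothetical illustration only, not Securly's actual product or API: the blocked-phrase list, class name and logging fields are all assumptions made for the sake of the example.

```python
# Hypothetical sketch of a filtered AI chat layer: requests to open AI tools
# are intercepted, essay-generation prompts are refused with a redirect toward
# brainstorming, and every exchange is logged for administrator review.
# All names and rules here are illustrative, not Securly's implementation.
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Simple keyword rules standing in for a configurable policy.
BLOCKED_PATTERNS = ("write my essay", "write an essay", "full essay")


@dataclass
class FilteredChat:
    log: list = field(default_factory=list)

    def handle(self, student_id: str, prompt: str) -> str:
        """Refuse blocked requests; otherwise offer a structuring nudge."""
        blocked = any(p in prompt.lower() for p in BLOCKED_PATTERNS)
        if blocked:
            reply = ("I can't write the essay for you, but I can help you "
                     "outline your argument or brainstorm ideas.")
        else:
            reply = "Here are some questions to help you structure your thinking..."
        # Keep a transcript entry that administrators can review later.
        self.log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "student": student_id,
            "prompt": prompt,
            "blocked": blocked,
        })
        return reply


chat = FilteredChat()
print(chat.handle("s123", "Please write my essay on the Civil War"))
```

In a real deployment the policy rules would live in the vendor's admin console rather than in code, but the flow (intercept, classify, refuse or assist, log) is the same one Alexandria describes.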
