Apr 20 2026

CoSN 2026: How K–12 Districts Are Tackling Responsible AI Adoption

School IT leaders are developing practical policies to ensure artificial intelligence is used ethically and equitably.

Over the past few years, the artificial intelligence conversation in K–12 districts has moved from one of fear to one of curiosity and interest. Now that districts have realized the potential of AI as a learning and productivity tool, they are increasingly seeking to craft policies and guidance that address all facets of this evolving technology. 

At the 2026 CoSN Annual Conference in Chicago, IT leaders from K–12 districts shared how they are formulating their AI policies and guardrails to ensure responsible adoption.

AI Guidelines Mirror District Values

When ChatGPT was released in 2022, everyone gained access to it at the same time, which meant teachers and administrators had no time to get ahead of student use.


In Alexandria City Public Schools in Virginia, CIO Emily Dillard and her colleagues made a deliberate decision not to rush into a stand-alone AI policy. Instead, they developed a set of guiding principles for AI that sit alongside existing academic honesty and acceptable use policies. Those principles emphasize teaching and learning, human‑centered design, data privacy and security, transparency, respect, and continuous improvement. They also draw a clear line on AI’s role in assessments.

“Our parents want to know that we have professionals in each classroom who know their child,” Dillard said. “AI will never understand the nuances that a teacher does.” 

AI may help teachers work more efficiently, but human educators are still making the decisions that matter most for students’ progress, she said.

“In 2023 we wrote the phrase, ‘Evaluate technologies with curiosity and skepticism,’ and for us, that’s really been our driving force for how we think about AI work,” Dillard said. “It's really given us permission to go slow to go fast.”

What Responsible AI Looks Like in Practice

At Niles Township High School District 219, CTO Phil Hintz and his team adapted a stoplight framework to help teachers communicate expectations around AI for each assignment. Using Auguste Rodin’s The Thinker as a visual, they created three levels: red, yellow and green.

A red thinker means no AI use is allowed; using it is treated the same as cheating. Yellow indicates that students may use AI but must cite it and share their prompts. Green signals that AI is required, because the assignment is designed to be impossible to complete without some form of AI assistance.

The icons show up in the learning management system and on posters in every classroom, and departments across subjects have developed assignments at each level. 

“We know students are going to use it anyway,” Hintz said. “So, we may as well have something in place that will help guide them.”

Shad McGaha, CTO for Belton Independent School District in Texas, said that this kind of granular guidance also changes how teachers evaluate student work. 

“You have to train teachers to stop looking only at the final product,” he said. “If they go back and look at all the iterations, they can see whether students just copied everything in at once or if they’ve really been working on it.”

On the back end, some districts are turning to content-filtering platforms that now include AI chat features. In Alexandria, when students try to access open tools such as ChatGPT, their requests are redirected into Securly’s AI chat, which has been configured to refuse certain requests, such as full essay generation. Instead, it offers suggestions to help students think through structure and brainstorming. Administrators have access to logs and transcripts to see how students are actually using AI during the school day.


Address Equity Concerns With AI Adoption

AI threatens to create a new digital divide. Hintz described receiving false information from the free version of an AI model and different, correct information once he started paying for the service. This disparity, he said, could be detrimental for students without the resources to pay for AI.

“We’ve been trying to conquer the digital divide of students having access to devices, and we’re doing a pretty good job on that,” Hintz said. “Same thing with access to the internet. Now, the new digital divide is that students who can afford the AI are going to have a different set of information than students who cannot afford the AI. That’s going to be an information divide, and that’s really scary.”

Some districts are trying to use their purchasing power to counter that divide. By buying enterprise licenses for tools such as Google Gemini, they are ensuring that every student has access to at least one safe AI environment. 

Equity also lives in how AI use is taught. 

“You have to think about the AI agency,” McGaha said. “I think students in affluent areas are taught more about how to use AI as a collaborator, while students in lower‑resourced areas are using it more for remediation. We must ensure all of our students in our district learn how to steer the AI, not just follow it.”
