CDW’s workshops also address guardrails that encourage users to analyze data before it’s added to a model. This data governance practice can reveal whether data needs to be modified so the model can answer questions appropriately, Johnson says.
“Most of the conversation is about how to protect your own data and prevent it from being part of the training set,” Johnson says. “That’s the lion’s share of the conversation, but often some conversations about guardrails and prompting come up as well.”
AI Security Guidelines for K–12 Schools
Here are some suggestions to help districts use AI securely in their schools.
Deploy a Private Instance
To secure AI in public schools, Johnson recommends using an in-house solution that lets students, faculty and staff experiment with an AI chat app without exposing data in the public sphere. Schools can also work with a public model that has the right privacy protections in place.
“All of the big three hyperscalers — Amazon, Microsoft and Google — have in their data privacy agreements that they will not use any of your prompt content to retrain models,” Johnson says. “In that way, you’re protected even if you don’t have that AI program on-premises. If you use it in the cloud, the data privacy agreement guarantees that they won’t use your data to retrain models.”
Establish an Action Plan
An action plan should detail what to do if a data breach occurs or if a mass phishing email circulates in a financial fraud attempt.
“It’s incredibly important for IT professionals to understand exactly what those new attack surfaces are and what they look like, and then start building a framework for addressing that,” Hawking says. “That includes everything — the hardware, software and actual IT architecture — but also policies and regulations in place at each school to address these issues.”
Take Small Steps
As schools experiment with AI, they should start small. For example, they can use AI to analyze test scores or create progress reports for parents without exposing all of their data to an AI program at once, Johnson advises.
“Don’t take your entire data estate and make it available to some AI bot. Instead, be very prescriptive about what problems you are trying to solve,” he says.
Use Corporate Accounts With AI Tools
Hawking warns against using personal email accounts with ed tech providers, as they can create entry points for data sharing that could be used to train models without consent.
Vet AI Tools
Hawking also recommends that schools create an oversight team to vet AI tools. The team could include stakeholders such as the IT department, parents and student government leaders.
“It doesn’t mean lock down all AI, but understand exactly what’s being used on campus and why it’s being used,” Hawking says.
Conduct a Complete Risk Assessment and Full Audit
A thorough risk assessment allows schools to identify regulatory compliance risks and develop policies and procedures for the use of generative AI.
Hawking notes the difference between using a simple chatbot and deploying more powerful large language models inside classrooms for tasks such as determining a student’s grade.
“It’s really important, as part of an AI audit, to get a proper overview of how all of those things take place on campus,” Hawking says. “That is the starting point of good governance.”