Do IT Leaders Understand How AI Works in K–12 Ed Tech?
The first question leaders must ask is whether they understand the tools they’re investing in. IT departments should know what these tools are doing behind the scenes to produce the results they see.
“There’s definitely a lack of understanding about what these systems are, how they’re implemented, who they’re for and what they’re used for,” Singh says.
To understand how AI works, IT leaders must remember that these systems rely on data.
“AI only works if you grab data. It only works if you grab data from everywhere you can find it, and the more data the better,” says Valerie Steeves, a professor in the department of criminology at the University of Ottawa and principal investigator of the eQuality project.
Because AI and ML tools rely on data, administrators must build in student data protections to use this technology ethically. “AI that’s rolling out now always comes with a price tag, and the price tag is your students’ data,” Steeves says.
Do School Leaders Account for AI Tech’s Bias?
Another ethical consideration is algorithmic bias in AI and ML technologies. When purchasing these solutions, it’s important to remember that they operate on data sets that frequently contain bias.
“There’s a lot of bias involved in applying some of these tools,” Singh says. “A lot of these tools are made by specific people and with specific populations in mind. At a basic level, there’s racial, gender, sexual orientation differences — there’s a lot of different kinds of people — and a lot of these technologies either leave them out or include them in ways that are really harmful.”
While the creators of these tools typically don’t intend to cause harm, the biases built into their data sets can lead to discrimination against certain student populations.