Artificial intelligence is arriving at universities, but that doesn’t mean professors are being replaced by computers.
“So when we talk about AI, we imagine robots, we imagine science fiction, we imagine Skynet overthrowing the world. These are the things that we imagine, but the reality is that it’s not nearly that sexy,” said Kyle Bowen, the educational technology services director at Penn State, during EdSurge Live’s town hall on AI.
“The reality is that some of the really interesting applications of this are people and computers working together to think about or to explore different problems or ideas,” Bowen added.
Much like Microsoft’s Anthony Salcito, Bowen and other higher education influencers touted AI’s ability to make data analytics and student success initiatives even easier by drawing out the most actionable data.
Here are some key takeaways from EdSurge Live’s two-part video series:
Data Should Empower Students
Candace Thille, an assistant professor of education at Stanford Graduate School of Education, told EdSurge viewers that data pulled from student work should be used to craft personalized learning experiences.
When schools use AI-powered online environments, leaders should use the predictive modeling in those systems to chart a learning trajectory that can be shared with students, helping them build their own learning paths.
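As a rough illustration of the kind of predictive modeling described above, a minimal sketch might fit a linear trend to a student's past assessment scores and project the next one. All data and the function names here are hypothetical; real student-success systems use far richer models and many more signals.

```python
# Minimal, hypothetical sketch of projecting a learning trajectory
# from past scores; real platforms use far more sophisticated models.

def fit_linear_trend(scores):
    """Least-squares slope and intercept for equally spaced scores."""
    n = len(scores)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(scores) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, scores)) / \
            sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

def project_next(scores):
    """Project the next score along the fitted trend line."""
    slope, intercept = fit_linear_trend(scores)
    return slope * len(scores) + intercept

past = [62, 68, 71, 75]  # hypothetical quiz scores
print(round(project_next(past), 1))  # projects 79.5
```

A trajectory like this, shown to the student alongside its inputs, is one way the shared "learning path" Thille describes could be made concrete.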
Transparent Algorithms Support Success
To ensure widespread success in this use of AI, the precise models and algorithms that universities use can’t be locked up in proprietary systems. They must be transparent, peer-reviewable and open to challenge, so that pedagogical decisions remain in the hands of the people who usually make them, Thille said.
“To just say ‘Trust us. Our algorithms work,’ I would argue that that's alchemy, that's not science,” Thille added.
Transparency allows educators and advisers to understand what the system is telling them. In an effort to support clear decision-making, Mark Milliron, the co-founder and chief learning officer at Civitas Learning, said that his company seeks to outline the predictors used in each algorithm.
“We literally will list the most powerful predictors and their relative score and power in the model, so the people are clear this is why the trajectory is showing the trajectory it is, so people can understand which variables are impacting that,” Milliron said.
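The practice Milliron describes, surfacing each predictor with its relative weight, can be sketched in a few lines. The predictor names and weights below are invented for illustration and are not Civitas Learning's actual model:

```python
# Hypothetical sketch of listing a model's predictors by relative weight,
# in the spirit of the transparency Milliron describes.

def rank_predictors(weights):
    """Return predictors sorted by their share of total absolute weight."""
    total = sum(abs(w) for w in weights.values())
    shares = {name: abs(w) / total for name, w in weights.items()}
    return sorted(shares.items(), key=lambda kv: kv[1], reverse=True)

# Made-up predictor weights from a made-up student-success model.
model_weights = {
    "credits_attempted": 0.9,
    "lms_logins_per_week": 1.4,
    "midterm_gpa": 2.1,
    "financial_aid_gap": -0.6,
}

for name, share in rank_predictors(model_weights):
    print(f"{name}: {share:.0%} of model weight")
```

Printing the ranking this way lets an adviser see at a glance which variables are driving a student's predicted trajectory, rather than trusting an opaque score.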
Understand that AI Can Still Have Human Bias
Because predictive systems can appear to simply dish out a conclusion, Thille said, educators or advisers working with the data might treat it as objective and accept it at face value.
However, she recommended that leaders remind educators that a person wrote the algorithm behind the computation, so bias may still be present.
One way to reduce that bias, Thille said, is to train models on data that is as representative as possible of the students being served.
“The goal here is going to be figuring out which of those decisions we give to humans to make, and which ones we can let a system autonomously make,” she said.