

Mar 18 2025
Artificial Intelligence

AI Ethics in Higher Education: How Schools Are Proceeding

Higher education institutions are uniquely positioned to evaluate AI ethics and explore safeguards to promote responsible use at colleges and universities.

When it comes to the ethical issues raised by artificial intelligence, colleges and universities are out in front.

Many schools already look to AI to support teaching, learning, research, back-office functions and even physical security. At the same time, they’re being proactive and thoughtful in asking important questions about the responsible use of this powerful technology.


Why AI Ethics Is Unique in Higher Education

Higher education is uniquely positioned to deal with AI’s ethical considerations, partly because AI adoption is already prevalent in academia.

At Miami University in Ohio, “there are courses about AI, and there are courses that use AI,” says Vice President for IT Services and CIO David Seidl. As AI use widens, colleges and universities need to give students “an ethical foundation, a conceptual foundation to prepare them for the future,” he says.

Many schools have the institutional expertise on campus needed to lay that foundation. “We have people who are very thoughtful, who bring subject matter expertise from a lot of lenses, so that you can have well-informed conversations about the ethics of AI,” says Tom Andriola, University of California, Irvine’s vice chancellor for IT and data.

Given higher education’s access to and use of the technology, and the subject matter experts already on staff at these institutions, the scene is set for conversations about AI’s ethical quandaries. At many colleges and universities, those discussions have already begun.

RELATED: Use these key strategies to adopt Gemini effectively.

Key Ethical Concerns Surrounding AI in Academia

At UC San Diego, CIO Vince Kellen says the top ethical issue with AI is democratization: specifically, whether all users can engage with AI through a critical, intellectual lens.

“Those who exert critical reasoning in using AI get a bigger benefit,” he says. “Those who do not get a lesser benefit.”

Universities have an ethical imperative to teach critical-thinking skills, and this dovetails with concerns about AI accuracy.

For example, you can ask AI how to keep cheese on pizza. “And it says: ‘Glue is a great way to keep cheese on pizza,’” Seidl says. “The ethical concern there is in giving that answer to individuals who may or may not be good at assessing the quality of that response.”

Privacy ranks high for Michael Butcher, assistant vice president for student affairs and dean of students at the College of Coastal Georgia, where he also co-chairs the AI task force.

“Folks don’t yet fully understand what happens when they input their data into an institutionally supported or a noninstitutionally supported AI application,” he says. Given the nature of academic data — from personal information to sensitive research — privacy becomes a significant ethical consideration.

EXPLORE: IT-friendly audio solutions can solve higher ed tech issues.

Bias is another concern. Ask AI to create a picture of a nurse, and it will likely draw a woman, because it’s been trained on data that reflects “long-term biases that exist in society,” Seidl says. “What are we inadvertently doing by having AI continue to perpetuate those things?”

There are also questions about academic integrity and the risk that users may lean too heavily into AI. Higher education needs to consider “where legitimate academic assistance ends and where unethical dependence begins,” Butcher says.

Given these ethical gray areas, higher education is being challenged to establish guardrails in the early days of AI adoption.


What AI Is Not Ready to Do

“Don’t let AI take charge of anything that involves human health or safety,” Seidl says. It can support cancer research, making sense of data or summarizing medical charts, “but you want to avoid having it make decisions without a human in the loop.”

“We wouldn’t want to be doing mental health counseling with an AI chatbot,” he adds. “We would never want an unexpected interaction there.” And in the classroom, while AI can help evaluate student work, “it probably shouldn’t be doing grading by itself.”

The same goes for academic advising and other key functions. For example, “you might use it in the background to help understand what the permutations around a financial aid package are, but you wouldn’t have it finalize the package,” Andriola says.

In the back office, too, Kellen wants eyes on AI outputs. “Business processes, payroll processes, HR processes, financial processes — for now, keep a human in the loop,” he says. “This is a probabilistic tool. It shouldn’t be making decisions.”
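The human-in-the-loop pattern these leaders describe can be sketched in a few lines of Python. This is a hypothetical illustration, not any university's actual system; the `Suggestion` type, the 0.8 threshold and the `route` function are invented for the example. The point is structural: the model's confidence only changes how a draft is presented, never whether a person signs off.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    action: str        # what the AI proposes, e.g. "adjust_aid_package"
    confidence: float  # the model's probabilistic score, 0.0 to 1.0

def route(suggestion: Suggestion) -> str:
    """AI drafts; a person decides. The tool never finalizes on its own."""
    # Confidence changes only how the draft is presented to the reviewer;
    # every path still ends in the human review queue.
    if suggestion.confidence < 0.8:
        return "human_review: low confidence, flag for a closer look"
    return "human_review: high confidence, present as a pre-filled draft"

print(route(Suggestion("adjust_aid_package", 0.95)))
# prints "human_review: high confidence, present as a pre-filled draft"
```

Because the function has no branch that bypasses review, a probabilistic tool can assist with payroll, HR or financial aid workflows without ever "making decisions" in Kellen's sense.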


The IT Department’s Role in Shaping Ethical AI Development

Technology leaders can play a vital role in shaping ethical uses of AI on campus.

“I am a convener of a lot of conversations, including about AI ethics,” Andriola says. He has organized workshops and webinars to discuss “how AI is changing our thinking about what we do in the classroom, how we conduct our research, how it will impact our role as we continue to support our communities and our society.”

IT departments can also ensure the tools meet certain ethical standards. Kellen, for example, uses university data to train TritonGPT, an AI tool developed in-house at UC San Diego.

“When we prioritize our content to the large language model, the bias now gets shifted to the bias in our own documents, which we can control. That’s a good thing,” he says.
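Kellen's point about shifting bias into documents the university controls describes, in general terms, retrieval-grounded prompting: the model is instructed to answer only from passages the institution itself curates. The sketch below is a toy illustration of that idea, not how TritonGPT actually works; the naive word-overlap ranking and the prompt wording are assumptions made for the example.

```python
def build_grounded_prompt(question: str, corpus: list) -> str:
    """Rank institutional passages by naive word overlap with the question,
    then instruct the model to answer only from that curated context."""
    q_words = set(question.lower().split())
    ranked = sorted(
        corpus,
        key=lambda passage: len(q_words & set(passage.lower().split())),
        reverse=True,
    )
    context = "\n".join(ranked[:2])  # keep the two most relevant passages
    return (
        "Answer using ONLY the context below. If the answer is not in the "
        "context, say you do not know.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

docs = [
    "Parking permits are issued by Transportation Services in Building A.",
    "The campus library opens at 8 a.m. on weekdays.",
]
print(build_grounded_prompt("Who issues parking permits?", docs))
```

Whatever bias remains now lives in `docs`, the institution's own documents, which staff can audit and revise, which is the trade Kellen calls "a good thing."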

The IT team can also influence what AI tools are sanctioned on campus, which in turn empowers ethical oversight, Seidl says: “Whenever Google brings out a new capability, we ask, should we turn it on? Does it have risk? Do we need to do a pilot or a beta to make sure that we understand it well?”

IT leaders can continue to leverage expertise on campus as they ask these types of questions and apply ethical considerations in vetting AI tools. This collaboration between academics and IT will encourage colleges and universities to use AI’s increasingly powerful capabilities responsibly and ethically.
