
Jan 05 2026
Artificial Intelligence

AI in Higher Education: Protecting Student Data Privacy

As artificial intelligence adoption accelerates across campuses, higher education leaders must balance innovation with data privacy, security and governance.

While artificial intelligence has long supported higher education research initiatives, it’s now expanding into areas that touch student data, such as facial recognition in surveillance cameras and personal information stored in learning management systems.

As vetted vendors enable AI features in new, free tools that are marketed to faculty and students, IT departments face mounting pressure to keep private information and sensitive data secure.


Types of Data Collected and Processed in Higher Education

The data and systems that use AI will vary among institutions and departments, says Jay James, cybersecurity operations manager at Auburn University. Faculty may use AI-driven tools for research, grading assistance, or validating student-produced work — all of which pose their own cybersecurity risks.

Even with policies in place, faculty, staff or students may bypass institutional controls by signing up for free AI tools, potentially exposing protected university data. These technologies are also increasingly embedded in admissions and recruitment workflows, explains Justin Miller, associate professor of practice in cyber studies at the University of Tulsa.

“AI-driven platforms routinely gather the obvious data — assignments, grades, application materials — but they also capture a layer of behavioral metadata that most people never notice,” explains Miller, who is also a retired senior special agent with the U.S. Secret Service.

For example, recruitment platforms may track the behavior patterns of prospective students, such as how long they linger on admission requirements webpages or how frequently they return to a page listing application deadlines, says Miller.

“This invisible data helps build a predictive profile.”

Many AI use cases rely on aggregated data and build a summary profile of a student type rather than tracking an individual person. However, student data is still being collected in multiple systems, some of which may be unknown to university cybersecurity teams.

WATCH: Industry experts discuss the top AI trends in 2026.

Compliance Challenges: Navigating FERPA and Biometric Privacy Laws

Higher education institutions must navigate a minefield of laws and regulations, including contract requirements from the federal government, biometric privacy and recording consent laws at the state level, and the Family Educational Rights and Privacy Act (FERPA).

“Universities operate in a complex legal landscape,” says Miller, who recommends establishing an AI committee that brings together stakeholders from IT, legal, HR and academic leadership. Close collaboration between IT and legal is paramount because these laws can evolve swiftly. That is especially true for collecting and storing biometric data, which several states explicitly restrict.

The committee should also be responsible for developing an AI security framework. As Miller describes it, the framework should include “clear standards for what data is collected, how it is used and what third-party vendors are permitted to do with it.”

James echoes Miller’s recommendation, noting that AI impacts almost every institutional function.

“AI is a technology that touches research, instruction, student access, administrative operations and cybersecurity,” he says. “The institutions that approach AI as a cross-functional effort rather than a technology department initiative will be in a better position to be successful.”


Cyberthreats and Data Breaches: Risks Facing Student Data Security

The role of the university chief information officer is not an enviable one these days. Traditional, trusted cybersecurity measures remain the norm, but AI introduces new layers of risk and complexity. One non-negotiable: investigating how AI tools use and store data.

“AI is creating a revolution in data management, and universities need to build strong guardrails to ensure that student data is collected and stored securely. That begins with policy and governance,” explains Miller.

Miller says that IT should ask three critical questions of any AI vendor:

  • Will student data be used to train public models?
  • How long is the data retained?
  • Who owns the insights and outputs generated from the data?

RELATED: Identity and access management’s role is evolving in the era of AI.

Vendors should never be allowed to use student data for model training, nor should they retain that data.

“The most overlooked issue is ensuring that student data is not used to train external AI models,” says Miller. “Data should be completely segregated and retained for the shortest possible period of time, so it never enhances the vendor’s external model. Institutions should require minimal retention, preferably deletion immediately after the session. And ownership must remain with the university and the student.”

Some higher education institutions are addressing this risk by maintaining local control of AI infrastructure. For example, the University of Montana worked with Amazon Web Services and Microsoft to create a university-owned data center that runs those companies’ enterprise-grade models locally. The arrangement ensures that all data stays within the university’s systems and isn’t used to train any models.

“We’re doing what we can to control the situation,” says Zach Rossmiller, associate vice president and CIO at the University of Montana.


Rossmiller and his team noticed that additional challenges tend to arise when approved vendors begin to offer new AI features that haven’t been vetted. In those instances, features must be disabled until a secondary vetting review is completed to understand how data is being handled.

These conversations can be challenging to have with faculty, staff and students who are eager to adopt new technologies and the efficiencies they promise. Rossmiller has found that once he and his team begin to explain the “why” behind their actions — how the AI models work, what the risks are and how the university plans to keep their data and personal information secure — they’re met with gratitude and understanding.

“This is not just an IT problem; it’s a shared responsibility across the institution to ask hard questions about AI privacy, governance and risk before we turn features on. Our students deserve nothing less,” he says. “AI is moving fast, but our responsibility stays the same: Protect our students’ information and maintain the trust they place in us. Every decision we make on AI features is anchored in that responsibility.”

EXPLORE: AI is reshaping cloud strategy and governance in higher education.

Best Practices for Protecting Data Used by AI

Beyond the guidance already outlined, James, Miller and Rossmiller all stress the importance of communicating with faculty, staff and students about the university’s data collection policies.

“Students don’t want to be surprised by how AI uses their information, and they certainly don’t want to receive yet another breach notification telling them that their data has been exposed, and that they’re being offered free credit monitoring,” says Miller. “We need clear expectations about what data is collected, why it’s collected, how long it’s retained and how it’s protected to build trust.”

Miller recommends building an AI transparency page on the institution’s website to explain these very details.

“You’re simply giving students clear information without expanding your attack surface,” explains Miller. “Clear communication builds trust, shows respect for the students’ information, and reduces the likelihood of misunderstandings or legal challenges.”

Rossmiller hopes for a day when AI literacy is fully embedded in the higher education curriculum, including faculty training. When students and faculty understand how AI works, it’s easier for them to grasp the risks that come with these tools.

“Higher education is positioned to help lead the charge,” says Rossmiller.
