Types of Data Collected and Processed in Higher Education
The data and systems that use AI will vary among institutions and departments, says Jay James, cybersecurity operations manager at Auburn University. Faculty may use AI-driven tools for research, grading assistance, or validating student-produced work — all of which pose their own cybersecurity risks.
Even with policies in place, faculty, staff or students may bypass institutional controls by signing up for free AI tools, potentially exposing protected university data. These technologies are also increasingly embedded in admissions and recruitment workflows, explains Justin Miller, associate professor of practice in cyber studies at the University of Tulsa.
“AI-driven platforms routinely gather the obvious data — assignments, grades, application materials — but they also capture a layer of behavioral metadata that most people never notice,” explains Miller, who is also a retired senior special agent with the U.S. Secret Service.
For example, recruitment platforms may track the behavior patterns of prospective students, such as how long they linger on admission requirements pages or how frequently they visit a page listing application deadlines, says Miller. “This invisible data helps build a predictive profile.”
Many AI use cases rely on aggregated data, building a summary profile of a student type rather than tracking individuals. Even so, student data is still being collected across multiple systems, some of which may be unknown to university cybersecurity teams.
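To make that behavioral metadata concrete, here is a minimal sketch of how a recruitment platform might roll page-view events up into the kind of predictive engagement profile Miller describes. It is illustrative only: the event fields, page names and aggregation logic are assumptions, not any specific vendor's implementation.

```python
from dataclasses import dataclass
from collections import defaultdict

# Hypothetical page-view event emitted by a recruitment platform's tracker.
@dataclass
class PageView:
    prospect_id: str      # pseudonymous visitor identifier (assumed field)
    page: str             # e.g., "admission-requirements", "application-deadlines"
    dwell_seconds: float  # time spent on the page before navigating away

def engagement_profile(events: list[PageView]) -> dict[str, dict[str, float]]:
    """Aggregate raw behavioral events into a per-prospect summary:
    visit count and total dwell time for each page category."""
    profile: dict[str, defaultdict] = defaultdict(lambda: defaultdict(float))
    for e in events:
        profile[e.prospect_id][f"{e.page}_visits"] += 1
        profile[e.prospect_id][f"{e.page}_dwell"] += e.dwell_seconds
    return {pid: dict(feats) for pid, feats in profile.items()}

events = [
    PageView("p-001", "admission-requirements", 95.0),
    PageView("p-001", "application-deadlines", 40.0),
    PageView("p-001", "application-deadlines", 12.0),
]
print(engagement_profile(events))
# {'p-001': {'admission-requirements_visits': 1.0,
#            'admission-requirements_dwell': 95.0,
#            'application-deadlines_visits': 2.0,
#            'application-deadlines_dwell': 52.0}}
```

Even this toy aggregation shows why such profiles belong under data governance: the pseudonymous identifier and dwell times are student data regardless of whether the cybersecurity team knows the collecting system exists.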
Compliance Challenges: Navigating FERPA and Biometric Privacy Laws
Higher education is often a minefield of laws and regulations: contract requirements from the federal government, biometric privacy and recording consent laws at the state level, and the Family Educational Rights and Privacy Act (FERPA).
“Universities operate in a complex legal landscape,” says Miller, who recommends establishing an AI committee that brings together stakeholders from IT, legal, HR and academic leadership. Close collaboration between IT and legal teams is paramount because these laws can evolve swiftly, especially those governing the collection and storage of biometric data, which certain states explicitly ban.
The committee should also be responsible for developing an AI security framework. As Miller describes it, the framework should include “clear standards for what data is collected, how it is used and what third-party vendors are permitted to do with it.”
James echoes Miller’s recommendation, noting that AI impacts almost every institutional function.
“AI is a technology that touches research, instruction, student access, administrative operations and cybersecurity,” he says. “The institutions that approach AI as a cross-functional effort rather than a technology department initiative will be in a better position to be successful.”
