1. Develop a Clear Policy for AI Use with University Resources
A clear policy tells the university community which tools users can rely on and which they should avoid, Galvan says. According to the 2025 EDUCAUSE AI Landscape Study, 39% of respondents say their university has an AI use policy in place.
Galvan highlights how the University of Michigan approached its AI strategy.
“The university has deployed a suite of AI services grounded in four key principles: privacy, security, accessibility and equitable access. These guiding considerations are embedded in system design and configuration, data governance practices, and contractual agreements with third-party service providers. This thoughtful approach has played a critical role in driving adoption and building trust across the campus community,” Galvan says.
2. Continuously Educate and Train Students and Staff
“Phishing emails are becoming a growing threat to students,” Galvan notes, citing EDUCAUSE’s 2024 Cybersecurity and Privacy Horizon Report.
One effective step higher education institutions can take is to teach students how to recognize and report suspicious emails, empowering them to protect the entire campus, he adds. Additionally, all users should treat any email containing links or attachments as potentially suspicious, even when it appears to come from someone familiar.
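To make that advice concrete, here is a minimal Python sketch of one heuristic such training often covers: flagging messages whose visible link text advertises one domain while the underlying href points to another, a classic phishing tell. The sample message and function names are illustrative, not part of any specific campus security tool.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkExtractor(HTMLParser):
    """Collect (href, visible text) pairs from an HTML email body."""

    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None


def domain(url: str) -> str:
    """Normalize a URL down to its bare hostname."""
    return urlparse(url).netloc.lower().removeprefix("www.")


def suspicious_links(html_body: str) -> list[tuple[str, str]]:
    """Flag links whose visible text is a URL on a different domain
    than the one the href actually points to."""
    parser = LinkExtractor()
    parser.feed(html_body)
    return [
        (href, text)
        for href, text in parser.links
        if href and text.startswith("http") and domain(text) != domain(href)
    ]


# The link text shows a trusted campus domain, but the href goes elsewhere.
body = (
    '<p>Reset your password: <a href="http://evil.example.net/reset">'
    "https://portal.university.edu/reset</a></p>"
)
for href, text in suspicious_links(body):
    print(f"Suspicious: displays {text} but actually links to {href}")
```

Real mail filters weigh many more signals, such as sender reputation and attachment scanning, but this display-versus-destination mismatch is exactly what awareness training asks users to check by hovering over a link before clicking.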
RELATED: Learn five ways to integrate artificial intelligence effectively into faculty work.
The report also recommends that higher education institutions invest in comprehensive, student-focused cybersecurity and privacy awareness training while embedding these topics across academic curricula.
“These efforts should address not only how students use technology on campus but also extend to off-campus activities, including the use of personal devices and platforms like social media,” Galvan says.
3. Improve the Organizational Approach to IAM
Last year, malicious actors used deepfake technology to impersonate executives on a video conference call, tricking a worker into sending them millions of dollars. There was also the short-lived WormGPT, a large language model built for hackers, along with an unknown number of copycat tools still in circulation.
A 2025 CrowdStrike AI security survey found that 80% of security teams prefer generative AI delivered through a platform instead of a point solution. It also found that 64% of respondents are either researching generative AI tools or have already purchased one.
As generative AI adoption becomes a reality at many colleges and universities, Galvan notes, it is important to manage how the tools are being used and by whom.
“To counter the growing threat of AI-driven attacks, cybersecurity and risk management leaders must invest in advanced identity and access management solutions, particularly technologies capable of verifying genuine human presence and distinguishing it from automated or malicious activity,” Galvan says. “It’s also essential to validate claims made in audio or video messages through alternative, trusted communication channels.”
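As one hedged illustration of that second recommendation, the Python sketch below models out-of-band verification: before acting on a request that arrived by audio or video, a one-time code is sent over a separate, pre-registered channel and must be echoed back. The user directory and messaging client here are hypothetical stand-ins, not a real IAM product’s API.

```python
import secrets


class OutOfBandVerifier:
    """Confirm high-risk requests over a second, trusted channel,
    assuming a hypothetical user directory and messaging client."""

    def __init__(self, directory, messenger):
        self.directory = directory  # {user_id: pre-registered phone number}
        self.messenger = messenger  # any object exposing send(number, text)
        self.pending = {}           # {user_id: outstanding one-time code}

    def challenge(self, user_id: str) -> None:
        """Send a one-time code on a channel other than the one the
        request arrived on (e.g., SMS for a video-call request)."""
        code = f"{secrets.randbelow(1_000_000):06d}"
        self.pending[user_id] = code
        self.messenger.send(
            self.directory[user_id],
            f"Confirm your request with code {code}. "
            "If you made no such request, report this message.",
        )

    def confirm(self, user_id: str, code: str) -> bool:
        """Approve only if the echoed code matches; each code is single-use."""
        expected = self.pending.pop(user_id, None)
        return expected is not None and secrets.compare_digest(expected, code)
```

The point is the protocol rather than the code: a deepfaked caller can imitate a voice or a face, but it cannot intercept a code delivered to a number the institution registered in advance.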