
Apr 29 2025
Security

3 Tips for Improving Security Amid the Growth of Generative AI

How can higher education institutions tackle growing concerns about the use of generative artificial intelligence for cyberattacks?

Many people in the higher education sector, from administrative staff to students, are becoming more familiar with generative artificial intelligence tools such as ChatGPT, Microsoft Copilot and Google Gemini. They’re using these tools to help summarize meeting notes or draft emails in support of completing daily tasks.

While such programs can be beneficial for the average user, there are growing concerns about how they can be used negatively, including how large language models (LLMs) can power cyberattacks. As AI models improve, so too does the quality of phishing emails and falsified imagery. Earlier this year, Google Cloud released a report on generative AI being used for malicious purposes.

“AI is constantly evolving, and new tools are continually becoming available. But new does not always mean good, so these tools will need to be researched and vetted to ensure they are not linked to cybercriminal activity and can be trusted,” says Isaac Galvan, community program director for cybersecurity and privacy at EDUCAUSE.

He points to three key areas for universities and colleges to focus on to better protect their communities.


1. Develop a Clear Policy for AI Use with University Resources

Such a policy informs a university’s community about which tools users can rely on and which they should avoid, Galvan says. According to the 2025 EDUCAUSE AI Landscape Study, 39% of respondents say that their university has an AI use policy in place.

Galvan highlights how the University of Michigan approached its AI strategy.

“The university has deployed a suite of AI services grounded in four key principles: privacy, security, accessibility and equitable access. These guiding considerations are embedded in system design and configuration, data governance practices, and contractual agreements with third-party service providers. This thoughtful approach has played a critical role in driving adoption and building trust across the campus community,” Galvan says. 

2. Continuously Educate and Train Students and Staff

According to EDUCAUSE’s 2024 Cybersecurity and Privacy Horizon Report, “phishing emails are becoming a growing threat to students,” Galvan notes.

One effective step higher education institutions can take is to teach their students how to recognize and report suspicious emails, empowering them to protect the entire campus, he adds. Additionally, all users need to treat emails with links or attachments as potentially suspicious, even if they are from someone familiar.
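One warning sign that awareness training often covers is a link whose visible text shows one domain while the underlying target points somewhere else. As a minimal illustration (not any institution's actual tooling, and no substitute for a real email security gateway), the sketch below flags HTML links where the displayed domain does not match the href's domain:

```python
import re
from urllib.parse import urlparse

# Illustrative only: a crude check for one common phishing pattern --
# visible link text that names a trusted domain while the href
# actually points to a different one.
LINK_RE = re.compile(
    r'<a\s+href="(?P<href>[^"]+)"[^>]*>(?P<text>[^<]+)</a>', re.IGNORECASE
)

def domain(url: str) -> str:
    """Extract a lowercase hostname from a URL or bare domain string."""
    host = urlparse(url if "//" in url else "//" + url).hostname or ""
    return host.lower().removeprefix("www.")

def suspicious_links(html_body: str) -> list[str]:
    """Return hrefs whose target domain differs from the displayed text's domain."""
    flagged = []
    for match in LINK_RE.finditer(html_body):
        text = match.group("text").strip()
        # Only compare when the visible text itself looks like a domain or URL.
        if "." in text and domain(text) and domain(text) != domain(match.group("href")):
            flagged.append(match.group("href"))
    return flagged

email = '<p>Verify now: <a href="http://evil.example.net/login">bank.example.com</a></p>'
print(suspicious_links(email))  # flags the mismatched href
```

A heuristic like this catches only one pattern and is easy to evade; the point of the training Galvan describes is that human recognition and reporting remain the backstop.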


The report also recommended that higher education institutions invest in comprehensive, student-focused cybersecurity and privacy awareness training, while also embedding these topics across academic curricula.

“These efforts should address not only how students use technology on campus but also extend to off-campus activities, including the use of personal devices and platforms like social media,” Galvan says.

3. Improve the Organizational Approach to IAM

Last year, malicious actors mimicked a video conference call using deepfake technology to trick a worker into sending them millions of dollars. There was also the short-lived WormGPT, an LLM built for hackers, along with an unknown number of copycat tools still circulating.

A 2025 CrowdStrike AI security survey found that 80% of security teams prefer generative AI delivered through a platform instead of a point solution. It also found that 64% of respondents are either researching generative AI tools or have already purchased one.

As the adoption of generative AI tools becomes reality for many universities and colleges, Galvan notes, it is important to manage how they’re being used and by whom.

“To counter the growing threat of AI-driven attacks, cybersecurity and risk management leaders must invest in advanced identity and access management solutions, particularly technologies capable of verifying genuine human presence and distinguishing it from automated or malicious activity,” Galvan says. “It’s also essential to validate claims made in audio or video messages through alternative, trusted communication channels.”
