How AI Has Supercharged Phishing Against Education
Thanks to generative AI, attackers can now slightly vary both the domain and the message, endlessly churning out near-duplicates of the same scam.
“It’s just similar enough that we can definitely feel the pain, but it’s different enough that the automation that we have in place cannot just find those and rip them out,” Syn says.
What’s even worse, he says, is that AI is now so good at OSINT, or open-source intelligence (“going out and searching the internet for any kind of data it can find”), that it can pull in personal and school-specific details and “spear-phish you and a million other people.”
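To see why exact-match automation struggles with these near-duplicates, consider a minimal sketch of similarity-based filtering in Python. The blocklist, lookalike domain and threshold below are hypothetical illustrations, not details from any institution’s tooling.

```python
# Minimal sketch: why exact-match blocklists miss AI-generated near-duplicates.
# The domains and threshold are illustrative assumptions, not real data.
from difflib import SequenceMatcher

# Hypothetical blocklist of domains already identified as malicious.
KNOWN_BAD = {"financialaid-portal.com", "registrar-notices.net"}

def is_lookalike(domain: str, threshold: float = 0.85) -> bool:
    """Flag domains that closely resemble a known-bad one, even when
    a one-character tweak defeats an exact-match filter."""
    if domain in KNOWN_BAD:
        return True  # the case traditional automation already handles
    return any(
        SequenceMatcher(None, domain, bad).ratio() >= threshold
        for bad in KNOWN_BAD
    )

print(is_lookalike("financialaid-porta1.com"))  # True: near-duplicate caught
print(is_lookalike("library.example.edu"))      # False: genuinely different
```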
Attackers are impersonating institutional leaders using real details lifted from university websites and public communications, says Clark.
“The message can reference a real meeting or deadline to create urgency while attaching a malicious document to perform further compromise,” he says. “Because AI matches the tone and writing style perfectly, it feels routine, not suspicious, and that’s how people get tricked.”
Clark says AI-driven phishing is breaking the old playbook for security awareness.
WATCH: Four security trends to be mindful of in 2026.
“We trained people to look for bad grammar, strange formatting and other obvious red flags,” he notes, but “AI removed all of that.” Today’s phishing emails “look normal and relatable,” which means even staff who’ve passed every training module “can unexpectedly fall for a phishing attack.”
Why Education Is More Vulnerable to AI-Enabled Phishing
The culture of openness in higher education makes colleges and universities especially vulnerable to AI phishing, experts say.
“Schools are built on trust and openness, and attackers take advantage of that,” says Clark. “You’ve got large, diverse user populations, constant turnover, shared devices and a lot of public information available onsite and online about staff and operations. When you combine that with limited IT resources and decentralized environments, schools become honeypots for AI-driven social engineering.”
That expansive, decentralized digital landscape, Syn warns, makes schools “such a vulnerable target” because bad actors can easily find a foothold and work their way in.
According to Syn, trying to simply “outsmart” AI with more technology is bound to fail.
“If we just try the stalemate of ‘Can we one-up the AI?’ we’re probably going to lose,” Syn says.
READ MORE: Five ways to boost cybersecurity maturity in higher education.
Instead, he argues, institutions need to lean into the things that AI can’t fake: human-only trust signals and real-world verification.
“We have to find uniquely human things that these AIs and bad actors can’t know,” Syn says. “That’s the best defense, having that personalized interconnection knowledge that no one can spoof.”
In practice, that can be as simple as a shared passphrase between colleagues: a unique phrase that doesn’t appear anywhere online. If a strange request suddenly arrives “from” the university president, staff can ask for that phrase before taking another step.
Just as important, Syn says, is going “out of band.” If a voice message or email seems unusual, call the person back on a known-good number or talk to them in person. That simple step is how a Utah Valley University student discovered that a near-perfect email she thought was from her professor was actually generated by AI.
If human verification is one pillar, layered technical defenses are the other, says Clark.
“User awareness training is a good place to start, but it can’t carry the load anymore,” he says. “To keep up with more sophisticated attacks, organizations need layered defenses that combine strong identity protection, continuous risk-based access controls and advanced email and endpoint detection.”
What IT Leaders Can Do Now
For IT leaders in higher ed, the experts outline a pragmatic roadmap that starts with identity, then layers on controls and culture.
“First, institutions need to focus on locking down identity,” says Clark. “Then, they should ensure strong multifactor authentication is in place anywhere and everywhere it is applicable.” IT leaders also need to assume that some phishing attacks will get through, he says, “and ensure there are strong prevention, detection and response controls in place.”
DISCOVER: Identity and access management improves an institution’s security posture.
Syn is equally emphatic about multifactor authentication. “There’s a reason that banks give us a physical card and then make us memorize a PIN. MFA just works, and it has worked for decades, and we should use it everywhere.”
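For a sense of what one common MFA factor looks like under the hood, here is a minimal sketch of the time-based one-time password (TOTP) flow used by many authenticator apps, built on the open-source pyotp library. Secret storage and user lookup are simplified assumptions for illustration.

```python
# Minimal TOTP sketch using the pyotp library (pip install pyotp).
# Secret storage and user handling are simplified for illustration.
import pyotp

# Enrollment: generate a per-user secret and share it with the user's
# authenticator app, typically via a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Login: the server checks the six-digit code the user types in.
user_code = totp.now()             # stand-in for what the user would enter
print(totp.verify(user_code))      # True within the current 30-second window
print(totp.verify("000000"))       # almost certainly False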
Syn is also enthusiastic about hardware security keys, like YubiKeys from Yubico. “It’s such a safe thing because you have a physical token that a bad actor can’t replicate, so I swear by that.”
Clark says institutions also need to set clear guardrails around AI use, “including what tools are approved and what data, especially personally identifiable information, should never be entered into AI systems.” He says many AI tools are loosely governed or not internally controlled, which heightens both data leakage and supply chain risk.
“Additionally, every new AI platform introduces new identities, access paths and third parties that attackers can exploit or impersonate,” Clark says.
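One lightweight way to enforce such a guardrail, sketched below under the assumption of a simple pattern-based filter, is to redact obvious PII before any text leaves for an external AI tool. The patterns and sample prompt are hypothetical, and a production filter would need to cover far more, including names, addresses and student record IDs.

```python
# Minimal sketch of a pre-submission guardrail: redact recognizable PII
# patterns before text is sent to an external AI tool. Patterns are
# illustrative; real deployments would cover many more data types.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace recognizable PII with labeled placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

prompt = "Summarize: student jdoe@example.edu, SSN 123-45-6789, called 555-867-5309."
print(redact(prompt))
# Summarize: student [EMAIL REDACTED], SSN [SSN REDACTED], called [PHONE REDACTED].
```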
Syn says it’s worth keeping in mind that social engineering “dates back to biblical times” and is not something that will ever be completely solved.
“We have to be aware that this is not something we can cure,” Syn says. “We have to constantly look out for it, and ask, ‘How can we prevent this? What is it we can do to keep ourselves and our families safe?’”
