Feb 18, 2026
Security

AI-Driven Phishing Is Putting K–12 Schools at Risk

As generative artificial intelligence supercharges phishing and deepfakes, schools must adapt to protect a culture built on openness and trust.

Phishing has long been the most common entry point for cyberattacks on schools, but experts say generative artificial intelligence has fundamentally changed that threat.

“AI has dramatically increased both the speed and scale of attacks,” says Cory Clark, vice president of threat operations and managed security services at SonicWall. “Phishing messages that used to be sloppy and easy to spot can now be tailored, timely and written in a way that feels completely legitimate.”

Ben Syn, director of university and career education at KnowBe4, says that thanks to AI, attackers are able to “automate in hours what would take a bad actor weeks to put together.”

Cybersecurity experts say the education sector has become especially vulnerable to AI-enabled phishing, a shift that requires IT leaders to rethink how they defend against these attacks.


How AI Has Supercharged Phishing Against K–12 Schools

Thanks to generative AI, attackers can now make slight changes to domains and message wording, endlessly churning out near-duplicates of the same scam.

“It’s just similar enough that we can definitely feel the pain, but it’s different enough that the automation that we have in place cannot just find those and rip them out,” Syn says.

What’s even worse, he says, is that because AI is now so good at open-source intelligence — “going out and searching the internet for any kind of data it can find” — it can pull in personal and school-specific details and “spear-phish you and a million other people.”

Attackers are impersonating superintendents and principals using real details lifted from district websites and public communications, says Clark.

“The message can reference a real meeting or deadline to create urgency while attaching a malicious document to perform further compromise,” he says. “Because AI matches the tone and writing style perfectly, it feels routine, not suspicious, and that’s how people get tricked.”

Clark says AI-driven phishing is breaking the old playbook for security awareness.

“We trained people to look for bad grammar, strange formatting and obvious red flags,” he notes, but “AI removed all of that.” Today’s phishing emails “look normal and relatable,” which means even staff who’ve passed every training module “can unexpectedly fall for a phishing attack.”

ICYMI: Cybersecurity experts shared their top security insights at TCEA 2026. 

Why K–12 Education Is More Vulnerable to AI-Enabled Phishing

The culture of openness in K–12 education makes it especially vulnerable to AI phishing, experts say.

“Schools are built on trust and openness, and attackers take advantage of that,” says Clark. “You’ve got large, diverse user populations, constant turnover, shared devices and a lot of public information available onsite and online about staff and operations. When you combine that with limited IT resources and decentralized environments, schools become a honeypot for AI-driven social engineering.”

Syn agrees. “For teachers, the sharing of knowledge is a fundamental touchstone of education. They cannot not share. That’s what their whole reason for being is,” he says.

Unlike hospitals, which are constrained by HIPAA and strict privacy rules, education prizes visibility and collaboration, he adds. “Everyone has their own devices, and they all want to share, to connect and to collaborate.”

That expansive, decentralized digital landscape, Syn warns, makes schools “such a vulnerable target” because bad actors can easily find a foothold and work their way in.

FLIP THE SCRIPT: Artificial intelligence-powered cybersecurity solutions can help K–12 schools.

To Combat AI Phishing, Lean Into Human-Only Trust Signals

According to Syn, trying to simply “outsmart” AI with more technology is bound to fail.

“If we just try the stalemate of ‘Can we one-up the AI?’ we’re probably going to lose,” Syn says.

Instead, he argues, schools need to lean into the things that AI isn’t able to fake: human-only trust signals and real-world verification.

“We have to find uniquely human things that these AIs and bad actors can’t know,” Syn says. “That’s the best defense, is having that personalized interconnection knowledge that no one can spoof.”

In practice, that can be as simple as a shared passphrase between colleagues — a unique phrase that doesn’t appear anywhere online. If a strange request suddenly appears “from” the superintendent, staff can ask for that phrase before taking another step.

Just as important, Syn urges, is going “out of band”: If a voice message or email seems unusual, call the person back on a known good number or talk to them in person.

If human verification is one pillar, layered technical defenses are the other, says Clark.

“User awareness training is a good place to start, but it can’t carry the load anymore,” he says. “To keep up with more sophisticated attacks, organizations need layered defenses that combine strong identity protection, continuous risk-based access controls and advanced email and endpoint detection.”


What IT Leaders Can Do Now

For IT leaders in K–12, the experts outline a pragmatic roadmap that starts with identity, then layers on controls and culture.

“First, schools need to focus on locking down identity,” says Clark. “Then, they should ensure strong MFA is in place anywhere and everywhere it is applicable.” IT leaders also need to assume that some phishing attacks will get through, he says, and ensure there are strong prevention, detection and response controls in place.

Syn is equally emphatic about multifactor authentication.

“There’s a reason that banks give us a physical card and then make us memorize a PIN. That MFA just works, and it has worked for decades, and we should use it everywhere,” he says. He also advocates for the use of hardware security keys such as YubiKeys. “It’s such a safe thing because you have a physical token that a bad actor can’t replicate, so I swear by that.”
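
The factor Syn describes, a physical token plus a memorized secret, pairs something you have with something you know. As a minimal, hypothetical sketch rather than a tool either expert named, the snippet below uses the open-source pyotp library to show how a time-based one-time password, one common MFA factor, is generated and checked.

    # Minimal sketch: generating and verifying a time-based one-time password
    # (TOTP), one common MFA factor. The secret here is generated on the fly;
    # in practice it is enrolled once in the user's authenticator app.
    import pyotp

    secret = pyotp.random_base32()   # shared secret, known to server and device
    totp = pyotp.TOTP(secret)        # produces a 6-digit code every 30 seconds

    code = totp.now()                # what the authenticator app would display
    print("Current code:", code)
    print("Verifies:", totp.verify(code))  # True only within the current window

A hardware key such as a YubiKey goes a step further: the private key never leaves the device, so there is no code for a phishing page to capture.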

Clark says schools also need to set clear guardrails around AI use, including what tools are approved and what data, especially personally identifiable information, should never be entered into AI systems. He says many AI tools are loosely governed or not internally controlled, which increases the risk of data leakage and supply chain exposure.

“Additionally, every new AI platform introduces new identities, access paths and third parties that attackers can exploit or impersonate,” Clark says.
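
One way to picture the guardrail Clark describes is a simple pre-submission check that flags obvious personally identifiable information before a prompt reaches an AI tool. The sketch below is hypothetical and illustrative, not a product either expert recommended, and the patterns cover only a few common U.S. formats.

    # Hypothetical pre-submission check that flags common PII patterns before
    # text is sent to an approved AI tool. Illustrative only; real guardrails
    # would pair this with policy, training and vendor-side controls.
    import re

    PII_PATTERNS = {
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
        "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    }

    def flag_pii(text):
        """Return the names of any PII patterns found in the text."""
        return [name for name, pattern in PII_PATTERNS.items() if pattern.search(text)]

    prompt = "Summarize the IEP for John Doe, SSN 123-45-6789."
    found = flag_pii(prompt)
    if found:
        print("Blocked: prompt appears to contain", ", ".join(found))
    else:
        print("OK to send to the approved AI tool")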

Syn says it’s worth keeping in mind that social engineering “dates back to biblical times” and is not something that will ever be completely solved.

“We have to be aware that this is not something we can cure,” Syn says. “We have to constantly look out for it and ask, ‘How can we prevent this? What is it we can do to keep ourselves and our families safe?’”
