
Mar 18 2026
Artificial Intelligence

Applying AI Risk Frameworks in Higher Education: What IT Leaders Need to Know

Visibility and threat identification are key to managing AI-related risks.

Higher education IT leaders are approaching artificial intelligence adoption with cautious optimism. There is a very real sense that AI can transform student services, research and operations across campus. At the same time, there is a concern that no one fully understands where AI is running, what data it can see or how exposed institutions truly are.

In that respect, universities are not all that different from commercial organizations. I would estimate that a small fraction of the largest commercial organizations feel they have some handle on AI risk, and even they are still discovering blind spots. The rest are spread across a spectrum that runs from “We’re trying to get there” to “We don’t know where to begin.” Higher education mirrors this landscape. Some large systems and flagship institutions have resources and staff dedicated to AI governance, while many others are still wrestling with the basic questions of scope, ownership and accountability.

That is where AI risk frameworks — particularly the emerging discipline of continuous threat exposure management (CTEM) — become important. It is tempting to think about AI risk purely in terms of tools and vendors, but before we get to the tools themselves, we need a framework.


What Is an AI Framework?

In the risk-management sense, an AI framework is a structured set of criteria and processes for evaluating AI initiatives before and after deployment. It forces you to ask which systems you’re talking about, what kinds of data they touch, who owns decision-making and how you will determine whether your efforts are working.

With a framework in place, you can start to apply consistent criteria, patterns and controls to your AI initiatives. That is the only realistic way to keep up with the pace at which AI is entering your campus. Some AI tools come in through centrally procured platforms. Some arrive via departmental projects or faculty-led pilots. Some show up as shadow AI, as students or staff adopt new tools outside of IT’s purview. A framework gives you a way to manage all of that without having to personally inspect every new initiative.
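As a thought experiment, that intake discipline can be sketched in a few lines of code. This is a hypothetical illustration, not any particular product or framework: the `AIInitiative` record, `SENSITIVE` categories and `review_required` rule are all assumptions, but they show how the same criteria can be applied whether a tool arrived through central procurement, a departmental pilot or shadow adoption.

```python
from dataclasses import dataclass

# Hypothetical intake record for any AI initiative, however it arrived on campus.
@dataclass
class AIInitiative:
    name: str
    origin: str           # "central", "departmental" or "shadow"
    data_categories: set  # e.g. {"student_records", "public"}
    owner: str            # accountable decision-maker ("" if unknown)

# Assumed sensitive data categories; a real institution would define its own.
SENSITIVE = {"student_records", "hr", "research", "identity"}

def review_required(item: AIInitiative) -> bool:
    """Apply the same criteria to every initiative: touching sensitive
    data or lacking a named owner triggers a governance review."""
    return bool(item.data_categories & SENSITIVE) or not item.owner

pilot = AIInitiative("advising chatbot", "departmental", {"student_records"}, "Registrar")
print(review_required(pilot))  # True: it touches student records
```

The point of the sketch is that the rule, not a person, makes the first-pass decision, which is what lets governance scale past individual inspection.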

Continuous Threat Exposure Management Helps Manage Risk

CTEM is an area that will become central to how organizations, including universities, handle AI risk over the next nine to 12 months. It’s about answering three questions on an ongoing basis: What is in my environment? Which of those assets matter most to my risk posture? How do I continuously reduce that risk as new technologies and use cases appear?

Those questions are especially challenging in higher education because the environment is so diverse. You’re dealing with on-premises and cloud systems, and IoT and operational technology across campus. You have departmental applications that may never have passed through central IT. On top of that, you have agentic AI tools and services that staff, faculty and students are pulling in to solve their own problems.

DISCOVER: AI in higher ed comes with security risks.

Most institutions can see pieces of this landscape, but not enough of it to reliably separate harmless noise from serious threats. That is where CTEM comes into play. Before you can govern AI, you need to understand your assets. You need to know what systems exist, where they reside, what data they process and how they connect to each other. In higher ed, that includes student support tools, analytics and research environments, identity platforms, learning management systems — anywhere AI interacts with sensitive data.
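The CTEM questions above — what exists, what matters most, how to keep reducing risk — can be made concrete with a toy prioritization loop. This is a minimal sketch under assumed attributes and weights (the `DATA_WEIGHT` values and scoring rule are illustrative inventions, not a CTEM standard or vendor scoring model):

```python
# Toy asset inventory: sensitive data, exposure and AI involvement all raise urgency.
assets = [
    {"name": "LMS",              "data": "student_records", "internet_facing": True,  "ai_enabled": True},
    {"name": "research cluster", "data": "research",        "internet_facing": False, "ai_enabled": True},
    {"name": "campus kiosk",     "data": "public",          "internet_facing": True,  "ai_enabled": False},
]

# Assumed weights for data sensitivity; an institution would calibrate its own.
DATA_WEIGHT = {"student_records": 3, "research": 2, "public": 0}

def exposure_score(asset: dict) -> int:
    """Higher score = more urgent to address."""
    score = DATA_WEIGHT[asset["data"]]
    score += 2 if asset["internet_facing"] else 0
    score += 1 if asset["ai_enabled"] else 0
    return score

# Re-running this sort as the inventory changes is the "continuous" in CTEM.
for asset in sorted(assets, key=exposure_score, reverse=True):
    print(asset["name"], exposure_score(asset))
```

Real CTEM tooling adds discovery, threat intelligence and remediation tracking on top, but the underlying shape — inventory, score, re-rank, repeat — is the same.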

ServiceNow Acquisitions To Bolster CTEM Capabilities

ServiceNow is becoming the central platform where the CTEM approach comes to life as it integrates its recent acquisitions of Armis and Veza into its ecosystem. These acquisitions aim to give institutions visibility into all of their assets in one place, with consistent governance and risk controls applied on top. The goal is to move away from stitching together multiple tools and instead give IT teams a single environment where CTEM-style asset visibility and AI-related risk management work together, making it easier to identify true threats.

These capabilities will be available in a couple of ways. If a customer already has ServiceNow, we can effectively bolt these components onto their existing platform. If they don’t, we can still deliver the same risk capabilities via ServiceNow’s Integrated Risk Management offerings as a stand-alone deployment.

UP NEXT: Here are four AI trends to watch this year.

AI Frameworks Should Encourage Innovation With Limits

Higher education adds one more wrinkle: the culture of innovation. Universities thrive on academic freedom. IT teams don’t want to be seen as the department of “no,” reflexively shutting down AI initiatives because the risk feels too daunting to address. The real promise of CTEM and structured risk frameworks is that they let you change “no” to “yes — within these boundaries.”

Once you have visibility into your environment and a way to prioritize what matters, you can start creating a walled garden: a space that encourages experimentation with AI where you have visibility, guardrails and control. The goal is not to lock everything down; it is to create conditions where AI adoption does not outpace your ability to protect the institution.

My advice to IT leaders is to start with your AI-powered service desk and student support agents. That’s the fastest path to real operational lift — and the fastest path to risk — because those tools sit on top of identity, tickets, HR and student records, and they often take automated actions. Apply an AI risk framework there first: Map every use case and data flow, classify the decisions AI can influence, and set guardrails such as human approval of high-impact actions, least-privilege access, full audit logging and clear retention policies. If you can govern AI in support workflows, you can govern it anywhere on campus.
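The guardrails named above — least-privilege access, human approval of high-impact actions and full audit logging — can be sketched as a simple policy gate. Everything here is a hypothetical illustration (the permission sets, action names and `execute_action` helper are invented for this example), not any vendor's API:

```python
import logging

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("ai_audit")  # every decision is audit-logged

# Assumed policy: actions that always need a human in the loop,
# and a least-privilege permission set per AI agent.
HIGH_IMPACT = {"reset_password", "change_enrollment", "modify_hr_record"}
AGENT_PERMISSIONS = {"support_agent": {"read_ticket", "reset_password"}}

def execute_action(agent, action, approved_by=None):
    """Gate an AI agent's action: check permissions first, then
    require human approval for high-impact actions."""
    if action not in AGENT_PERMISSIONS.get(agent, set()):
        audit.info("DENIED %s -> %s (not permitted)", agent, action)
        return "denied"
    if action in HIGH_IMPACT and approved_by is None:
        audit.info("PENDING %s -> %s (needs human approval)", agent, action)
        return "pending_approval"
    audit.info("EXECUTED %s -> %s approved_by=%s", agent, action, approved_by)
    return "executed"

print(execute_action("support_agent", "reset_password"))                      # pending_approval
print(execute_action("support_agent", "reset_password", approved_by="jdoe"))  # executed
```

The design point is that the agent never decides its own authority: permissions and approvals live outside the AI, where they can be audited and tightened without retraining anything.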

This article is part of EdTech: Focus on Higher Education’s UniversITy blog series featuring analysis and recommendations from CDW experts.
