What Is an AI Framework?
An AI framework is a structured set of criteria, processes and controls for deciding how AI is designed, deployed and governed across an institution. It forces you to ask which systems you’re talking about, what kinds of data they touch, who owns decision-making, and how you will determine whether your efforts are working.
With a framework in place, you can start to apply consistent criteria, patterns and controls to your AI initiatives. That is the only realistic way to keep up with the pace at which AI is entering your campus. Some AI tools come in through centrally procured platforms. Some arrive via departmental projects or faculty-led pilots. Some show up as shadow AI, as students or staff adopt new tools outside of IT’s purview. A framework gives you a way to manage all of that without having to personally inspect every new initiative.
Continuous Threat Exposure Management Helps Manage Risk
Continuous threat exposure management (CTEM) will become central to how organizations, including universities, handle AI risk over the next nine to 12 months. It’s about answering three questions on an ongoing basis: What is in my environment? Which of those assets matter most to my risk posture? How do I continuously reduce that risk as new technologies and use cases appear?
Those questions are especially challenging in higher education because the environment is so diverse. You’re dealing with on-premises and cloud systems, and IoT and operational technology across campus. You have departmental applications that may never have passed through central IT. On top of that, you have agentic AI tools and services that staff, faculty and students are pulling in to solve their own problems.
DISCOVER: AI in higher ed comes with security risks.
Most institutions can see pieces of this landscape, but not enough of it to reliably separate harmless noise from serious threats. That is where CTEM comes into play. Before you can govern AI, you need to understand your assets. You need to know what systems exist, where they reside, what data they process and how they connect to each other. In higher ed, that includes student support tools, analytics and research environments, identity platforms, learning management systems — anywhere AI interacts with sensitive data.
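As a minimal sketch of that inventory step (the asset names, fields and data classes here are hypothetical, not from any particular product), an AI asset register can start as simple structured records that capture what each system is, where it lives, what data it processes and what it connects to — which is enough to triage the assets that touch sensitive data first:

```python
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    # Hypothetical record for one AI-enabled system on campus
    name: str
    location: str                                      # e.g., "on-prem", "cloud", "departmental"
    data_classes: list = field(default_factory=list)   # kinds of data the system processes
    connects_to: list = field(default_factory=list)    # other systems it touches

# Illustrative inventory entries
inventory = [
    AIAsset("advising-chatbot", "cloud", ["student records"], ["SIS", "LMS"]),
    AIAsset("research-notebook", "on-prem", ["public datasets"], []),
]

# Data classes that make an asset high priority for review
SENSITIVE = {"student records", "HR data", "health data"}

def high_priority(assets):
    """Return assets that process any sensitive data class."""
    return [a for a in assets if SENSITIVE & set(a.data_classes)]

print([a.name for a in high_priority(inventory)])  # prints ['advising-chatbot']
```

Even a register this simple lets a team separate the advising chatbot that reads student records from a research notebook running on public data, which is the prioritization question CTEM keeps asking.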
ServiceNow Acquisitions To Bolster CTEM Capabilities
ServiceNow is becoming the central platform for the CTEM approach as it integrates its recent acquisitions of Armis and Veza into its ecosystem. These acquisitions will help institutions see all of their assets in one place and then apply consistent governance and risk controls. The goal is to move away from stitching together multiple tools and instead give IT teams a single environment where CTEM-style asset visibility and AI-related risk management work together, making it easier to identify true threats.
These capabilities will be available in a couple of ways. If a customer already has ServiceNow, we can effectively bolt these components onto their existing platform. If they don’t, we can still deliver the same risk capabilities via ServiceNow’s Integrated Risk Management offerings as a stand-alone deployment.
UP NEXT: Here are four AI trends to watch this year.
AI Frameworks Should Encourage Innovation With Limits
Higher education adds one more wrinkle: the culture of innovation. Universities thrive on academic freedom. IT teams don’t want to be seen as the department of “no,” reflexively shutting down AI initiatives because the risk feels too daunting to address. The real promise of CTEM and structured risk frameworks is that they let you change “no” to “yes — within these boundaries.”
Once you have visibility into your environment and a way to prioritize what matters, you can start creating a walled garden: encouraging experimentation with AI in an environment where you have visibility, guardrails and control. The goal is not to lock everything down; it is to create conditions where AI adoption does not outpace your ability to protect the institution.
My advice to IT leaders is to start with your AI-powered service desk and student support agents. That’s the fastest path to real operational lift — and the fastest path to risk — because those tools sit on top of identity, tickets, HR and student records, and they often take automated actions. Apply an AI risk framework there first: Map every use case and data flow, classify the decisions AI can influence, and set guardrails such as human approval of high-impact actions, least-privilege access, full audit logging and clear retention policies. If you can govern AI in support workflows, you can govern it anywhere on campus.
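One of those guardrails, human approval of high-impact actions, can be sketched in a few lines. This is an illustrative toy, not any vendor’s API; the action names and approval mechanism are assumptions made for the example. The idea is simply that every AI-initiated action is logged, and anything on a high-impact list is blocked unless a named human approver signs off:

```python
# Actions an AI support agent might attempt; which ones count as
# high impact is a hypothetical policy choice for this sketch.
HIGH_IMPACT = {"reset_password", "change_enrollment", "issue_refund"}

audit_log = []  # full audit trail of everything the agent tried

def execute(action, approved_by=None):
    """Log every AI-initiated action; block high-impact ones without approval."""
    if action in HIGH_IMPACT and approved_by is None:
        audit_log.append((action, "blocked: needs human approval"))
        return False
    audit_log.append((action, f"executed (approver: {approved_by or 'auto'})"))
    return True

execute("send_ticket_update")                       # low impact: runs automatically
execute("reset_password")                           # blocked pending approval
execute("reset_password", approved_by="helpdesk")   # runs with a named approver
```

The same pattern generalizes: the high-impact list comes from classifying the decisions AI can influence, and the audit log and approver field give you the retention and accountability trail the framework calls for.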

