AI Adoption Starts With Data Governance
When institutions ask us about AI adoption, they often start with the tools. Which platform is best? How can they integrate tools like Copilot or Gemini into their existing environments? Should they build their own models?
But implementing AI means taking a step back from tool selection and focusing on your foundation, and the foundation of all AI tools and models is data. Institutions must first understand the data they have: where it's located; who owns it; and how it is classified, secured and maintained.
Unclassified sensitive data, data with confusing or inconsistent definitions, and unsecured or poor-quality data can all produce unexpected, and sometimes dangerous, outcomes when fed into a large language model.
Embedded AI vs. Custom AI
There are two broad flavors of AI in higher ed: embedded AI, in which capabilities are built into tools institutions already use, and custom AI, in which models and solutions are built on a college or university’s own institutional data. They require different levels of scrutiny, but both depend on the same underlying truth: If your data is wrong, out of date, poorly classified or badly secured, the AI will amplify those problems.
Even in embedded tools, we see trust issues. If AI drafts content in the wrong voice, uses a different language variant, or surfaces outdated or irrelevant information, users quickly lose confidence and disengage. That’s an operational problem as much as a technical one: If people don’t trust the outputs, they won’t adopt the tools, and your ROI suffers.
In higher education, custom AI solutions could include student success models, retention analytics or personalized advising. Here, the stakes are even higher. Low-quality data leads to bad predictions, which lead to bad decisions. In a student’s life, that might mean misaligned course recommendations, missed risk signals for retention or an inaccurate assessment of support needs.
What Data and AI Governance Look Like in Higher Ed
When we talk about data governance in universities, we’re talking about much more than a committee or a policy document. Governance defines where the data sits, who owns it and what level of quality it must meet. That sounds simple, but in higher ed, this can be a challenge.
In practice, poor data governance could look like inconsistent representations of “United States” in a dataset (“USA,” “US” and “United States of America,” for example), all treated as separate values. It could look like a confusing form field that means different things to different people. It could involve individual departments pulling and sharing their own data without considering the institutionwide implications.
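To make that first failure concrete, here is a minimal sketch of a normalization pass that collapses variant country labels into one governed value. The column name, the variants and the canonical mapping are all hypothetical:

```python
import pandas as pd

# Hypothetical enrollment extract; the "country" field arrives in inconsistent forms.
df = pd.DataFrame({
    "country": ["USA", "US", "United States of America", "United States", "Canada"]
})

# A canonical mapping agreed on through governance, not improvised per department.
CANONICAL = {
    "usa": "United States",
    "us": "United States",
    "united states of america": "United States",
    "united states": "United States",
}

def normalize_country(value: str) -> str:
    """Map known variants to the governed label; pass unknown values through."""
    return CANONICAL.get(value.strip().lower(), value)

df["country"] = df["country"].map(normalize_country)
print(df["country"].value_counts())  # United States: 4, Canada: 1
```

Without a step like this, an AI model (or even a simple report) treats the four variants as four different countries, and every downstream count, join and prediction inherits the error.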
Governance puts structure around this to eliminate confusion and create a single source of truth when it comes to definitions and expectations. It also addresses access controls, indicating who is allowed to see what, how sensitive data is protected and what is off-limits for AI use.
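Here is a sketch of what such access rules might look like if encoded in software, assuming hypothetical roles and classification labels:

```python
# Hypothetical policy: which data classifications each role may touch, and
# which classifications are permitted as inputs to AI systems at all.
ACCESS_POLICY = {
    "advisor": {"public", "internal", "student_record"},
    "analyst": {"public", "internal"},
    "ai_training": {"public"},  # only de-identified, public data may feed models
}

def can_access(role: str, classification: str) -> bool:
    """Return True if the governed policy permits this role to use this data."""
    return classification in ACCESS_POLICY.get(role, set())

assert can_access("advisor", "student_record")
assert not can_access("ai_training", "student_record")  # off-limits for AI use
```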
AI governance builds on that by focusing on how models are trained, evaluated and used. This ensures institutions are training AI models on data that they are allowed to use, that this data is fresh, and that there is a way to track and correct harmful or biased outcomes. AI governance also ensures that an institution can explain and defend how important decisions are being made if necessary.
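As an illustration, an AI governance gate in front of a training job might check permission and freshness and write every decision to an audit trail. The function, the 90-day window and the source name below are assumptions for the sketch, not a prescribed standard:

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=90)  # assumed freshness window for training data

def approve_training_source(name, permitted, last_refreshed, audit_log):
    """Admit a source only if its use is permitted and its data is fresh."""
    fresh = datetime.now(timezone.utc) - last_refreshed <= MAX_AGE
    approved = permitted and fresh
    # The audit trail is what lets the institution explain and defend
    # how a model's training data was chosen, if that is ever questioned.
    audit_log.append({
        "source": name,
        "permitted": permitted,
        "fresh": fresh,
        "approved": approved,
        "checked_at": datetime.now(timezone.utc).isoformat(),
    })
    return approved

log = []
approve_training_source("advising_notes", permitted=False,
                        last_refreshed=datetime.now(timezone.utc), audit_log=log)
print(log[-1]["approved"])  # False: use is not permitted, so the source is rejected
```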
