

Mar 04 2026
Artificial Intelligence

Determining the Roles Required for Effective AI Governance in Higher Ed

A wide array of stakeholders, including IT services and security teams, is needed to guide artificial intelligence policies.

From driving efficiencies in the admissions process to assisting students as on-demand tutors and accelerating data analysis in research, the speed with which artificial intelligence technologies have infused every aspect of campus life is dizzying. Unfortunately, few universities have a fully operationalized governance framework to manage AI use across the campus.

This presents a number of risks to the institution, including threats to data security, data privacy and academic integrity. As universities work to establish AI governance frameworks, one of the biggest challenges is determining who should have a seat at the table and how to divvy up the roles and responsibilities required for effective governance.

Who Are the Campus Stakeholders in AI Governance?

For many institutions, the issue of AI governance often surfaces first in the day-to-day activities of instruction and learning. For example, instructors may seek guidance from leadership on an AI use policy for student writing assignments. Over time, questions about AI use crop up throughout the institution, including student advising and student services, research programs, athletics, human resources, and campus operations. The wide range of campus stakeholders requiring guidance on AI use quickly becomes apparent.


Austin Community College established its Collegewide AI Strategic Planning Committee in fall 2024. CAISPC was charged with a number of tasks, including collecting data, making policy recommendations, building educational frameworks, developing guidelines and classifications for tool vetting and professional development, and communicating to and educating the college community.

“We work to make sure that all voices across the college are heard,” says Stephanie Long, co-chair of CAISPC at ACC. “Our membership includes critical stakeholders in academic leadership, including associate vice chancellors and academic deans; critical players in information technology, human resources, business services and student affairs; representation from our collegewide shared governance associations; and helpers across the college, from instructional designers to faculty and students. We have worked with our Student Rights and Responsibilities Committee, our office of assessment and our friends in IT as we rolled out enterprise access to Gemini and NotebookLM for students, faculty, staff, and various other stakeholders.”

Leadership’s Role in AI Governance

Because AI is both an emerging area for governance and one that touches a wide range of processes and interests, some universities are opting to create new roles to manage AI policy. Cornell University recently named Thorsten Joachims as its vice provost for AI strategy. His role includes leadership of Cornell’s AI Strategy Council. North Carolina State University created the executive director of data science and AI academy role in 2021, naming Rachel Levy to this position, with responsibilities that include co-chairing the university’s campuswide Artificial Intelligence Advisory Group.

WATCH: Four AI trends to monitor in 2026.

“These leaders usually have a background in data and/or technology, may sit in the IT department, provost or other specific offices, and closely collaborate with a board of interdisciplinary leaders to create curriculums and establish guardrails,” says Frank Attaie, general manager of U.S. Public Sector at IBM. “This type of model, consisting of an interdisciplinary board with one or two leaders, can be effective, as it relies on clear accountability while encouraging perspectives from stakeholders across the university and facilitates nuanced discussion of complex use cases. When interdisciplinary teams are involved in AI governance discussions, the needs of a wider range of the university population can be represented.”

The IT Services Team Delivers Visibility

Effective governance can’t happen without visibility into how AI is being used on campus. The university’s IT services team plays a critical role in collecting data on AI use cases, which are then used to set the parameters of AI policies on campus.

“Running this inventory can be especially complex for big universities with thousands of students, both those on campus and those pursuing remote degrees, and hundreds of faculty and staff,” Attaie says. “This is where universities may tap into technology like IBM’s watsonx Orchestrate to help them see all of the AI across the campus in a single dashboard, easily supervise outcomes and even orchestrate the work across autonomous agents. With the right tools that facilitate visibility, universities will get one step closer to making sure all AI operations happen within their determined safety guardrails and governance guidelines.”
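Even without a commercial platform, the inventory work Attaie describes can be started with a simple aggregation of reported AI use. The sketch below is illustrative only (all field names, tools and departments are hypothetical), grouping use-case records by tool and flagging where sensitive data is involved:

```python
from collections import defaultdict

# Hypothetical inventory records, e.g. gathered from a campus survey or
# procurement review; the field names here are assumptions for illustration.
records = [
    {"tool": "ChatGPT", "dept": "Admissions", "handles_pii": True},
    {"tool": "Gemini", "dept": "Biology", "handles_pii": False},
    {"tool": "ChatGPT", "dept": "Biology", "handles_pii": False},
]

def summarize(records):
    """Group reported AI use by tool, noting departments and PII exposure."""
    summary = defaultdict(lambda: {"depts": set(), "handles_pii": False})
    for r in records:
        entry = summary[r["tool"]]
        entry["depts"].add(r["dept"])
        # If any department reports PII exposure, flag the tool for review.
        entry["handles_pii"] = entry["handles_pii"] or r["handles_pii"]
    return dict(summary)

print(summarize(records))
```

A summary like this gives a governance committee a first-pass view of which tools are in use where, and which ones touch sensitive data and therefore need closer vetting.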

DISCOVER: Get the tools and guidance that support your campus AI initiatives.

Managing Risk in AI Governance

Risk management and strategic planning are key components of governance. The IT security team plays an important role in formulating strategy and mitigating risk in support of AI governance policies.

“Right now, we’re focusing on how we can provide security teams with guardrails for AI,” says Rob Sheldon, senior director of public policy and strategy for CrowdStrike. “There are now tools available that can be set up and configured by the security team in advance to prevent anyone from using an AI resource in a way that introduces risk to the enterprise. We want to steer the departments providing access to AI tools to validate that they are securely handling data, including the use of multifactor authentication. You want to get to a point where you only allow the use of AI tools onto the network that the security team has validated and approved.”
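The allowlist approach Sheldon describes can be sketched in a few lines. This is not any vendor's implementation, just a minimal illustration of the policy: unvetted tools are blocked by default, and vetted tools can still require multifactor authentication (tool names and policy fields are hypothetical):

```python
# Tools the security team has validated and approved; everything else
# is denied by default. Entries and fields are illustrative assumptions.
APPROVED_AI_TOOLS = {
    "gemini": {"mfa_required": True},
    "notebooklm": {"mfa_required": True},
}

def is_request_allowed(tool_name: str, user_mfa_verified: bool) -> bool:
    """Permit a request only for vetted tools, enforcing MFA where required."""
    policy = APPROVED_AI_TOOLS.get(tool_name.lower())
    if policy is None:
        return False  # unvetted tool: block by default
    if policy["mfa_required"] and not user_mfa_verified:
        return False  # vetted tool, but the user has not completed MFA
    return True
```

The key design choice is deny-by-default: a tool reaches the network only after the security team has validated how it handles data.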

The IT security team is especially useful for extending governance into uses of AI that might otherwise fall outside risk mitigation processes, such as individual sessions with an AI model.

“Sometimes, a student or staff member might inadvertently input sensitive personally identifiable information into a session with an AI model, which is against university policy but not preventable by the model itself,” says Sheldon. “Tools like CrowdStrike’s Falcon AI Detection and Response now exist for IT security teams to configure AI detection and to prevent this kind of data leakage from happening within a session. AI detection and response technology helps universities avoid the kind of lengthy, complex investigations and notifications that happen in the wake of such an incident.”
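The in-session screening Sheldon describes can be illustrated generically. The sketch below is not CrowdStrike's product, just a minimal pattern-matching screen that flags obvious PII in a prompt before it reaches a model; real detection tools are far more sophisticated, and these two patterns are assumptions for illustration:

```python
import re

# Illustrative patterns only; production-grade detection covers many more
# PII categories and uses far more robust techniques than regexes.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the PII categories detected; an empty list means the prompt may pass."""
    return [name for name, pattern in PII_PATTERNS.items() if pattern.search(prompt)]
```

A screen like this, run before the prompt leaves the network, is what lets policy be enforced at the session level rather than relying on the model to refuse sensitive input.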

LEARN MORE: Managed services can simplify AI adoption.

Evolving Roles and Responsibilities in AI Governance

Even as an institution’s AI governance plan takes shape, the technology itself and how it is used on campus continue to evolve. Universities can expect that the roles directly involved in defining policy, establishing accountability, conducting inventories of use cases, assessing risks, and overseeing monitoring and training will continue to evolve as well. With both the technology and policy in a state of flux, flexibility is a must.

“One thing that has become clear through our robust and thoughtful discussions is that AI policy is not a one-and-done endeavor,” says Alex Watkins, co-chair of CAISPC at ACC. “As a committee, we have been working from an education-first perspective. Although we have introduced some policies and guidance, like requiring that all faculty add a statement in their syllabus about how AI can be used in their course, our primary focus has been on building the infrastructure and understanding needed for folks across ACC to make educated and informed decisions about AI use in their roles.”

Whatever changes may occur, a robust cross-section of the university community should always be at the heart of sound AI governance.

“Bring the people to the table who will enact and be affected by the policy,” Watkins says. “While the work may be slow, a grassroots, community-based solution will take less time to implement than a top-down approach. Respect differences in approaches and opinions about AI and talk a lot about what reasonable solutions and approaches to AI look like. When people feel safe and heard, true innovation can happen.”

da-kuk/Getty Images