Jul 03 2024
Management

How to Craft a Generative AI Use Policy in Higher Education

With students, faculty and staff already using tools such as ChatGPT, universities need to set guardrails for how and why generative artificial intelligence is implemented.

There’s no point in waiting any longer: If your college or university doesn’t have a generative AI use policy, it’s time to build one before the fall semester gets here.

There’s plenty of guidance from schools that already have policies on generative artificial intelligence, including dozens of examples from across the higher education landscape. There’s also plenty of need for such guidance, judging by the 2024 EDUCAUSE AI Landscape Study, which asked higher education leaders how well their employers are developing acceptable use policies on AI.

“Only 23% of respondents indicated that their institution has any AI-related acceptable use policies already in place, and nearly half (48%) of respondents disagreed or strongly disagreed that their institution has appropriate policies and guidelines in place to enable ethical and effective decision-making about AI use,” EDUCAUSE reports.

Getting Started on Your Generative AI in Higher Education Use Policy

Why aren’t more colleges making progress on generative AI use policies?

“Getting started can feel daunting,” says Jenay Robert, a senior researcher at EDUCAUSE and author of the AI Landscape Study.

There’s so much to think about when developing a policy, including cheating, fairness and data security. But one convention in technology development circles could be a good rule of thumb: Start small, figure out what does and doesn’t work, and build from there.

You could begin with the basics, giving students clear guidance on acceptable AI use. Stanford University puts it this way: “Using generative AI tools to substantially complete an assignment or exam (e.g. by entering exam or assignment questions) is not permitted. Students should acknowledge the use of generative AI (other than incidental use) and default to disclosing such assistance when in doubt.”

When moving beyond the basics, you’ll need a flexible foundation because AI technologies are rapidly evolving.

“Standardized, one-size-fits-all AI policies are not sustainable in the long term,” the experts at Duke University caution. Set aside time to understand the issues at stake for faculty, students, staff and administrators. The International Journal of Educational Technology in Higher Education provides a strong overview based on research with students and institutions in Hong Kong.

RELATED: Learn how Temple University created an artificial intelligence policy.

What to Do About Academic Integrity and Generative AI

Generative AI makes it easy for students to create text that seems like a human wrote it. And tools for detecting AI-generated text are notoriously unreliable, as this report from the University of Kansas Center for Teaching Excellence points out.

However, there’s no denying that students can cut corners and use generative AI when under pressure to finish assignments.

One solution is to change the design and structure of assignments, which can substantially reduce students’ likelihood of cheating — and enhance their learning, Yale University notes.

Princeton University builds on the concept: “After students complete their own drafts of an assigned essay, invite them to request a draft of the assignment from ChatGPT. Facilitate a discussion in which students analyze how the drafts compare, for better and for worse.”

These viewpoints underscore the shifting perspective on generative AI.

“Our students are part of a digital workforce and a digital society,” Robert says. “How can we help students become good digital citizens? All of that will tie back to policy in some way.”

For instance, extremely restrictive AI policies could prevent a professor from showing what happens when generative AI produces inaccurate or biased outputs, Robert notes.

Three Areas of Generative AI Policy Focus for Higher Education

Robert also co-authored the 2024 EDUCAUSE Action Plan: AI Policies and Guidelines. It advises higher education leaders to take three perspectives into account when crafting AI policies.

  1. Governance. Generative AI uses predictive models to create sentences and paragraphs. These predictions are not always accurate: models occasionally hallucinate, producing factual errors and nonexistent citations. They can also produce biased outputs that fail to account for social and cultural differences in the user population. Generative AI policies should address such issues of ethics, equity and accuracy, Robert says.

    Everyday use of ChatGPT and similar tools by faculty, staff and students could also pose serious risks and legal liabilities. Clemson University cautions against entering private data into public generative AI models, whose providers may use chat data to train future models. Putting sensitive personal data such as Social Security and credit card numbers into a public generative AI application could violate federal and state privacy and data protection laws; one possible technical guardrail is sketched after this list.

  2. Pedagogy. College professors and instructors generally have wide latitude in how they conduct their classrooms and assign coursework. Schools that already have generative AI policies encourage professors to establish clear and specific generative AI guidance for their courses, laying out in the course syllabus what is allowed and what is forbidden.

    The University of California, Los Angeles, for example, suggests that instructors “ask students to use ChatGPT and ‘fact check’ the response provided by finding primary and secondary sources to back up the information provided.”

  3. Operations. Universities may be designing and maintaining their own AI infrastructure, which will require technical training and support, and leaders will be looking for ways to implement AI that make operations more efficient and effective. Generative AI policies should account for these operational issues.
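
The data protection caution in the governance item above is concrete enough to sketch in code. What follows is a minimal, hypothetical Python illustration of one way a campus tool might screen prompts for obvious sensitive identifiers before they reach a public generative AI service; the redact_prompt function and its regex patterns are illustrative assumptions, not a vetted PII filter or any vendor’s actual API.

    import re

    # Hypothetical patterns for the two identifiers named above. A real
    # deployment would need far broader coverage (names, student IDs, records).
    PATTERNS = {
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "CREDIT_CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    }

    def redact_prompt(text: str) -> str:
        """Replace anything matching a known sensitive pattern with a placeholder."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[REDACTED {label}]", text)
        return text

    prompt = "Advise this student: SSN 123-45-6789, card 4111 1111 1111 1111."
    print(redact_prompt(prompt))
    # Advise this student: SSN [REDACTED SSN], card [REDACTED CREDIT_CARD].

Pattern matching alone misses plenty, so a guardrail like this would be a baseline at best, paired with user training and, ideally, a dedicated PII detection service.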

EDUCAUSE also recommends AI policy development activities at four levels:

  1. Individual. Engage students and faculty to find out how they use generative AI and how they feel about ethics and the impact on learning.
  2. Department or unit. Assess the role of generative AI in academic programs. Find common ground between departments and break down silos.
  3. Institution. Establish an AI governing body for oversight and guidelines that foster equity and accuracy.
  4. Multi-institution. Consult with other universities and private sector organizations to find out how they are handling generative AI challenges.

As institutions develop generative AI guidance, they should also establish ways to measure the success of their efforts. A simple survey, for instance, could show how well people are embracing a generative AI policy.

“Keep stakeholders engaged by sharing your wins early and often,” Robert says.
