Feb 09 2024

How Temple University Created an Artificial Intelligence Policy

Research and communication helped Temple University teaching and learning experts determine how AI should be used in the classroom.

When ChatGPT hit the scene in late 2022, we at Temple University knew our community would be looking for guidance. This technology was going to be a disruptor, and we needed to get ahead of the inevitable questions from our faculty and students.

We are not IT experts, but in our day-to-day work with educational technologists, we tend to approach IT from a pedagogical angle, so we dived into learning everything we could about generative artificial intelligence and its possible impacts on and uses for education. We also leaned into a process of discovery that was — and still is — a dialogue among groups from across campus.

Creating Learning Materials Through Focus Groups and Research

The first step we took was to engage with stakeholder groups that might have a vested interest in the policies we were putting together. This included university AI specialists, the faculty senate and those teaching in fields that would be most impacted by the technology. We wanted to hear their concerns; they wanted to know what we were learning and how they should be thinking about using AI for teaching and learning.

We began publishing blog posts focusing on how a return to evidence-based pedagogical fundamentals, especially regarding assessments, could help us navigate the challenges of AI. Listening to the experiences of stakeholders from across the university and experimenting with possible use cases was an important step in determining a university stance on using generative AI for teaching.


Academic Integrity Tools Offer Mixed Results

It’s hard to talk about generative AI in the classroom without mentioning academic integrity, and for good reason: When used to its full potential, generative AI can complete assignments for students. There are a number of tools on the market designed to spot AI-generated content in student work, but these can be problematic.

In response, we are trying to reframe the narrative around cheating. Instead of leaning on AI content detectors, why not address cheating at the source? We talked with faculty about what causes cheating, how we can prevent it, and how we can encourage students to do the work.

At Temple, faculty must complete an asynchronous training course before they are given access to the AI detection tool. The course explains how the AI detector works, why it can fail to give accurate results and what potential problems can arise from using it. The course also provides suggestions on how to set clear expectations for the use of generative AI for class work and how to talk to students if cheating is suspected. We want to be sure faculty are fully informed and, if they choose to use an AI detector, that they do so responsibly, with a clear understanding that AI detection results are not incontrovertible evidence of academic misconduct.


Establishing a Blanket Policy Sets a Universitywide Standard

After gathering information, experimenting with potential assignments and activities that utilize AI, and providing faculty resources, we advocated for a temporary blanket policy for the 2023-2024 academic year.

We felt that the decision to allow the use of AI belonged to faculty, but we also knew that they needed breathing room to make this decision. Therefore, the policy states that AI is not to be used in classes unless an instructor explicitly notes in the syllabus that it is allowed.

As the technology advances, our policy will likely evolve, as will our faculty training. It’s up to our instructors to determine whether they will allow AI in their classes, so we will continue to provide resources such as blog posts, videos, drop-in clinics and workshops designed to help them make this decision.

When used with intention, AI can be a useful teaching and learning tool. Our goal now is to ensure that our faculty are prepared to introduce it to their students, and that they understand its potential and pitfalls so they can make fully informed decisions about whether or how it can be used.

