2. Is This Tool Worth the Resources?
Most IT teams don’t give much thought to the computing requirements of typical applications, but AI is a different beast. AI applications can be expensive to run, whether on local GPU servers or through metered cloud services. Make sure you have a clear estimate of what a given application will cost to operate, and get the budget side of the university to sign off on the continuing expense.
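For cloud-hosted models billed per token, a back-of-the-envelope estimate goes a long way in that budget conversation. The Python sketch below shows the shape of the math; the per-token prices, request volume and token counts are hypothetical placeholders, so substitute your vendor’s actual rates and your own usage projections.

```python
# Back-of-the-envelope monthly cost estimate for a per-token cloud LLM API.
# All prices and usage figures below are hypothetical placeholders.

PRICE_PER_1K_INPUT_TOKENS = 0.005   # USD per 1,000 input tokens (assumed)
PRICE_PER_1K_OUTPUT_TOKENS = 0.015  # USD per 1,000 output tokens (assumed)

def monthly_cost(requests_per_day: int,
                 avg_input_tokens: int,
                 avg_output_tokens: int,
                 days: int = 30) -> float:
    """Project monthly spend from expected request volume."""
    daily_input = requests_per_day * avg_input_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS
    daily_output = requests_per_day * avg_output_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS
    return (daily_input + daily_output) * days

# Example: a campus chatbot answering 2,000 questions a day.
print(f"Estimated monthly spend: ${monthly_cost(2000, 500, 250):,.2f}")
```

Even a rough model like this turns "AI is expensive" into a concrete recurring line item that a budget office can evaluate.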
3. Is Training the AI Going To Cause a Problem?
There are many types of AI tools, but much of the buzz surrounds large language models, which are trained on enormous amounts of data. When you feed existing data into an LLM, you are training the tool to reproduce whatever biases and inequities exist in that data. For example, you probably don’t want your tool making different curriculum recommendations for students based solely on their race. Make sure the training data captures the factors that actually matter and excludes inappropriate or stereotyped proxy indicators, such as variables that quietly stand in for race or income.
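One concrete first step is to screen candidate training features for strong correlations with protected attributes before any model is trained on them. The pandas sketch below is a crude first-pass audit under assumed data: the column names, the made-up enrollment table and the 0.4 threshold are all illustrative, and a real fairness review would go much deeper.

```python
import pandas as pd

def flag_proxy_features(df: pd.DataFrame,
                        protected_col: str,
                        threshold: float = 0.4) -> list[str]:
    """Flag columns whose correlation with a protected attribute
    exceeds the threshold: a crude screen for proxy indicators."""
    protected = df[protected_col].astype("category").cat.codes
    flagged = []
    for col in df.columns:
        if col == protected_col:
            continue
        series = df[col]
        if not pd.api.types.is_numeric_dtype(series):
            # Encode non-numeric columns so a correlation can be computed.
            series = series.astype("category").cat.codes
        corr = series.corr(protected)
        if pd.notna(corr) and abs(corr) >= threshold:
            flagged.append(col)
    return flagged

# Made-up enrollment data: here, ZIP code tracks race perfectly,
# so it is flagged as a likely proxy; GPA is not.
students = pd.DataFrame({
    "race": ["A", "B", "A", "B", "A", "B"],
    "zip_code": [10001, 20002, 10001, 20002, 10001, 20002],
    "gpa": [3.1, 3.0, 2.9, 3.2, 3.4, 2.8],
})
print(flag_proxy_features(students, "race"))  # -> ['zip_code']
```

A flagged column isn’t automatically off-limits, but it deserves scrutiny before it feeds a model that makes recommendations about students.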
4. Is AI the Right Technology for Our Needs?
When Bitcoin and blockchain first burst onto the scene, lots of IT teams wanted to apply the underlying technology to almost every problem, even when it made no sense. AI isn’t as niche as blockchain, but it isn’t always the right answer either. Make sure whoever is pitching an AI project isn’t chasing the buzzword du jour but has actually chosen the right technology to solve a real problem.
5. Is This an Ethical Use of Technology?
Universities have human subjects committees and institutional review boards for a reason: to help researchers determine whether their work is ethical and whether subjects are adequately protected from harm. IT teams may never have talked to an IRB in the past, but AI can cross that line by turning users’ data and interactions into part of the tool development process itself. It may sound strange, but it’s worth looking at your tool from the perspective of an ethics committee, even doing a short writeup and asking for the committee’s opinion. If the IRB is going to raise a red flag, it’s far easier to deal with early in development than after the tool is in use.