2. Is This Tool Worth the Resources It Consumes?
Most IT teams don’t give much thought to the computing requirements of typical applications, but AI is a different beast. AI applications can be expensive to run, whether on local servers or through cloud-based services. Make sure you have a clear estimate of what a given application will cost, and get budget approval for the ongoing expense.
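A back-of-envelope calculation is often enough to start the budget conversation. The sketch below estimates monthly spend for a cloud-hosted LLM service from usage volume and a per-token rate; the request counts, token sizes, and price are illustrative assumptions, not real vendor pricing.

```python
# Back-of-envelope monthly cost estimate for a cloud-hosted LLM service.
# All figures used here are illustrative assumptions, not vendor pricing.

def monthly_llm_cost(requests_per_day: int,
                     tokens_per_request: int,
                     price_per_1k_tokens: float,
                     days: int = 30) -> float:
    """Estimate monthly spend from usage volume and a per-token rate."""
    total_tokens = requests_per_day * tokens_per_request * days
    return total_tokens / 1000 * price_per_1k_tokens

# Example: 2,000 requests/day, ~1,500 tokens each, at a hypothetical
# rate of $0.002 per 1,000 tokens.
estimate = monthly_llm_cost(2000, 1500, 0.002)
print(f"${estimate:,.2f} per month")  # → $180.00 per month
```

Even a rough model like this makes the recurring nature of the expense visible, which is the point of the budget-approval step above.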
3. Is Training the AI Going to Cause a Problem?
There are many types of AI tools, but much of the buzz surrounds large language models, which are trained on enormous amounts of data. When you feed existing data into an LLM, you are training the tool to reproduce whatever biases and inequities exist in that data. For example, you probably don’t want your tool making different curriculum recommendations for students based solely on their race. Make sure that LLM or neural network training uses legitimate data rather than inappropriate or stereotyped proxy indicators.
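A proxy indicator is a seemingly neutral field that effectively encodes a protected attribute. One simple screening question: knowing only that field, how often could you guess the protected attribute correctly? The sketch below makes that check concrete; the field names and records are invented for illustration.

```python
# Minimal sketch of a proxy-indicator screen: does a seemingly neutral
# feature (here, a hypothetical "zip_code" field) strongly predict a
# protected attribute in the training data? All data is invented.

from collections import Counter, defaultdict

def proxy_strength(records, feature, protected):
    """Fraction of records whose protected attribute is guessed correctly
    using only the feature value (majority-class guess per value).
    Values near 1.0 flag the feature as a likely proxy."""
    by_value = defaultdict(Counter)
    for r in records:
        by_value[r[feature]][r[protected]] += 1
    correct = sum(c.most_common(1)[0][1] for c in by_value.values())
    return correct / len(records)

# Hypothetical training records.
data = [
    {"zip_code": "11111", "race": "A", "score": 88},
    {"zip_code": "11111", "race": "A", "score": 75},
    {"zip_code": "22222", "race": "B", "score": 90},
    {"zip_code": "22222", "race": "B", "score": 64},
]
print(proxy_strength(data, "zip_code", "race"))  # → 1.0 (perfect proxy)
```

A real audit would use larger samples and proper fairness tooling, but even this crude test shows how a field no one thought of as sensitive can stand in for one that is.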
4. Is AI the Right Technology for the Task at Hand?
When blockchain first burst onto the scene, a lot of IT teams wanted to apply that underlying technology to almost every problem, even when it made no sense. AI isn’t as niche as blockchain, but it’s not always the answer. Make sure whoever is pitching an AI project isn’t just going for the buzzword du jour but has chosen the right technology to solve a real problem.
5. Is This an Ethical Use of Technology?
AI can cross ethical lines when users unwittingly become part of the tool’s development process, for example when their interactions are fed back into training without their knowledge or consent. It’s worth looking at your tool from the perspective of an ethics committee to determine whether users are adequately protected from harm.