Colleges Confront the Ethical Challenges of AI
As AI rolls out across numerous industries, early implementations have surfaced several ethical concerns. The most infamous is algorithmic bias: AI systems absorb and perpetuate the systemic biases present in the data they learn from. There have been cases where image recognition tools associated men with professional qualities while judging women on their appearance, and where a hospital algorithm prioritized white patients over Black patients. If such biases made their way into the AI colleges use, in enrollment selection, for example, it would be antithetical to the spirit of inclusion to which higher education aspires.
Privacy is another concern with AI, which requires data, much of it personal, to function effectively. For example, a predictive analytics tool seeking to identify at-risk students would likely draw on course loads, grades and other personal information a college holds. Students may not approve, especially if colleges have not been transparent about the data and AI they are using.
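To make the privacy stakes concrete, here is a minimal, purely hypothetical sketch of the kind of at-risk prediction tool described above. Nothing here comes from a real campus system: the features (credit load, GPA, learning management system logins), the synthetic data and the toy at-risk label are all invented for illustration.

```python
# Hypothetical sketch of an at-risk prediction pipeline.
# Every input a real version would use is personal student data,
# which is exactly where the privacy obligations arise.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for records a college might hold.
X = np.column_stack([
    rng.integers(6, 19, size=200),             # credits enrolled
    np.round(rng.uniform(1.5, 4.0, 200), 2),   # GPA
    rng.integers(0, 30, size=200),             # LMS logins per week
])
y = (X[:, 1] < 2.3).astype(int)  # toy "at-risk" label for this sketch only

model = LogisticRegression().fit(X, y)
print(model.predict_proba(X[:5])[:, 1])  # predicted risk for five students
```

Even this toy pipeline raises the policy questions discussed below: who supplied the grades and login data, who can see the resulting risk scores, and were students told how their information would be used?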
As colleges rely more on AI, they may also collect new types of data and use that data in new ways. Along with privacy concerns, these initiatives create new responsibilities for cybersecurity risk management: colleges must give AI programs, and the data on which they are built, the same level of protection as their other sensitive data and processes.
AI Policies Should Govern Higher Education Initiatives
Maintaining ethical procedures and outcomes with AI is best accomplished by defining them at the organizational level first. Before rolling out the technology, colleges should set standards and put policies in place that address several important questions: What data will they collect from students to support these initiatives? How will they use that data? How will they protect it, and who will have access? How much transparency will they offer students?
The answers should be guided by the responsibilities that colleges will assume by using AI for specific applications.
The Pentagon may seem an unlikely example for universities, but it is an instructive one. It has clearly defined top-level parameters that inform its use of AI: the work must be reliable (built for clearly defined uses), equitable (minimizing unintended bias), traceable (overseen by a well-trained team) and governable (designed so unintended consequences can be quickly identified). As AI becomes more commonplace across industries, such models, adjusted for the unique needs of specific fields, will grow more prevalent and help guide best practices.
The Pentagon’s parameters, adapted to higher education, would serve any college well, but it is also important to avoid the trap of rigidity. AI will be a moving target as it continues to learn, develop and advance; in other words, it will be highly adaptable. Higher education’s use of it will have to be adaptable too.