Jan 05 2021

As Colleges Adopt AI, They’ll Need Ethical Frameworks

Artificial intelligence delivers cost savings and efficiencies, but new data uses require care and transparency.

The rapid advancement of artificial intelligence promises to be a boon to many industries, including higher education. Nascent as the technology may be, colleges are already finding numerous ways to leverage this powerful tool for efficiencies in student services, classroom support and back-office operations.

AI programs are reviewing student assignments for plagiarism and creating live transcriptions of faculty lectures. Chatbots are answering questions about financial aid, registration and campus life. Virtual teaching assistants are reminding students of office hours, exam logistics and essay file formats. Colleges are even tapping predictive analytics to find and market to future students, nudge those who are taking a break from academics to re-enroll and identify those at risk of dropping out.

Even as colleges expand and advance their use of AI, they will have to contend with logistical challenges around implementation as well as — more important — ethical ones.

Colleges Confront the Ethical Challenges of AI

As AI is rolled out across industries, several ethical concerns have emerged in early implementations. The most infamous is algorithmic bias, in which AI systems absorb and perpetuate the systemic biases present in the data they learn from. There have been cases where image-recognition systems associated men with professional qualities while judging women on their appearance, and cases where a hospital algorithm prioritized white patients over Black patients. Such biases making their way into the AI used at colleges (in enrollment selection, for example) would be antithetical to the spirit of inclusion to which higher education aspires.

Privacy is another concern around AI, which requires data, much of it personal, to function efficiently. For example, predictive analytics tools seeking to identify at-risk students would likely draw on course loads, grades and other personal information to which a college might have access. Students may not approve of that, especially if colleges have not provided enough transparency about the data and AI they are using.

As colleges rely more on AI, that also means they may collect new types of data and use that data in new ways. Along with privacy concerns, these types of initiatives also create new responsibilities related to cybersecurity risk management. Colleges must ensure that AI programs and the data on which they are based have the same level of protections as other types of sensitive data and processes.

AI Policies Should Govern Higher Education Initiatives

Maintaining ethical procedures and outcomes related to AI is best accomplished by first defining these on an organizational level. Before rolling the technology out, colleges should set standards and put policies in place to address several important questions. What data will they collect from students to support these initiatives? How will they use that data? How will they protect it, and who will have access? What transparency will colleges offer to students?

The answers should be guided by the responsibilities that colleges will assume by using AI for specific applications.

The Pentagon may seem like an unlikely example for universities, but it is an instructive one: It has clearly defined top-level principles that govern its use of AI. That work must be reliable (with explicit, well-defined uses), equitable (taking deliberate steps to minimize unintended bias), traceable (developed and overseen by personnel who understand the technology) and governable (designed so that unintended consequences can be detected and addressed). As AI becomes more commonplace across industries, models like these, adjusted for the unique needs of specific fields, will become more prevalent and serve to guide best practices.

The Pentagon’s parameters, adapted to higher education, would serve any college well, but it’s also important to avoid falling into the trap of rigidity. AI will be a moving target as it continues to learn, develop and advance; it is, by nature, highly adaptable. Higher education will have to be equally adaptable in how it governs and uses the technology.
