David Danks, head of the philosophy department at Carnegie Mellon University, has a message for his colleagues in the CMU robotics department: As they invent and develop the technologies of the future, he encourages them to consider the human dimensions of their work.
His concern? All too often, Danks says, technological innovation ignores the human need for ethical guidelines and moral standards. That’s especially true when it comes to innovations such as artificial intelligence and automation, he says.
“It’s, ‘Look at this cool technology that we’ve got. How can you stand in the way of something like this?’” says Danks. “We should be saying, ‘Wait a second. How is this technology affecting people?’”
As an example, Danks points to AI-powered medical diagnostic systems. Such tools have great potential to parse data for better decision-making, but they lack the social interaction between patient and physician that can be so important to those decisions. It’s one thing to have a technology that can diagnose a patient with strep throat and recommend a certain antibiotic, but what about a patient with cancer who happens to be a professional violinist?
“For most people, you’d just give them the most effective drug,” says Danks. “But what do you do if one of the side effects of that medication is hand tremors? I see a lot of possibilities with AI, but it’s also important to recognize the challenges.”
Experts Ponder Ethical Implications of AI
Danks isn’t the only philosopher pondering the potential impact of AI. An increasing number of institutions are offering courses on the subject and establishing centers to study the issue. At Stanford University, students can register for “The Social and Economic Impact of Artificial Intelligence,” while Harvard University’s Berkman Klein Center for Internet & Society is funding a joint AI fellowship program to investigate the effect of AI systems on the public good.
And even at Carnegie Mellon, Danks isn’t alone. In 2016, international law firm K&L Gates established the $10 million Endowment for Ethics and Computational Technologies, which will support research in engineering, public policy and other disciplines exploring the ethical implications of “intelligent” machines. The funding will also enable the university to establish new faculty chairs, scholarships and fellowships.
Danks’s own work spans ethical issues associated with autonomous weapons and AI-based diagnostic technologies in healthcare. “There are certainly advantages to allowing machines to make decisions,” he says. “But we also have to ask, where do we draw the line? When is it important to have that subtle, context-dependent judgment that is only possible when a human is involved?”
Interdisciplinary Inquiry Furthers AI Research
One Carnegie Mellon faculty member who is looking for answers — and helping his students to internalize the questions — is robotics professor Illah Nourbakhsh. He has taught a course on ethics and robotics since 1997 and leads a technology-focused initiative called the CREATE Lab at CMU’s Robotics Institute.
Nourbakhsh believes that institutions with computer science, engineering and technology programs have a responsibility to make “ethical thinking” a part of their culture. That way, he says, “when these future engineers and innovators decide what technologies they’re going to invent, they consider the downstream consequences before they unleash them on the world.”
Nourbakhsh believes that AI technologies can safely bridge the cognitive gap that is growing between humans and their machines, but only if humans design those tools within an ethical framework. We now live in an age, he says, “where the data sets and systems we deal with are beyond the ability of human decision-making.”