The Prospect of Moral Artificial Agents

Humanities

Abstract

Artificial agent development is motivated by the dream of having machines perform undesirable labor in place of humans. If machines are to replace humans in such labor, it follows that they should not engage in actions that lead to devastating consequences; they should be “moral” artificial agents. In this paper, I address two questions on this concern: 1. What direction should the development of moral artificial agents take? 2. Is the idea of a moral artificial agent coherent? The paper arrives at the perhaps more compelling second question by eliminating possible answers to the first, a process that rules out most current AI Ethics projects. Because the desirable options for developing artificial moral agents are limited, the possible conceptions of moral artificial agents are limited as well. Within these limited conceptions, the justificatory process such agents can provide is inevitably unreliable. Therefore, it is impossible to attribute independent moral agency to artificial agents.

Jun Kyung You
Weinberg College of Arts and Sciences
Senior Thesis Completed in 2019 with funding from the Office of Undergraduate Research
Advisor: Axel Mueller
Major: Philosophy
DOI: 10.21985/N2N48T