Abstract

This paper reviews the literature on the moral status of artificial intelligence (AI), emphasizing that although a majority of philosophers agree that AI could plausibly have moral status based on its capacities, there is disagreement about the degree of moral status such AI would have. I begin by defining the specific type of AI relevant to the moral status debate: artificial general intelligence (AGI). I then provide a brief description of Immanuel Kant’s sophisticated cognitive capacities approach to grounding moral status, which continues to dominate discussions of moral status despite its significant drawbacks. The following section builds upon the Kantian account by detailing how philosophers overwhelmingly ground AI’s moral status in capacities, arguing that if AI has capacities similar to those of an adult human being, then AI has moral status similar to that of an adult human being. Next, I explore the competing views on the degree to which AI has moral status, chiefly whether such AI would count as a moral patient or a moral agent. To conclude, I offer my own account of the moral status of AI, based on the literature reviewed: that AI’s moral status is grounded in its capacities, and that whether it is a moral patient or a moral agent is context-dependent. This conclusion is certainly not the only plausible one, but it hopefully contributes to the growing discussion around the moral status of AI and what considerations we might owe to such entities. Indeed, the discussion has, for too long, centered on what AI might do to us, and not on what we might do to AI.
This work is licensed under a Creative Commons Attribution 4.0 International License. Authors retain copyright and grant the journal the right of first publication, with the work simultaneously licensed under a Creative Commons Attribution License that allows others to share the work with an acknowledgment of its authorship and initial publication in this journal.