Distributed Moral Responsibility in the Field of Artificial Intelligence
DOI: https://doi.org/10.21146/2074-4870-2024-24-1-129-143

Keywords: ethics of artificial intelligence, distributed cognition, distributed responsibility, sociotechnical system, agency, patient-oriented morality

Abstract
The main goal of the article is to justify the optimality of Luciano Floridi’s concept of distributed morality for describing a model of responsibility relations in complex human–AI systems. AI’s high degree of functional autonomy does not fit the classical instrumentalist approach to technology, since AI proves capable of fundamentally uncontrollable actions in the course of performing the tasks assigned to it. For the same reason, the classical definition of moral responsibility fails to match the actual situation in the field of artificial intelligence development, which often gives rise to problems that hinder the development of this technology. The article therefore conceptualizes a new, technology-appropriate way of describing responsibility relations within the ethics of artificial intelligence. To this end, artificial intelligence is considered as a participant in various interactions, including social ones, that possesses agency but is not attributed true intentionality, motivation, awareness, or the other properties required for ascribing agency in the classical model of responsibility relations. In addition to Floridi’s account of the criteria of agency, this view of AI is grounded in Annemarie Mol and Marianne de Laet’s concept of fluidity, which they developed, among other purposes, to denote the capacity of technologies to possess agency without the properties that the classical model of technologies’ role in social processes considers necessary for its attribution. AI technology is analyzed as a complex, heterogeneous sociotechnical system composed of both human and non-human agents. The risk of leveling the significance of responsibility in such a system is overcome at the conceptual level by adapting Floridi’s patient-oriented approach to morality.
Shifting the emphasis from the subject of action to the object of influence lends moral significance to the entire sum of morally neutral actions performed by agents within the sociotechnical system. Since the system produces morally significant effects and impacts, it also acts as a bearer of the relation of responsibility, which, as the article shows, applies equally to every agent of the system, thereby intensifying moral responsibility in the field of AI.