DISTRIBUTED AGENCY AND DISTRIBUTED RESPONSIBILITY IN ARTIFICIAL INTELLIGENCE SYSTEMS: AN INFOETHICAL ANALYSIS
DOI: https://doi.org/10.18372/2412-2157.42.21006

Keywords: artificial intelligence, information ethics, distributed agency, distributed responsibility, AI governance, accountability, infosphere, socio-technical systems, trustworthy AI, ethics of technology

Abstract
The article examines distributed agency and distributed responsibility in artificial intelligence systems within contemporary information ethics. AI is considered as a socio-technical phenomenon that reshapes the philosophical understanding of action, responsibility, and human participation in technologically mediated environments.

The aim and tasks. The article aims to provide a philosophical and ethical analysis of distributed agency and distributed responsibility in AI systems. The study revisits classical views of moral action, agency, and responsibility; examines the transformation of agency in the context of AI; clarifies the notion of artificial agency in Luciano Floridi's information ethics; analyzes the responsibility gap; and outlines the significance of accountability, governance, and auditability for the ethical assessment of AI.

Research methods. The study is based on an interdisciplinary methodology combining conceptual analysis, information ethics, philosophy of technology, philosophical anthropology, and socio-technical systems analysis. It also employs historical-philosophical reconstruction, comparative analysis, hermeneutic interpretation, and generalization.

Research results. The article shows that the classical subject-centered model of moral action is insufficient for analyzing contemporary AI systems. In AI environments, action emerges through the interaction of human, artificial, institutional, and infrastructural components. AI is therefore interpreted not as a moral subject in the human sense, but as a form of artificial agency, while responsibility acquires a distributed and procedural character.

Discussion. The discussion supports the view that the ethical problem of AI should be framed not only in terms of individual blame, but also in terms of accountability, traceability, governance, and socio-technical conditions of action.

Conclusions. The spread of AI transforms the normative conditions under which agency, responsibility, and trust are understood. Responsibility is no longer reducible to retrospective blame alone; it increasingly depends on the ethical and institutional organization of the environments in which AI systems are designed, deployed, monitored, and used.
References
Aristotle. 2002. Nikomakhova etyka [Nicomachean Ethics]. Kyiv: Akvilon-Plius.
Kant, Immanuel. 1998. Groundwork of the Metaphysics of Morals. Translated and edited by Mary Gregor. Introduction by Christine M. Korsgaard. Cambridge: Cambridge University Press.
Jonas, Hans. 1984. The Imperative of Responsibility: In Search of an Ethics for the Technological Age. Chicago: University of Chicago Press.
Kudina, Olya, and Peter-Paul Verbeek. 2019. "Ethics from Within: Google Glass, the Collingridge Dilemma, and the Mediated Value of Privacy." Science, Technology, & Human Values 44 (2): 291–314. https://doi.org/10.1177/0162243918793711.
Verbeek, Peter-Paul. 2014. “Some Misunderstandings about the Moral Significance of Technology.” In The Moral Status of Technical Artefacts, edited by Peter Kroes and Peter-Paul Verbeek, 75–88. Dordrecht: Springer. https://doi.org/10.1007/978-94-007-7914-3_5.
Verbeek, Peter-Paul. 2016. "Toward a Theory of Technological Mediation: A Program for Postphenomenological Research." In Technoscience and Postphenomenology: The Manhattan Papers, 189–204. London: Lexington Books. https://doi.org/10.5040/9781978731929.ch-0013.
Floridi, Luciano, and Jeff W. Sanders. 2004. “On the Morality of Artificial Agents.” Minds and Machines 14 (3): 349–79. https://doi.org/10.1023/B:MIND.0000035461.63578.9d.
Floridi, Luciano. 2025. “AI as Agency without Intelligence: On Artificial Intelligence as a New Form of Artificial Agency and the Multiple Realisability of Agency Thesis.” Philosophy & Technology 38 (30). https://doi.org/10.1007/s13347-025-00858-9.
Floridi, Luciano. 2023. The Ethics of Artificial Intelligence: Principles, Challenges, and Opportunities. Oxford: Oxford University Press. https://doi.org/10.1093/oso/9780198883098.001.0001.
Mökander, Jakob, Jonas Schuett, Hannah Kirk, and Luciano Floridi. 2023. "Auditing Large Language Models: A Three-Layered Approach." SSRN Electronic Journal. https://doi.org/10.2139/ssrn.4361607.
Matthias, Andreas. 2004. “The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata.” Ethics and Information Technology 6 (3): 175–183.
Novelli, Claudio, Mariarosaria Taddeo, and Luciano Floridi. 2024. “Accountability in Artificial Intelligence: What It Is and How It Works.” AI & Society 39: 1871–82. https://doi.org/10.1007/s00146-023-01635-y.
European Commission. 2019. Ethics Guidelines for Trustworthy AI. Brussels: European Commission. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
Floridi, Luciano. 2025. "A Conjecture on a Fundamental Trade-Off Between Certainty and Scope in Symbolic and Generative AI." Philosophy & Technology 38 (93). https://doi.org/10.1007/s13347-025-00927-z.
Díaz-Rodríguez, Natalia, Javier Del Ser, Mark Coeckelbergh, Marcos López de Prado, Enrique Herrera-Viedma, and Francisco Herrera. 2023. “Connecting the Dots in Trustworthy Artificial Intelligence: From AI Principles, Ethics, and Key Requirements to Responsible AI Systems and Regulation.” Information Fusion 99: 101896. https://doi.org/10.1016/j.inffus.2023.101896.
Johnson, Deborah G., and Mario Verdicchio. 2017. “Reframing AI Discourse.” Minds and Machines 27 (4): 575–90.
License

This work is licensed under a Creative Commons Attribution 4.0 International License.
The scientific journal adheres to the principles of Open Access and provides free, immediate, and permanent access to all published materials without financial, technical, or legal barriers for readers.
All articles are published in Open Access under the Creative Commons Attribution 4.0 International (CC BY 4.0) license.
Copyright
Authors who publish their works in the journal:
- retain the copyright to their publications;
- grant the journal the right of first publication of the article;
- agree to the distribution of their materials under the CC BY 4.0 license;
- have the right to reuse, archive, and distribute their works (including in institutional and subject repositories), provided that proper reference is made to the original publication in the journal.