
AI: from rational agents to socially responsible agents / Vetrò, Antonio; Santangelo, Antonio; Beretta, Elena; De Martin, Juan Carlos. - In: DIGITAL POLICY, REGULATION AND GOVERNANCE. - ISSN 2398-5038. - PRINT. - 21:3(2019), pp. 291-304. [10.1108/DPRG-08-2018-0049]

AI: from rational agents to socially responsible agents

Vetrò, Antonio; Santangelo, Antonio; Beretta, Elena; De Martin, Juan Carlos
2019

Abstract

Purpose: This paper aims to analyze the limitations of the mainstream definition of artificial intelligence (AI) as a rational agent, which currently drives the development of most AI systems. The authors advocate the need for a wider set of ethical principles to guide the design of more socially responsible AI agents. Design/methodology/approach: The authors follow an experience-based line of reasoning by argument to identify the limitations of the mainstream definition of AI, which is based on the concept of rational agents that select, among their designed actions, those that produce the maximum expected utility in the environment in which they operate. The problem of bias in the data used by AI is taken as an example, and a small proof of concept with real datasets is provided. Findings: The authors observe that bias measurements on the datasets are sufficient to demonstrate the potential risks of discrimination when those data are used by AI rational agents. Starting from this example, the authors discuss other open issues connected to AI rational agents and propose a few general ethical principles derived from the White Paper "AI at the service of the citizen", recently published by Agid, the agency of the Italian Government that designs and monitors the evolution of the IT systems of the Public Administration. Originality/value: The paper contributes to the scientific debate on the governance and ethics of AI with a critical analysis of the mainstream definition of AI.
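The abstract notes that bias measurements on datasets can by themselves reveal discrimination risks before any rational agent is trained on them. As a minimal illustration of that idea (not the authors' actual proof of concept, which uses real datasets), the sketch below computes a statistical parity difference, i.e. the gap in positive-outcome rates between two groups, on an invented toy dataset; all group labels and values are hypothetical.

```python
# Minimal sketch: measuring statistical parity on a hypothetical dataset of
# loan decisions. The records below are invented for illustration only.

def statistical_parity_difference(records, group_key, outcome_key):
    """Difference in positive-outcome rates between two groups.

    A value near 0 means the two groups receive positive outcomes at
    similar rates; large absolute values signal a potential bias risk.
    """
    rates = {}
    for g in set(r[group_key] for r in records):
        group = [r for r in records if r[group_key] == g]
        rates[g] = sum(r[outcome_key] for r in group) / len(group)
    a, b = sorted(rates)  # deterministic ordering of the group labels
    return rates[a] - rates[b]

# Hypothetical data: 1 = loan approved, 0 = denied.
data = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 1}, {"group": "A", "approved": 0},
    {"group": "B", "approved": 1}, {"group": "B", "approved": 0},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

spd = statistical_parity_difference(data, "group", "approved")
print(f"Statistical parity difference: {spd:.2f}")  # 0.75 (A) - 0.25 (B) = 0.50
```

A rational agent maximizing expected utility on such data would simply reproduce the 0.50 approval-rate gap, which is the kind of risk the paper argues a purely utility-driven definition of AI leaves unaddressed.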
Files in this record:

PAPER-AI-from-rational-ag-to-soc-resp-ag-V2-PRE-PRINT.pdf
Access: open access
Description: Post-print, AI: from rational agents to socially responsible agents
Type: 2. Post-print / Author's Accepted Manuscript
License: Public - All rights reserved
Size: 355.54 kB
Format: Adobe PDF

10-1108_DPRG-08-2018-0049.pdf
Access: not available
Type: 2a. Post-print, editorial version / Version of Record
License: Non-public - Private/restricted access
Size: 413.28 kB
Format: Adobe PDF

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11583/2729235