
L’imperativo categorico come algoritmo. Kant e l’etica delle macchine

Daniela Tafani
2021-01-01

Abstract

The categorical imperative as an algorithm. Kant and machine ethics

Machine ethics can be defined as the attempt to identify a method for constructing artificial moral agents. This attempt rests on the metaethical assumption that ethics can be translated into computational terms. Kant's moral doctrine was identified very early as one of the reference models for this project, owing to three characteristics: a cognitivist and rationalist metaethics; the denial of the existence of mutually alternative ethics and of moral dilemmas; and the elaboration of a logical criterion for moral judgment. With the universalisation test, Kant presents a formulation of the categorical imperative that is based on the principle of non-contradiction. It thus seems that this test could be translated into an algorithm and turned into a procedure executable by a computer system. The paper aims to show that the Kantian test cannot be translated into an algorithm and therefore cannot provide machine ethics with a criterion for moral judgment. Although the categorical imperative is untranslatable into computational terms, it is nevertheless appropriate for machine ethics to start from Kant, from a normative, a metaethical and a methodological point of view: Kant's idea of man as an end in himself is incorporated into the constitutional laws of many contemporary states and constitutes a constraint on the programming of intelligent systems. Kantian doctrine can also serve as a warning against the tendency to think of machines as agents in the same sense as human beings, rather than as things with the ability to follow programmed or learned rules. Finally, Kant's moral doctrine should also be considered a reminder that a moral agent has, above all, the ability to set goals for himself and to represent a norm, to respect it or to violate it.
Ongoing research on these issues can make an important contribution to governing those systems in which artificial agents operate, and make decisions, alongside human beings.

Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1103900