Social robots as depictions of social agents: Explanatory import and ontological attitudes

G. Pisaneschi; E. Datteri
2022-01-01

Abstract

Understanding how people explain and interpret the behavior of robots is essential to making sense of how they interact with them. This general consideration is acknowledged by the robotics community (see, for example, Ziemke, 2020). More specifically, experimental psychologists and psychologically minded roboticists explore how human-robot interaction dynamics change depending on whether or not humans adopt an intentional stance towards robots (see, for example, Perez-Osorio and Wykowska, 2020), and tools to analyze the adoption of the intentional stance have been developed (Marchesi et al., 2019). Some gaps remain to be filled in this rich and emerging research landscape. First, the intentional stance does not exhaust the range of possible stances one may adopt towards robots. Second, it may eventually be worthwhile to shift from the study of the stances people adopt towards robots to the finer-grained study of their patterns of explanation, drawing on the extensive philosophical literature on the structure of scientific and ordinary explanations. Within this broader research horizon, this work addresses the first point. Its goal is to analyze and critically reflect on the so-called “depiction” conception of social robots recently proposed by Clark and Fischer (2022). According to this model, people construe social robots not as social agents per se but as depictions of social agents, interpreting them much as they interpret ventriloquist dummies, hand puppets, and cartoon characters. Clark and Fischer propose the depiction model to explain apparently contradictory behaviors shown by people interacting with social robots: in particular, the fact that people know that robots are mere machines, yet communicate with them as if they were alive. One specific goal of this work is to critically analyze the depiction model, focusing on the three-layered structure that constitutes its backbone and on the functions that connect the different layers. A second specific goal is to position the depiction model within a spectrum of possible ontological attitudes towards social robots and their minds, a spectrum that includes realism, fictionalism, instrumentalism, and eliminativism. Drawing on Toon (2016), it will be argued that Clark and Fischer’s model is closest to fictionalism. The relationship with instrumentalism will be explored by comparing the depiction model with Dennett’s intentional stance theory and by showing that the two approaches differ on whether the robot is assumed to be rational. A third and final goal is to reflect briefly on whether the two theories are empirically compatible: can they “save the same phenomena”? Can the experimental procedures and tools currently used by the robotics community to investigate humans’ adoption of the intentional stance exclude a depiction-based interpretation? Is it possible to formulate experiments that discriminate between depiction-based explanations of robot behavior and explanations à la Dennett?

Use this identifier to cite or link to this item: https://hdl.handle.net/11568/1204198