Reflecting on Algorithmic Bias With Design Fiction: The MiniCoDe Workshops

Tommaso Turchi; Alessio Malizia; Simone Borsci
2024-01-01

Abstract

In an increasingly complex everyday life, algorithms—often learned from data, i.e., machine learning (ML)—are used to make or assist with operational decisions. However, developers and designers are usually not fully aware of how to reflect on social justice while designing ML algorithms and applications. Algorithmic social justice—i.e., designing algorithms with fairness, transparency, and accountability in mind—aims to help expose, counterbalance, and remedy bias and exclusion in future ML-based decision-making applications. How might we entice people to engage in more reflective practices that examine the ethical consequences of ML algorithmic bias in society? We developed and tested a design-fiction-driven methodology that enables multidisciplinary teams to run intense, workshop-like gatherings in which potential ethical issues can emerge and bias can be mitigated through a series of guided steps. With this contribution, we present an original and innovative use of design fiction as a method to reduce algorithmic bias in co-design activities.
Files in this record:
File: Reflecting_on_Algorithmic_Bias.pdf
Access: open access
Type: Final published version
License: Creative Commons
Size: 830.02 kB
Format: Adobe PDF


Use this identifier to cite or link to this document: https://hdl.handle.net/11568/1232618
Citations
  • PMC: not available
  • Scopus: 3
  • Web of Science: 3