From 2017 until the beginning of this year I worked in SAP’s Global Diversity & Inclusion Office. As part of my role I developed and delivered trainings for our employees to foster an inclusive work environment where people of all races, gender identities, sexual orientations, and age groups, with or without disabilities, can thrive. Even though most of us know about our own harmful biases, stereotypes still hold us back from working together in the most effective ways.
Recently I stumbled across a study from 2018 that concluded we even hold racial biases toward robots! One might think: “Wait a minute, can a robot even have a race?”
More of an auditory learner? Listen to ideas on Racism and Robots by the Human Robot Interaction Podcast on Spotify
Indeed, robots don’t have a race; but this fact doesn’t stop humans from racializing them. The underlying construct is called “anthropomorphism”, a complicated word for humans ascribing human characteristics to non-human agents.
As with most mental shortcuts, anthropomorphism saves us time and effort (our brains love that) at the expense of accuracy (one cannot simply assign human characteristics to robots).
To test whether humans ascribe race to robots, a team of researchers led by Christoph Bartneck at the University of Canterbury conducted an extended replication of a well-known social psychology paradigm called “the shooter bias paradigm”.
The Shooter Bias Paradigm
In this paradigm, participants play the role of a police officer who has to decide whether or not to shoot an individual. If the agent is carrying a gun, the correct decision is “shoot”, whereas participants should refrain from shooting if the agent is holding a harmless object such as a smartphone.
To detect anti-black racial biases (“shooter biases”), the skin tone of the agents was varied across conditions. One can then draw conclusions about implicit tendencies by analyzing the number of correct and incorrect decisions as well as participants’ reaction times.
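To make the analysis logic concrete, here is a minimal sketch of how such trial data could be scored. The field names, data values, and thresholds are purely illustrative assumptions for this sketch, not taken from the study itself; the real experiment involves many participants and statistical testing.

```python
from statistics import mean

# Hypothetical trial records: each trial notes the agent's apparent race,
# whether the agent was armed, the participant's response, and the
# reaction time in milliseconds. (Illustrative data, not from the study.)
trials = [
    {"race": "black", "armed": True,  "response": "shoot",      "rt_ms": 450},
    {"race": "white", "armed": True,  "response": "shoot",      "rt_ms": 510},
    {"race": "black", "armed": False, "response": "dont_shoot", "rt_ms": 540},
    {"race": "white", "armed": False, "response": "dont_shoot", "rt_ms": 480},
]

def mean_rt(trials, race, armed):
    """Mean reaction time of correct responses in one condition."""
    correct = "shoot" if armed else "dont_shoot"
    rts = [t["rt_ms"] for t in trials
           if t["race"] == race
           and t["armed"] == armed
           and t["response"] == correct]
    return mean(rts) if rts else None

# A shooter bias would show up as faster "shoot" decisions for armed
# black agents and faster "don't shoot" decisions for unarmed white agents.
for race in ("black", "white"):
    for armed in (True, False):
        label = "armed" if armed else "unarmed"
        print(race, label, mean_rt(trials, race, armed))
```

Comparing these per-condition means (together with error rates) across skin-tone conditions is the basic idea behind inferring an implicit bias from the raw trial data.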
The original experiment was extended by using not only pictures of human agents (carrying guns or harmless objects and varying in skin tone) but also robot agents. The robots were likewise racialized: they had “white” or “black” outer shells and were photoshopped into the backgrounds of the original experiment.
Results: Racial Bias against Robots
Findings of the extended replication experiment with robots show that people decided faster to shoot an armed black agent than an armed white agent. Along the same lines, participants were faster in deciding not to shoot an unarmed white agent than an unarmed black agent. These findings indicate not only that people assigned a race to the robots, but also that the same race-related tendencies, biases, and prejudices extend to robots.
“These findings illustrate the shooter bias towards both human and robot agents. This bias is both a clear indication of racism towards black people, as well as the automaticity of its extension to robots racialized as black.” – Christoph Bartneck in an interview with IEEE Spectrum
Implications: What can we do about it?
The findings are highly problematic considering that the vast majority of robots produced today appear “white” (having white surfaces). Ethical questions arise especially in the area of social robotics, where we rely on close human-robot interaction. That’s why the authors call for more “racial” diversity in the design of robots, to avoid replicating problematic outcomes such as discrimination.
“There is no need for all robots to be white” – Christoph Bartneck in an interview with IEEE Spectrum
Study: Bartneck, C., Yogeeswaran, K., Ser, Q. M., Woodward, G., Sparrow, R., Wang, S., & Eyssel, F. (2018, February). Robots and racism. In Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction (pp. 196-204). ACM.
Original Shooter Bias Paradigm: Joshua Correll, Bernadette Park, Charles M. Judd, and Bernd Wittenbrink. 2002. The police officer’s dilemma: Using ethnicity to disambiguate potentially threatening individuals. Journal of Personality and Social Psychology 83, 6 (2002), 1314–1329.