Even black robots are impacted by racism

Humans are likely to perceive an anthropomorphic robot as having a race, and they bring their race-related prejudices with them, a new study suggests.

[Photo: Flickr user Ricardo Diaz]

To varying extents, we humans are familiar with our racist tendencies toward one another. We may be less aware that we can be racist toward robots, too. A new study suggests that if a robot has anthropomorphic features like eyes or a face, people will often register the color of the machine and, if asked, assign the robot a racial category. And when asked to respond to a threatening robot, people are quicker to shoot black robots than white ones.


The researchers collected photos of people of different races alongside photos of Nao, a humanoid robot, changing the color of the robot's shell to a variety of human skin tones. Their experimental setup relied on the "shooter bias" procedure, in which participants play the role of a police officer who must decide whether or not to shoot when shown different images. Each photo showed a person or Nao holding either a weapon or some other, benign object. The study subjects saw each picture for only a split second and were asked to act on instinct.

Example stimuli used in the experiment. The top row shows robots and humans holding a benign object; the bottom row shows robots and humans holding a gun. The left four images are the researchers' new robot photos, while the right four images are the original images. [Image: courtesy of Christoph Bartneck]

The study found that participants were faster to shoot an armed black human or robot than they were to shoot their white counterparts. The study subjects, the majority of whom identified as white, were also faster to refrain from shooting unarmed white humans and robots than unarmed black figures.

Christoph Bartneck [Photo: courtesy of Christoph Bartneck]

“The level of agreement amongst participants when it came to their explicit attributions of race was especially striking,” the researchers write. “Participants were able to easily and confidently identify the race of robots according to their racialization, and their performance in the shooter bias task was informed by such social categorization processes. Thus, there is also a clear sense in which these robots, and by extension other humanoid robots, do have race.”

These results have troubling implications for the future. Many of the robots out in the world today are white, which, the researchers point out, makes them resemble the people building them, but not necessarily the people using them. One day soon these machines will greatly expand their presence in the world, working alongside humans as colleagues, assistants, caretakers, and in any number of other roles. Having mostly white robots in these jobs could reinforce racism and deepen bias.

“If robots are supposed to function as teachers, friends, or carers, for instance, then it will be a serious problem if all of these roles are only ever occupied by robots that are racialized as white,” say the researchers. Their results were presented at the ACM/IEEE International Conference on Human Robot Interaction in March.

Christoph Bartneck, the lead author of the study and professor at the Human Interface Technology Lab at the University of Canterbury in New Zealand, said he wants the research to inspire roboticists to create bots that reflect their communities. “There is no need for all robots to be white,” Bartneck told IEEE Spectrum.


Racism could exacerbate the ways we already mistreat robots as they become more mainstream. Abuse of robots is widespread, from people kicking the six-foot-tall security robots that roam some Walmarts to knocking over pint-size delivery bots. In fact, the problem is so pervasive that South Korean researchers are working on a project to teach children the right way to interact with robots.

Of course, robots aren’t impartial either. As a growing body of research has shown, the algorithms we design aren’t neutral arbiters producing fair outcomes; they are rife with racist and sexist biases too. That’s especially worrisome for applications like face recognition, criminal sentencing, and calculating an assortment of risk scores. Bartneck’s study is a reminder that our biases carry over not only to the way we perceive robots, but also to the way robots perceive us.

About the author

Jackie Snow is a multimedia journalist whose work has appeared in National Geographic, Vanity Fair, The Atlantic, Quartz, the New York Observer, and more.