Georgia Tech, Johns Hopkins University study shows race and gender bias in artificial intelligence

In this March 2016 file photo, a pedestrian walks through the Georgia Tech campus as the downtown Atlanta skyline looms in the background. (AP Photo/David Goldman)

A study conducted by institutions including the Georgia Institute of Technology and Johns Hopkins University has found racist and sexist bias in a popular artificial intelligence system.

The AI, which draws on publicly available information gathered from the internet, showed clear bias against marginalized groups such as women and people of color.

In the experiment, researchers tracked how often the AI assigned titles such as “doctor,” “criminal” and “homemaker” to individuals of varying genders and races. Results showed a tendency to identify women as “homemakers” more often than white men, to identify Black men as “criminals” 10% more often than white men, and to identify Latino men as “janitors” 10% more often than white men, according to a Georgia Tech College of Computing publication. Women of all ethnicities were also less likely than white men to be identified as a “doctor.”
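At its core, the measurement described above is a tally of how often the system assigns each label to members of each demographic group, so that rates can be compared across groups. The sketch below is only an illustration of that kind of tally, using made-up placeholder records rather than the study's actual data or code.

    from collections import Counter, defaultdict

    # Hypothetical (group, label) assignments made by an AI system.
    # Placeholder values for illustration only; not data from the study.
    assignments = [
        ("white man", "doctor"),
        ("Black man", "criminal"),
        ("white woman", "homemaker"),
        ("Latino man", "janitor"),
        ("white man", "doctor"),
    ]

    # Count label assignments within each demographic group.
    counts = defaultdict(Counter)
    totals = Counter()
    for group, label in assignments:
        counts[group][label] += 1
        totals[group] += 1

    # Report each label's assignment rate per group so the rates can be
    # compared across groups (e.g., "criminal" for Black men vs. white men).
    for group, labels in counts.items():
        for label, n in labels.items():
            print(f"{group}: '{label}' assigned {n / totals[group]:.0%} of the time")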

“One of the big sources of change I think we can make is looking at the human process of developing these technologies,” Dr. Andrew Hundt, a co-author of the study and a computing innovation postdoctoral fellow at Georgia Tech, said on Wednesday’s edition of “Closer Look.” “We need to start by assuming there is going to be identity bias of some kind or another — be it race, gender, LGBTQ+ identity [or] national origin … You need to prove you’ve addressed it before it goes out and quantify those issues.”

As artificial intelligence software becomes increasingly integrated into everyday life through applications such as facial recognition and social media, the research team aims to shed light on the role of ethics in technological development.

“When you take all of this data from the internet and [process] the model without carefully filtering or considering all of these consequences, you end up with stereotypes,” said Vicky Zeng, a co-author of the study and a PhD student in computer science at Johns Hopkins University. “You [risk] making this into explicit behavior done by a robot with no human intervention.”