
Artificial intelligence & robotics

Inioluwa Deborah Raji

Her research on racial bias in data used to train facial recognition systems is forcing companies to change their ways.

Year Honored
2020

Organization
AI Now Institute

Region
Global

Hails From
Nigeria

The spark that sent Inioluwa Deborah Raji down a path of artificial-intelligence research came from a firsthand realization that she remembers as “horrible.”

Raji was interning at the machine-learning startup Clarifai after her third year of college, working on a computer vision model that would help clients flag inappropriate images as “not safe for work.” The trouble was, it flagged photos of people of color at a much higher rate than those of white people. The imbalance, she discovered, was a consequence of the training data: the model was learning to recognize NSFW imagery from porn and safe imagery from stock photos—but porn, it turns out, is much more diverse. That diversity was causing the model to automatically associate dark skin with salacious content.
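A minimal sketch of the kind of check that surfaces this problem (the data and function names are hypothetical, not Clarifai's actual pipeline): compare the rate at which a classifier flags images as NSFW across demographic groups on a test set of images that are all known to be safe.

```python
def flag_rate(predictions, groups, group):
    """Fraction of images in `group` that the model flagged as NSFW."""
    hits = [pred for pred, g in zip(predictions, groups) if g == group]
    return sum(hits) / len(hits)

# Hypothetical audit set in which every image is actually safe for work,
# so any group flagged at a much higher rate reveals a skewed model.
preds  = [True, True, True, False, False, False, False, False]
groups = ["dark", "dark", "dark", "dark", "light", "light", "light", "light"]
for g in ("dark", "light"):
    print(g, flag_rate(preds, groups, g))  # dark 0.75, light 0.0
```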

Though Raji told Clarifai about the problem, the company continued using the model. “It was very difficult at that time to really get people to do anything about it,” she recalls. “The sentiment was ‘It’s so hard to get any data. How can we think about diversity in data?’”

The incident pushed Raji to investigate further, looking at mainstream data sets for training computer vision. Again and again, she found jarring demographic imbalances. Many data sets of faces lacked dark-skinned ones, for example, leading to face recognition systems that couldn’t accurately differentiate between such faces. Police departments and other law enforcement agencies were then using these same systems in the belief that they could help identify suspects.

“That was the first thing that really shocked me about the industry. There are a lot of machine-learning models currently being deployed and affecting millions and millions of people,” she says, “and there was no sense of accountability.”

Born in Port Harcourt, Nigeria, Raji moved to Mississauga, Ontario, when she was four years old. She remembers very little of the country she left other than the reason for leaving: her family wanted to escape its instability and give her and her siblings a better life. The transition proved tough. For the first two years, Raji’s father continued to work in Nigeria, flying back and forth between two continents. Raji attended seven different schools during their first five years in Canada.

Eventually, the family moved to Ottawa and things began to stabilize. By the time she applied to college, she was sure she was most interested in pre-med studies. “I think if you’re a girl and you’re good at science, people tell you to be a doctor,” she says. She was accepted into McGill University as a neuroscience major. Then, on a whim, and with her father’s encouragement, she visited the University of Toronto and met a professor who persuaded her to study engineering. “He was like, ‘If you want to use physics and you want to use math to build things that actually create impact, you get to do that in this program,’” she remembers. “I just fell for that pitch and overnight changed my mind.”

It was at university that Raji took her first coding class and quickly got sucked into the world of hackathons. She loved how quickly she could turn her ideas into software that could help solve problems or change systems. By her third year, she was itching to join a software startup and experience this in the real world. And so she found herself, a few months into her internship at Clarifai, searching for a way to fix the problem she had discovered. Having tried and failed to get support internally, she reached out to the only other researcher she knew of who was working on fighting bias in computer vision.

In 2016, MIT researcher Joy Buolamwini (one of MIT Technology Review’s 35 Innovators Under 35 in 2018) gave a TEDx talk about how commercial face recognition systems failed to detect her face unless she donned a white mask. To Raji, Buolamwini was the perfect role model: a black female researcher like herself who had successfully articulated the same problem she had identified. She pulled together all her code and the results of her analyses and sent Buolamwini an unsolicited email. The two quickly struck up a collaboration.

At the time, Buolamwini was already working on a project for her master’s thesis, called Gender Shades. The idea was simple yet radical: to create a data set that could be used to evaluate commercial face recognition systems for gender and racial bias. It wasn’t that companies selling these systems didn’t have internal auditing processes, but the testing data they used was as demographically imbalanced as the training data the systems learned from. As a result, the systems could perform with over 95% accuracy during the audit but have only 60% accuracy for minority groups once deployed in the real world. By contrast, Buolamwini’s data set would have images of faces with an even distribution of skin color and gender, making it a more comprehensive way to evaluate how well a system recognizes people from different demographic groups. 
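A minimal sketch of that evaluation idea (illustrative names and numbers, not the Gender Shades code): computing accuracy per subgroup as well as overall is what exposes the gap that an aggregate score hides.

```python
from collections import defaultdict

def disaggregated_accuracy(y_true, y_pred, subgroups):
    """Accuracy overall and per subgroup (e.g. 'dark-skinned female')."""
    correct, total = defaultdict(int), defaultdict(int)
    for truth, pred, sub in zip(y_true, y_pred, subgroups):
        for key in ("overall", sub):
            total[key] += 1
            correct[key] += int(truth == pred)
    return {key: correct[key] / total[key] for key in total}

# Hypothetical gender-classification results on a balanced test set.
y_true    = ["F", "F", "F", "F", "M", "M", "M", "M"]
y_pred    = ["M", "F", "M", "F", "M", "M", "M", "M"]
subgroups = ["dark-skinned female"] * 4 + ["light-skinned male"] * 4
print(disaggregated_accuracy(y_true, y_pred, subgroups))
# {'overall': 0.75, 'dark-skinned female': 0.5, 'light-skinned male': 1.0}
```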

Raji joined in the technical work, compiling the new data set and helping Buolamwini run the audits. The results were shocking: among the companies they tested—Microsoft, IBM, and Megvii (best known for its Face++ software)—the worst performer identified the gender of dark-skinned women 34.4% less accurately than that of light-skinned men. The other two didn’t do much better. The findings made headlines in the New York Times and forced the companies to do something about the bias in their systems.

Gender Shades showed Raji how auditing could be a powerful tool for getting companies to change. So in the summer of 2018, she left Clarifai to pursue a new project with Buolamwini at the MIT Media Lab, which would make its own headlines in January 2019. This time Raji led the research. Through interviews at the three companies they’d audited, she saw how Gender Shades had led them to change the ways they trained their systems in order to account for a greater diversity of faces. She also reran the audits and tested two more companies: Amazon and Kairos. She found that whereas the latter two had egregious variations in accuracy between demographic groups, the original three had dramatically improved.

The findings made a foundational contribution to AI research. Later that year, the US National Institute of Standards and Technology also updated its annual audit of face recognition algorithms to include a test for racial bias.

Raji has since worked on several other projects that have helped set standards for algorithmic accountability. After her time at the Media Lab, she joined Google as a research mentee to help the company make its AI development process more transparent. Whereas traditional software engineers have well-established practices for documenting the decisions they make while building a product, machine-learning engineers at the time did not. This made it easier for them to introduce errors or bias along the way, and harder to check such mistakes retroactively.

Along with a team led by senior research scientist Margaret Mitchell, Raji developed a documentation framework, known as model cards, for machine-learning teams to use, drawing upon her experience at Clarifai to make sure it would be easy to adhere to. Google rolled out the framework in 2019 and built it into Google Cloud for its clients to use. A number of other companies, including OpenAI and the natural-language-processing firm Hugging Face, have since adopted similar practices.
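A minimal, hypothetical sketch of what such a documentation record might hold (the fields below are illustrative, not Google's exact template): a structured summary that travels with the trained model, so later reviewers can see what it was built on and where it falls short.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCard:
    # Illustrative model-card-style record; these fields are assumptions,
    # not the published template.
    name: str
    intended_use: str
    training_data: str
    evaluation_data: str
    subgroup_accuracy: dict[str, float] = field(default_factory=dict)
    known_limitations: list[str] = field(default_factory=list)

card = ModelCard(
    name="face-attribute-classifier-v2",
    intended_use="Research benchmarking only; not for identifying individuals",
    training_data="Face dataset with a documented collection process",
    evaluation_data="Held-out set balanced across skin type and gender",
    subgroup_accuracy={"dark-skinned female": 0.96, "light-skinned male": 0.97},
    known_limitations=["Accuracy not validated for subjects under 18"],
)
```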

Raji also co-led her own project at Google to introduce internal auditing practices as a complement to the external auditing work she did at the Media Lab. The idea: to create checks at each stage of an AI product’s development so problems can be caught and dealt with before it is put out into the world. The framework also included advice on how to get the support of senior management, so a product would indeed be held back from launching if it didn’t pass the audits.
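A minimal sketch of that stage-gate idea (the stages, checks, and thresholds here are invented for illustration, not the published framework): every development stage carries audit checks, and any failure holds the product back from launch.

```python
# Hypothetical audit gates: each development stage maps to named checks
# that must all pass before the product is cleared for launch.
AUDIT_GATES = {
    "scoping":    [("use case reviewed", lambda r: r["use_case_reviewed"])],
    "data":       [("subgroup coverage", lambda r: r["subgroup_coverage"] >= 0.95)],
    "pre-launch": [("worst-subgroup accuracy",
                    lambda r: min(r["subgroup_accuracy"].values()) >= 0.90)],
}

def clear_for_launch(report):
    """Return (ok, failures); the product launches only if every check passes."""
    failures = [f"{stage}: {name}"
                for stage, checks in AUDIT_GATES.items()
                for name, check in checks
                if not check(report)]
    return (not failures, failures)

# A report that passes scoping and data checks but fails the launch gate.
report = {
    "use_case_reviewed": True,
    "subgroup_coverage": 0.97,
    "subgroup_accuracy": {"dark-skinned female": 0.88, "light-skinned male": 0.97},
}
print(clear_for_launch(report))  # (False, ['pre-launch: worst-subgroup accuracy'])
```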

With all her projects, Raji is driven by the desire to make AI ethics easier to practice—“to take the kind of high-level ethical ideals that we like to talk about as a community and try to translate that into concrete actions, resources, and frameworks,” she says.

It hasn’t always been easy. At Google, she saw how much time and effort it took to change the way things were done. She worries that the financial cost of eliminating a problem like AI bias deters companies from doing it. It’s one reason she has moved back out of industry to continue her work at the nonprofit research institute AI Now. External auditing, she believes, can still hold companies accountable in ways that internal auditing can’t.

But Raji remains hopeful. She sees that AI researchers are more eager than ever before to be more ethical and more responsible in their work. “This is such impactful technology,” she says. “I just really want us to be more thoughtful as a field as to how we build these things, because it does matter and it does affect people.”