Artificial Intelligence has dominated global news in recent months and is increasingly woven into everyday life. AI is the ability of machines to display functions such as learning, planning, reasoning, and creativity.
Over the years, AI has come under fire from people of color, who cite bias and racial discrimination. This is what piqued the interest of Deborah Raji, a Mozilla fellow and CS PhD student at the University of California, Berkeley, with an interest in algorithmic auditing and evaluation.
While interning at the machine-learning startup Clarifai after her third year of college in 2017, Raji worked on a computer vision model that would help clients flag inappropriate images as “not safe for work.” However, she found that it flagged photos of people of color at a much higher rate than those of white people. She attributed the imbalance to the training data: the model was learning to recognize NSFW imagery from porn and safe imagery from stock photos, according to Innovators Under 30.
Porn, it appears, is much more diverse, and that diversity was causing the model to automatically associate dark skin with indecent content. When she told Clarifai about the problem, the company did not act on her concerns.
“It was very difficult at that time to really get people to do anything about it,” she recalled. “The sentiment was ‘It’s so hard to get any data. How can we think about diversity in data?’”
Raji did not back down. She continued to investigate, exploring mainstream data sets used to train computer vision models. Her exploration kept revealing upsetting demographic imbalances: many data sets of faces lacked dark-skinned ones, leading to face recognition systems that could not accurately differentiate between such faces. And these systems were relied upon heavily by police departments and other law enforcement agencies at the time.
“That was the first thing that really shocked me about the industry. There are a lot of machine-learning models currently being deployed and affecting millions and millions of people,” she said, “and there was no sense of accountability.”
This led Raji to shift her focus away from the startup world and toward AI research, focusing on “how AI companies could ensure that their models do not cause undue harm—especially among populations that are likely to be overlooked during the development process,” she told TIME.
“It became clear to me that this is really not something that people in the field are even aware is a problem to the extent that it is,” she said to the outlet.
She is now more focused on building methods to audit AI systems both within and outside of the companies creating them. She has also worked with Google’s Ethical AI and collaborated with the Algorithmic Justice League on its Gender Shades audit project. That project “evaluated the accuracy of AI-powered gender-classification tools created by IBM, Microsoft, and Face++,” Raji told TIME.
Her impressive work in the AI field earned her a place among the inaugural members of Time magazine’s list of the 100 most influential people in Artificial Intelligence, where she was recognized in the ‘thinkers’ category.
Raji was born in Port Harcourt, Nigeria, but moved to Mississauga, Ontario, when she was four years old. According to her, her family left Nigeria to escape its instability and give her and her siblings a better life.
Her family eventually settled in Ottawa, where she applied to college. At the time, her interest was in pre-med studies, as her family wanted her to become a medical doctor. She was accepted into McGill University as a neuroscience major, but during a visit to the University of Toronto, she met a professor who persuaded her to study engineering instead.
She took her first coding class and quickly found herself in the world of hackathons. Soon, she realized she could turn her ideas into software that could help solve problems or change systems.