Driverless car systems have a bias problem, according to a new study from King's College London. The study examined eight AI-powered pedestrian detection systems used for autonomous driving research. Researchers ran more than 8,000 images through the software and found that the self-driving car systems were nearly 20% better at detecting adult pedestrians than children, and more than 7.5% better at detecting light-skinned pedestrians than dark-skinned ones. The AI was even worse at spotting dark-skinned people in low-light settings, making the tech even less safe at night.
For children and people of color, crossing the street could get more dangerous in the near future.
“Fairness when it comes to AI is when an AI system treats privileged and under-privileged groups the same, which is not what is happening when it comes to autonomous vehicles,” said Dr. Jie Zhang, one of the study authors, in a press release. “Car manufacturers don’t release the details of the software they use for pedestrian detection, but as they are usually built upon the same open-source systems we used in our research, we can be quite sure that they are running into the same issues of bias.”
The study didn’t test the exact software used by driverless car companies that already have their products on the streets, but it adds to growing safety concerns as the cars become more common. This month, the California state government gave Waymo and Cruise free rein to operate driverless taxis in San Francisco 24 hours a day. The technology is already causing accidents and sparking protests in the city.
Gizmodo reached out to several companies best known for self-driving cars. Cruise and Tesla did not respond to requests for comment.
A Waymo spokesperson said the study doesn’t represent all of the tools used in the company’s cars. “At Waymo, we don’t just use camera images to detect pedestrians,” said Sandy Karp, a Waymo spokesperson. “Instead, we tap into our full sensor suite — including our lidars and radars, not just cameras — to help us actively sense details in our surroundings in a way that would be difficult to do with cameras alone.”
According to the researchers, a major source of the technology’s problems with children and dark-skinned people is bias in the data used to train the AI, which contains far more adults than children and more light-skinned people than dark-skinned people.
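To make the kind of disparity the researchers describe concrete, here is a minimal, hypothetical sketch (not the study’s actual code, and the numbers are invented for illustration) of how a per-group detection rate, and the gap between groups, can be computed from labeled detection results:

```python
# Hypothetical example: computing per-group pedestrian detection rates
# and the gap between them. The groups and outcomes below are
# illustrative only, not data from the King's College London study.
from collections import defaultdict

# Each record: (demographic_group, was_pedestrian_detected)
results = [
    ("adult", True), ("adult", True), ("adult", False), ("adult", True),
    ("child", True), ("child", False), ("child", False), ("child", True),
]

def detection_rates(records):
    """Return the fraction of pedestrians detected for each group."""
    detected, total = defaultdict(int), defaultdict(int)
    for group, hit in records:
        total[group] += 1
        detected[group] += int(hit)
    return {group: detected[group] / total[group] for group in total}

rates = detection_rates(results)
gap = max(rates.values()) - min(rates.values())

print(rates)                        # e.g. {'adult': 0.75, 'child': 0.5}
print(f"Detection gap: {gap:.0%}")  # the kind of disparity the study reports
```

If a training set contains far fewer children or dark-skinned pedestrians, a detector tends to miss them more often, and a per-group comparison like this is how that shortfall shows up as a measurable gap.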
Karp said Waymo trains its autonomous driving technology to specifically classify humans and respond to human behavior, and works to make sure its data sets are representative.
Algorithms reflect the biases present in their training datasets and in the minds of the people who create them. One common example is facial recognition software, which is consistently less accurate on the faces of women, dark-skinned people, and Asian people in particular. Those concerns haven’t stopped the enthusiastic embrace of this kind of AI technology. Facial recognition is already responsible for putting innocent Black people in jail.
Update, August 24th, 1:45 p.m.: This article has been updated with comments from Waymo.