Is Artificial Intelligence Racial Bias Being Suppressed?

Artificial Intelligence (AI) and Machine Learning are used to power a variety of important modern software technologies. For instance, AI powers analytics software, Google’s bugspot tool, and code compilers for programmers. AI also powers the facial recognition software commonly used by law enforcement, landlords, and private citizens.

Of all the uses for AI-powered software, facial recognition may be the most consequential. Security teams at large facilities that rely on video surveillance – like schools and airports – can benefit greatly from this technology. An AI algorithm has the potential to detect a known criminal or an unauthorized person on the property. Some systems can identify guns, while others can track each individual's movements and provide a real-time update on their location with a single click.

Facial recognition software has phenomenal potential

Police in the U.S. have used facial recognition software to successfully identify mass shooting suspects. Police in New Delhi, India, used the technology to identify close to 3,000 missing children in just four days. AI-powered software scanned 45,000 photos of children living in orphanages and foster homes and matched 2,930 of them to photos in the government's lost child database. That's an impressive success rate.
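
For readers curious about the mechanics, here is a minimal Python sketch of how this kind of photo matching is generally done, assuming the common embedding-and-similarity approach; the match_photo function, the embeddings, and the threshold are hypothetical illustrations, not details of the system used in New Delhi.

import numpy as np

# Each face photo is converted by a neural network into a numeric "embedding"
# vector; two photos are treated as the same person when their embeddings are
# sufficiently similar. The vectors and threshold below are made up.

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_photo(query_embedding, database, threshold=0.95):
    """Return the ID of the closest database entry if it clears the threshold."""
    best_id, best_score = None, -1.0
    for record_id, embedding in database.items():
        score = cosine_similarity(query_embedding, embedding)
        if score > best_score:
            best_id, best_score = record_id, score
    return best_id if best_score >= threshold else None

# Hypothetical "lost child database" with two registered embeddings, plus one new photo.
lost_child_db = {
    "record_001": np.array([0.90, 0.10, 0.30]),
    "record_002": np.array([0.10, 0.80, 0.50]),
}
print(match_photo(np.array([0.88, 0.12, 0.31]), lost_child_db))  # -> record_001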

Facial recognition software is also used by governments to help refugees find their families through REFUNITE, an online database that combines data from multiple agencies and allows users to perform their own searches.

Despite the potential, AI-powered software is biased

Facial recognition software is purported to enhance public safety, since AI algorithms can be more accurate than the human eye. However, that tends to be true only if you're a white male. The truth is, Artificial Intelligence algorithms carry an implicit bias against women and people with dark skin. That bias is present in two major types of software: facial recognition software and risk assessment software.

For instance, researchers from MIT's Media Lab tested facial recognition software in an experiment and found that it misidentified dark-skinned women as men up to 35% of the time. Across the experiment, women and people with dark skin had the highest error rates.
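
To make the finding concrete, here is a minimal Python sketch of the kind of disaggregated audit involved: computing a gender classifier's error rate for each demographic subgroup instead of reporting a single overall accuracy figure. The audit records below are invented for illustration and are not the Media Lab's data.

from collections import defaultdict

# (subgroup, true label, predicted label) -- hypothetical audit records
audit_records = [
    ("darker-skinned female", "female", "male"),
    ("darker-skinned female", "female", "female"),
    ("darker-skinned female", "female", "male"),
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
    ("lighter-skinned male", "male", "male"),
]

# Tally how often the classifier's prediction disagrees with the true label,
# broken out by subgroup.
totals = defaultdict(int)
errors = defaultdict(int)
for subgroup, truth, predicted in audit_records:
    totals[subgroup] += 1
    if predicted != truth:
        errors[subgroup] += 1

for subgroup, count in totals.items():
    print(f"{subgroup}: {errors[subgroup] / count:.0%} error rate")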

Another area of bias shows up in risk assessments. Some jails use a computer program to predict the likelihood that each inmate will commit a crime in the future. Unfortunately, time has already shown these assessments are biased against people with dark skin: dark-skinned people are generally scored as higher risk than light-skinned people. The problem is that risk assessment scores are used by authorities to inform decisions as a person moves through the criminal justice system. Judges frequently use these scores to determine bond amounts and whether a person should receive parole.

In 2014, U.S. Attorney General Eric Holder called for the U.S. Sentencing Commission to study the use of risk assessment scores because he saw the potential for bias. The commission chose not to study risk scores. However, ProPublica, an independent, nonprofit news organization, studied the scores and found them to be remarkably unreliable in forecasting violent crime. It studied more than 7,000 people in Broward County, Florida, and found that only 20% of the people predicted to commit violent crimes actually did.
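
As a rough illustration of what that 20% figure measures, the short Python sketch below computes the share of "predicted violent" cases that came true (the precision of the high-risk label), using made-up counts rather than ProPublica's Broward County data.

# Hypothetical counts, chosen only to show the calculation.
predicted_violent = 500   # people the tool flagged as likely to commit a violent crime
actually_violent = 100    # of those, the number who actually went on to do so

# Precision: the fraction of "will commit a violent crime" predictions that came true.
precision = actually_violent / predicted_violent
print(f"Predictions of violent crime that came true: {precision:.0%}")  # 20%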

This bias has been known for quite some time, but experts have yet to produce a solution. People would not be so alarmed at the error rate if the technology were not already in use by governments and police.

The ACLU concluded facial recognition software used by police is biased

In 2018, the American Civil Liberties Union (ACLU) ran a test to see whether Amazon's facial recognition software, which is used by police, has a racial bias. The results? Twenty-eight members of Congress were falsely matched with mugshots of people who had been arrested, and the false matches were disproportionately people of color.
