Allison Wiltz

RACISM IN TECHNOLOGY

AI is Horrible at Identifying Black Faces. Here's Why That Matters

A case of mistaken identity previews a dangerous future

Close-up shot of Black bearded man covering his face | Photo by Styves Exantus via Pexels

Artificial intelligence is the wave of the future, but whether that wave will help future generations surf through life with grace like Nick Gabaldón or crash and burn is yet to be determined. One thing is for sure: AI, a technology designed to mimic human critical thinking and problem-solving skills, has already changed the world. For instance, scientists suggest that when used appropriately, artificial intelligence cuts down on menial tasks and enables workers to focus on "more engaging" and "less repetitive" new roles.

Artificial intelligence has also been credited with breaking down language barriers by making translation more affordable and available to a broader population. In the healthcare field and government agencies, artificial intelligence has helped to create systems where clients can quickly get answers to common questions. And we've seen how artificial intelligence can create beautifully complex works of art, though there is some debate about whether computer-generated images count as art. However, despite the tremendous technological advances artificial intelligence brings, we must recognize the risks of using AI in law enforcement.

"So, what's the harm," some of you may be thinking. "Don't we want a more effective and less-biased criminal justice system?" Of course, our society should have a fair criminal justice system that doles out punishments equitably, makes fair, level-headed decisions, and seeks to reform and repair society, but it doesn't. According to the Prison Policy Initiative, Black people are stopped disproportionately by police and are more likely to receive harsh sentences and endure solidarity confinement compared to White people.

In May 2016, ProPublica published "Machine Bias," providing insight into how court systems around the country use artificial intelligence to predict the likelihood that a defendant will commit another crime if released, a phenomenon called recidivism. To make its decision, COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) "takes into account factors such as previous arrests, age, and employment" and then produces a risk assessment. The results are also "given to judges during criminal sentencing" to help them decide how long someone should be incarcerated, whether they should receive the opportunity to post bail pending trial, and how much they should pay.
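COMPAS's actual scoring formula is proprietary and has never been published, so code can only gesture at the general shape of such a tool. The sketch below is a purely hypothetical weighted-feature score in Python: the features come from the quote above, but the weights, cutoffs, and 1-10 scale are illustrative assumptions, not the real model.

```python
# Hypothetical sketch of a weighted-feature risk score.
# COMPAS's real formula is proprietary; every weight and cutoff
# below is an illustrative assumption, not the actual model.

def risk_score(prior_arrests: int, age: int, employed: bool) -> float:
    """Map defendant attributes to a 1-10 score (higher = 'riskier')."""
    score = 1.0
    score += 0.5 * prior_arrests       # more priors pushes the score up
    score += 2.0 if age < 25 else 0.0  # youth treated as a risk factor
    score -= 1.5 if employed else 0.0  # employment treated as protective
    return max(1.0, min(10.0, score))  # clamp to a 1-10 range

# A 22-year-old with three prior arrests and no job scores 4.5
print(risk_score(prior_arrests=3, age=22, employed=False))
```

Even in a toy version like this, the problem is visible: inputs like arrest history are themselves shaped by the disproportionate policing described above, so the "objective" number inherits that bias.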

ProPublica found that the formula COMPAS used would "falsely flag Black defendants as future criminals, wrongly labeling them this way at almost twice the rate as white defendants." On the other hand, White defendants whom the artificial intelligence labeled "low risk" were much more likely to re-offend than Black defendants. So, while using an algorithm to make decisions about bail or sentencing sounds fair, Black defendants got the short end of the stick when the criminal justice system relied on input from artificial intelligence to make the call.
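The disparity ProPublica measured can be stated precisely: among defendants who did not go on to re-offend, what fraction of each group had been labeled high risk? Here is a minimal sketch of that false-positive-rate comparison; the eight records are made-up toy data chosen only to mirror the roughly two-to-one gap, not ProPublica's dataset.

```python
# Minimal sketch of the fairness check ProPublica ran: compare, by race,
# how often defendants who did NOT re-offend were labeled "high risk".
# The records below are made-up toy data, not ProPublica's dataset.

from collections import defaultdict

records = [
    # (race, labeled_high_risk, actually_reoffended)
    ("Black", True,  False),
    ("Black", True,  False),
    ("Black", False, False),
    ("Black", True,  True),
    ("White", True,  False),
    ("White", False, False),
    ("White", False, False),
    ("White", False, True),
]

flagged = defaultdict(int)  # non-reoffenders labeled high risk
total = defaultdict(int)    # all non-reoffenders

for race, high_risk, reoffended in records:
    if not reoffended:       # false positives only exist among
        total[race] += 1     # people who did not re-offend
        if high_risk:
            flagged[race] += 1

for race in total:
    print(race, f"false positive rate: {flagged[race] / total[race]:.0%}")
# Black false positive rate: 67%
# White false positive rate: 33%
```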

Using facial recognition software, authorities linked Randall Reid, a 28-year-old Black man from Georgia, to thefts committed in Jefferson Parish and Baton Rouge, and arrested him on November 25th last year. The problem was that Reid had never even been to Louisiana and didn't know where "Jefferson Parish" was. For context, Alanah Odoms, the executive director of the ACLU of Louisiana, said that the Jefferson Parish Sheriff's Office "has a deep-rooted history of racial discrimination and cruelty towards residents of color." For example, in the parish, "a Black person is 11 times more likely to be killed by police than a White person." Because of this systemic discrimination, most Black people outside of Louisiana have little incentive to visit Jefferson Parish, despite it sharing a border with New Orleans.

"They told me I had a warrant out of Jefferson Parish. I said, 'What is Jefferson Parish?'" Reid said. "I have never been to Louisiana a day in my life."

Authorities released Reid on December 1st and rescinded the warrant, with his attorney, Tommy Calogero, estimating "a 40-pound difference between Reid and the purse thief in surveillance footage." Nevertheless, the algorithm falsely flagged a Black man as guilty of theft, leading to his wrongful arrest and detention. If you don't see the danger artificial intelligence poses to the Black community when used in the criminal justice system, you're not paying attention.

According to Wired, even "the best algorithms struggle to recognize Black faces equally" and "misidentify Blacks at rates five to 10 times higher than they do Whites." So, the more law enforcement and courts depend on artificial intelligence to identify and assess Black citizens and defendants, the more cases of mistaken identity we’re likely to hear about. And in a system that already disproportionately punishes Black people, who, by the way, are 7.5 times as likely to be wrongly convicted, it's dangerous to take humanity out of our systems. I won't say all artificial intelligence is horrible because it can help make our lives easier in the right circumstances. However, there is danger in treating artificial intelligence like a magic pill that can create a more equitable criminal justice system, especially since the current algorithms have been shown to perpetuate anti-Black racial bias. Right now, artificial intelligence is horrible at identifying Black faces accurately, and that matters because cases of mistaken identity can jeopardize the safety and liberty of Black people.
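To see why a five-to-tenfold gap in error rates is so dangerous in practice, consider how false matches compound when police run a probe photo against a large mugshot gallery, as they did in Reid's case. Here is a back-of-the-envelope sketch; the false match rates and gallery size are illustrative assumptions, not figures from Wired.

```python
# Back-of-the-envelope: the chance of at least one false "hit" grows
# quickly with gallery size. The FMR and gallery values below are
# illustrative assumptions, not measured figures from the article.

def p_false_hit(fmr: float, gallery_size: int) -> float:
    """Probability of >=1 false match across independent comparisons."""
    return 1 - (1 - fmr) ** gallery_size

GALLERY = 100_000  # hypothetical mugshot database size
for label, fmr in [("baseline FMR (1e-6)", 1e-6),
                   ("10x higher FMR (1e-5)", 1e-5)]:
    print(f"{label}: {p_false_hit(fmr, GALLERY):.1%} chance of a false hit")

# baseline FMR (1e-6): 9.5% chance of a false hit
# 10x higher FMR (1e-5): 63.2% chance of a false hit
```

In other words, an error rate that looks negligible for a single comparison becomes close to a coin flip once it's multiplied across an entire database, and that multiplication falls hardest on the group the algorithm reads worst.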

🌹Learn more about the author here. 🖊Sign up to read all my stories and thousands more.

Racism
BlackLivesMatter
Artificial Intelligence
Culture
Life