Allison Wiltz

Summary

Facial recognition software has become a controversial tool for law enforcement due to its inherent racial biases, leading to discriminatory practices and raising significant ethical and civil rights concerns.

Abstract

The adoption of facial recognition software by law enforcement agencies has marked a significant technological advancement, yet it has also introduced a new era of racial discrimination. Initially perceived as a breakthrough for crime-solving, the technology has been shown to disproportionately misidentify Black and Asian individuals, perpetuating systemic inequities within the criminal justice system. Civil rights activists argue that the use of such biased technology undermines efforts for criminal justice reform and reflects broader societal biases. The reliance on algorithms in policing has weakened accountability, as officers can deflect responsibility for racial profiling onto the technology. Despite the revelation of bias in these systems, Big Tech companies have been slow to address the issues, prioritizing market interests over ethical considerations. The discriminatory impact of facial recognition software is not limited to law enforcement but extends to other areas such as travel, where it exacerbates racial profiling, particularly affecting Black women. The article calls for a reevaluation of technological tools in the context of systemic racism, urging for the inclusion of diverse perspectives in tech development and advocating for the protection of civil liberties in the age of artificial intelligence.

Opinions

  • The author suggests that facial recognition software, rather than being a neutral tool, reflects and amplifies societal biases, particularly against Black and Asian individuals.
  • There is a strong opinion that law enforcement's use of facial recognition technology is antithetical to the pursuit of justice and accountability, as it perpetuates racial discrimination.
  • The article criticizes Big Tech companies for their reluctance to acknowledge and rectify the racial biases present in their facial recognition products.
  • The author emphasizes the need for criminal justice reform that confronts systemic inequities and cautions against the unchecked use of technology in policing.
  • The piece advocates for increased representation in the tech industry to ensure that the development of AI technologies, such as facial recognition software, is inclusive and fair.
  • The author posits that the current use of facial recognition by law enforcement is a step backward in the fight for civil rights and calls for a more critical approach to embracing new technologies.
  • The article implies that the government's use of biased facial recognition software may violate The Civil Rights Act of 1964 by engaging in discrimination based on race, color, sex, religion, or national origin.

How Facial Recognition Software Became a Disastrous Tool For Law Enforcement

Assessing the technological advancement of racial discrimination

Photo Credit | E & T Magazine

During middle school, our librarian warned against judging a book by its cover. She motivated us to study the contents before assessing the value of the text. While Americans like to think of themselves as reasonable curators, they often judge people’s appearance instead of their character.

The development of facial recognition software in the 1960s marked the dawn of a new age. Initially, people considered it more akin to science fiction than a technological breakthrough, but scientists kept pushing the work forward. As the technology matured, the market showed interest; in 2014, law enforcement agencies began using mobile facial recognition software. With little fanfare, it revolutionized policing. Officers would no longer need to rely on their ability to recognize a suspect; instead, they could scan someone's face and see what came up in the database. While most Americans rejoice at the release of new technology, they often fail to have substantive conversations about its ethical implications.

During an era when many Black people advocate for criminal justice reform, police departments have placed a wedge between themselves and accountability. Advocates want discriminatory practices like racial profiling to diminish, and the use of facial recognition software shows that police departments are not listening. Within the criminal justice system, facial recognition software attempts to estimate an individual's age, sex, and race.

For America to rise, it must engage in restorative justice, addressing a long history of systemic inequities. Still, that is only part of the story. When we discuss racism, we typically refer to its harmful effects in the past or the present. While modern civil rights activists should continue to wage that battle, they must also expand their focus to consider the future of racism.

Civil rights activists are also confronting artificial intelligence, an increasingly influential actor in the judicial system. History shows us that racism flourishes in anonymity. Just as the Klan wore robes to hide their identities, modern white supremacists use technology as a buffer to maintain discriminatory practices.

Using algorithms weakens attempts to hold individuals responsible for racial profiling. Each officer will insist they are doing their job and nothing more. Still, if they base their decision on an inherently biased process, they deny culpability while still causing harm.

Many facial recognition algorithms falsely identified African American and Asian faces 10 to 100 times more than Caucasian faces (Staff, 2019).

Facial recognition software reflects discrimination in the American system. It strains credulity to claim that law enforcement is unaware of the bias in the algorithms it uses. The statistics reveal that when AI makes mistakes, it disproportionately affects Black and Asian people. Even in a best-case scenario, this system is faulty; at its worst, it is blatantly biased.
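To make the disparity in the statistic above concrete, here is a minimal sketch of how an audit compares false-positive rates across demographic groups. The records below are invented for illustration; they are not data from the NIST study cited above.

```python
# Illustrative sketch: comparing false-positive rates across demographic
# groups, the kind of measurement behind the "10 to 100 times" finding.
# All records below are made up for demonstration purposes.
from collections import defaultdict

# Each record: (group, ground_truth_match, algorithm_said_match)
records = [
    ("white", False, False), ("white", False, False), ("white", False, True),
    ("black", False, True),  ("black", False, True),  ("black", False, False),
    ("asian", False, True),  ("asian", False, False), ("asian", False, True),
]

def false_positive_rates(records):
    """False-positive rate per group: non-matching pairs wrongly flagged as matches."""
    flagged = defaultdict(int)
    total = defaultdict(int)
    for group, truth, predicted in records:
        if not truth:              # only true non-matches can be false positives
            total[group] += 1
            if predicted:
                flagged[group] += 1
    return {g: flagged[g] / total[g] for g in total}

rates = false_positive_rates(records)
# A ratio above 1.0 means that group is misidentified more often than the
# reference group -- the pattern the cited study reported at scale.
print({g: round(v, 2) for g, v in rates.items()})
```

In this toy data, the misidentification rate for Black and Asian faces is twice that for white faces; the NIST study found real-world gaps one to two orders of magnitude larger.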

Police departments use facial recognition software to identify potential suspects; this discriminatory software increases false arrests and false imprisonment. America must engage in criminal justice reform that confronts systemic inequities rather than perpetuating them. When police settle for making any arrest as opposed to the correct arrest, they side with injustice.

In the 1760s, the English judge William Blackstone wrote, 'It is better that ten guilty persons escape than that one innocent suffer' (Hao & Stray, 2019).

While we are not beholden to a British interpretation of justice, it is worth considering how the American justice system can justify incarcerating many innocent Black people in the effort to find guilty ones. When law enforcement procedures reinforce bias, they cannot govern by consent. We should elevate the system by scrubbing it of all discriminatory practices. Using algorithms that research has shown to be discriminatory reveals an inclination to lean into racism rather than away from it.

Big Tech companies produced the smartphone, placing the world at our fingertips. While these companies increased connectivity for people worldwide, they also concentrated power in a few hands. While they remain in the private sector, they have far-reaching influence over government agencies.

Congressman Bennie Thompson, chairman of the US House Committee on Homeland Security, said the findings of bias were worse than feared, at a time when customs officials are adding facial recognition software to travel checkpoints (Staff, 2019).

Black people and people of color experience intense scrutiny while traveling. Customs officers often disproportionately question their nationality and credibility. When law enforcement uses facial recognition software at travel checkpoints, they increase the chances of falsely identifying Black and Asian people.

This process will further discriminate against people of color while traveling, which is already an arduous process. Even as the latest research reveals the software’s discriminatory nature, the government continues to rely on it. In their attempt to increase productivity, they cast aside the civil rights Americans worked so hard to acquire. As in many facets of society, facial recognition software negatively affects Black women more than any other group.

The study also found that Microsoft Corp had almost 10 times more false positives for women of colour than men of colour in some instances during a one-to-many test (Staff, 2019).

Facial recognition software now plays an essential role in the criminal justice system. While Americans typically celebrate the technological breakthroughs of Big Tech, overt bias should give us pause.

Throughout American history, racists disguised themselves to avoid public condemnation. For example, the Klan wore robes to conceal their identities. As Ida B. Wells illustrated in her anti-lynching writings, these men would often avoid taking credit for their heinous acts. Now that the robes are off, modern racists attempt to blur the line between fact and fiction. Their strategy is to deny that racism exists. While they claim their actions are not racist, they support policies that hurt and disenfranchise marginalized groups. Black people are often at the losing end of this dynamic.

For example, President Trump issued an executive order to halt anti-discrimination training. In doing so, the federal government disregarded Black citizens who experience racism and bigotry in every corner of American life.

The executive order is really the first shot in a long war against critical race theory (Fuchs, 2020).

The Trump administration's failure to safeguard against discrimination directly violates the Civil Rights Act of 1964, which prohibits discrimination based on race, color, sex, religion, or national origin. Trump's repudiation of anti-discrimination training rests on the premise that the federal government should not acknowledge discrimination. How can someone sue for discrimination when the government does not recognize prejudice?

Too many Americans think of their civil rights as guaranteed protections, which could not be further from the truth. Only through active attempts to safeguard those rights can they find shelter from tyranny.

The war has now begun. Civil rights groups, including the NAACP Legal Defense and Educational Fund, are considering litigation, and Washington's business lobbies are fighting back (Fuchs, 2020).

Every American should be able to live free from discriminatory practices. Civil Rights organizations clapped back at this flagrant degradation of civil liberties. Unfortunately, we are moving backward when we need forward-thinking measures to confront issues like automation, climate change, healthcare affordability, criminal justice reform, and income inequality. In that respect, civil rights activists are like salmon swimming upstream. They must oppose injustice perpetrated by the criminal justice system while working within the legal framework. Racial discrimination infiltrated our technological bubble, codifying bigotry, and jeopardizing our civil liberties.

Careful consideration must be given to legal and ethical issues that will certainly arise during the course of advancing expert system technology (Smith, McGuire, Huang, & Yang, 2006).

While most Americans agree that we should consider the ethical issues arising from technological breakthroughs, Big Tech companies like Microsoft and Amazon seem reluctant to address the truth about their products: they are racist and unfit for the market. With enough public pressure, these companies would have to modify their facial recognition software or accept defeat in the criminal justice market.

Stop and Frisk Policies and Facial Recognition Software Lead to Racial Profiling

Stop-and-frisk data from around the country show that officers disproportionately stop Black and Hispanic people. Of those frisked by New York officers, nine out of ten were innocent.

An analysis by the NYCLU revealed that innocent New Yorkers have been subjected to police stops and street interrogations more than 5 million times since 2002, and that Black and Latinx communities continue to be the overwhelming target of these tactics (NYCLU, 2020).

After the lawsuit, officers changed their behavior dramatically. They stopped fewer people, showing the power of advocacy. Advocates must apply the same level of pressure to facial recognition that they did during the stop-and-frisk fight. Furthermore, this fight must become national.

Because facial recognition software fails to provide accurate results, using it is a textbook example of racial profiling. Its inability to identify Black and Asian faces as accurately as white faces shows these tools are insufficient.

Photo Credit | NYCLU

If you gave a child a mood ring and told them it provided an accurate representation of their feelings, they would believe you. However, officers are not children. They have a tremendous amount of power and thus an equal responsibility to use these tools responsibly.

Facial Recognition Software is Discriminatory

The ACLU has called out companies like Amazon, with its Rekognition tool that's been rolled out to police departments across the US, as a threat to civil liberties. Its controversial Rekognition was found to have misidentified 28 members of Congress as people who have been arrested for crimes, raising new concerns for racial profiling and potential law enforcement abuse. The false matches disproportionately involved members of Congress who are people of color (Hale, 2019).

All facial recognition software is inherently biased, and there are several possible reasons why. It may be in part because the teams that developed the software were overwhelmingly white. Perhaps the systems were not trained on enough Black, Asian, and Indigenous faces to learn to identify their features accurately. Whatever the cause, it is ultimately the companies' responsibility to ensure their product works as described when they put it on the market.

Photo Credit | Forbes

Still, without its use by law enforcement, it would be harmlessly inaccurate. In the hands of law enforcement, facial recognition software like Amazon's Rekognition is discriminatory. If it could so easily flag members of Congress as criminals, the program would also produce high error rates in the general population. And the software disproportionately affected members of Congress of color.

While I cannot assume that programmers intended to create racist facial recognition software, they showed negligence in selling the product to law enforcement despite its discriminatory patterns. Developing facial recognition software is no easy feat. It is disappointing that the scientists working for Amazon failed to discern the ethical consequences of their product. Issues like privacy and discrimination are treated like minor glitches in the world of Big Tech.

The Rise of AI

In 1956, the American computer scientist John McCarthy coined the term artificial intelligence. While it initially seemed like something from a science fiction film, it revolutionized the computer's potential. The future would no longer rely only on humans' input, but also on computers' independent analysis.

Because a computer is not human, there is a tendency to think of it as an inherently cool, calm, and reliable asset. However, AI did not fall out of the sky. People developed it, and thus it carries the same flaws and biases that characterize humanity. A system can only become as useful as its architects designed it to be.

Photo Credit | Northeastern University | Gallup Survey

Research shows that Americans already have artificial intelligence in their homes. They may use navigation apps, video or music streaming apps, digital personal assistants on their smartphones, ride-share apps, intelligent home personal assistants, or smart home devices. The bottom line is that most of us have already accepted artificial intelligence into our homes. Without actively trying, we developed a level of trust in this technology, which makes it a dangerous tool in the criminal justice system.

Sentencing Algorithms Perpetuate Systemic Racism

“Timnit: I help machines understand imagery and text. Just like a human, if a machine tries to learn a pattern or understand something, and it is trained on input that’s been provided for it to do just that, the input, or data in this case, has societal bias,” (Åkerman, 2020).

Judges around the country are using sentencing algorithms to determine the likelihood of recidivism. Unfortunately, this system perpetuates systemic racism. Because Black people are disproportionately stopped, arrested, and incarcerated, the algorithm thinks it is making a sound decision by suggesting that Black people are more likely to commit crimes. The artificial intelligence will reiterate the patterns within the society. Until the system stops incarcerating Black people disproportionately, the same results will occur no matter who designs the software.


Photo Credit | Technology Review

Predictions reflect the data used to make them — whether by algorithm or not. If black defendants are arrested at a higher rate than white defendants in the real world, they will have a higher rate of predicted arrest as well. This means they will also have higher risk scores on average, and a larger percentage of them will be labeled high-risk — both correctly and incorrectly. This is true no matter what algorithm is used (Hao & Stray, 2019).

Judges should use their own discernment; the system will not become less biased by relying on these tools. Instead, judges should attend anti-discrimination classes. If the goal is to become as fair as possible, they should not object to this. While automation will make many jobs obsolete, artificial intelligence is not qualified to substitute for human judgment. Unless sentencing algorithms address racial discrimination, they will only maintain, if not exacerbate, the problem.
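The feedback loop Hao and Stray describe can be sketched in a few lines: a toy "risk score" that simply learns historical arrest rates will reproduce whatever disparity exists in the arrest data. All numbers and group names below are invented for illustration; this is not any real sentencing algorithm.

```python
# Toy sketch of the feedback loop: a "risk score" fit to historical arrest
# data inherits the disparity baked into that data. All numbers are invented.

# Historical arrest counts per group: (arrested, total encounters).
history = {
    "group_a": (300, 1000),   # historically arrested at 30%
    "group_b": (100, 1000),   # historically arrested at 10%
}

def learned_risk_score(group):
    """A minimal 'model': predict risk as the group's historical arrest rate."""
    arrested, total = history[group]
    return arrested / total

def label(score, threshold=0.25):
    """Translate a score into the high/low-risk label a judge would see."""
    return "high-risk" if score >= threshold else "low-risk"

# The model labels group_a high-risk purely because of past arrest patterns,
# regardless of any individual's behavior -- and high-risk labels lead to
# harsher outcomes, which feed back into the next round of arrest data.
print(label(learned_risk_score("group_a")))  # high-risk
print(label(learned_risk_score("group_b")))  # low-risk
```

No choice of algorithm fixes this on its own: as long as the training data reflects disproportionate policing, the scores will too.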

Can Representation Solve the Problem?

A general challenge is that people who are the most negatively affected are often the ones whose voices are not heard. Representation is an important issue, and while there's a lot of opportunities with ML technology in society, it's essential to have a diverse set of people and perspectives involved when working on the development so you don't end up enhancing a gap between different groups (Åkerman, 2020).

Too often, Big Tech companies like Microsoft and Amazon leave Black people and people of color out of the decision-making process. We need representation in Big Tech so that policies and decisions reflect a multicultural society. After assessing the data, I doubt that a diverse coalition of company representatives would have pushed out the product despite the abysmal findings. While representation would not solve the problem entirely, it would be a step in the right direction.

Looking Ahead

In its current form, facial recognition software is a disastrous tool for law enforcement. While most Americans favor increased accountability, they are relying on inadequate tools. Similarly, sentencing algorithms further perpetuate systemic racism instead of providing fairness.

Black Lives Matter advocates must fight against the racial discrimination present in our system while also looking ahead. The future will contain new challenges for anti-racists. If the movement cannot address artificial intelligence's biased use, the American caste system will remain intact. We need to cast a critical eye on technological breakthroughs instead of always jumping for joy. If left unchecked, technology will continue to act as the silent hand of white supremacy. Under this system, people blame the software and refuse to take responsibility for racism.

Advocates should support organizations like Black Girls Code, which teaches and empowers Black girls to learn to code. Ultimately, the representation we need can only come through investments in education and through fighting the discriminatory hiring practices that deprive Black people of equal opportunities in the workplace. Facial recognition is a disastrous tool in the hands of law enforcement, but only if we fail to advocate for systemic changes.


References:

Åkerman, A. (2020, May 12). Meet the Googlers working to ensure tech is for everyone. Retrieved October 09, 2020, from https://blog.google/technology/ai/googlers-leading-machine-learning-fairness/

Fuchs, H. (2020, October 13). Trump’s Attack on Diversity Training Stifles Racial Reconciliation Efforts. Retrieved October 17, 2020, from https://www.nytimes.com/2020/10/13/us/politics/trump-diversity-training-race.html

Hale, K. (2019, October 15). Amazon Pitches Shady Facial Recognition Laws. Retrieved October 09, 2020, from https://www.forbes.com/sites/korihale/2019/10/01/amazon-pitches-shady-facial-recognition-laws/

Hao, K., & Stray, J. (2019, October 17). Can you make AI fairer than a judge? Play our courtroom algorithm game. Retrieved October 08, 2020, from https://www.technologyreview.com/2019/10/17/75285/ai-fairer-than-judge-criminal-risk-assessment-algorithm/

NYCLU (Ed.). (2020, March 11). Stop-and-Frisk Data. Retrieved October 18, 2020, from https://www.nyclu.org/en/stop-and-frisk-data

Smith, C., McGuire, B., Huang, T., & Yang, G. (2006, December). The History of Artificial Intelligence. Retrieved October 17, 2020, from https://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf

Staff, E. (2019, December 23). US government study ignites racial bias debate in facial recognition tools. Retrieved October 18, 2020, from https://eandt.theiet.org/content/articles/2019/12/us-government-study-ignites-racial-bias-debate-in-facial-recognition-tools/

Vincent, J. (2016, March 24). Twitter taught Microsoft’s AI chatbot to be a racist asshole in less than a day. Retrieved October 09, 2020, from https://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

Facial Recognition
Equality
Race
BlackLivesMatter
Algorithms