Diversity and Equality in Technology
Here’s How Technology Can Be Racist.
Emerging technologies such as artificial intelligence can adversely affect the diversity and equal treatment of ethnic minorities.

Introduction and Context
Can technology be racist? The short answer is YES. And technologists can be racist too! Unfortunately, this racism can affect vulnerable people’s health and well-being.
Therefore, as a technologist and health advocate, I aim to raise awareness of this important societal issue.
Racism is a global pandemic. It is ubiquitous, has historical roots, and is a political tool affecting millions of innocent people who belong to minority groups.
Unfortunately, emerging technology stacks — particularly artificial intelligence (AI), big data analytics, deep learning (DL), neural networks, natural language processing (NLP), and machine learning (ML) fields — are also part of this undesirable situation.
There are two primary types of racism. The first is racism performed by individuals and communities. The second is systemic racism, which adversely impacts minorities and is more challenging to address.
Systemic racism, also known as institutional racism, is embedded in the laws and regulations of organizations, states, and countries. Systemic racism can impact critical societal rights such as employment, education, and healthcare through discrimination.
You may wonder what racism has to do with technology. The answer: a great deal. My purpose here is to provide valuable insights into the role of technology in racism by sharing an overview of my research in the field.
Face recognition powered by artificial intelligence illustrates the problem. Inequity in face recognition algorithms is well documented. For example, according to an article by Alex Najibi published at Harvard University:
“Face recognition algorithms boast high classification accuracy (over 90%), but these outcomes are not universal. A growing body of research exposes divergent error rates across demographic groups, with the poorest accuracy consistently found in subjects who are female, Black, and 18–30 years old”.
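The quote’s central point, that aggregate accuracy can hide large per-group gaps, is easy to demonstrate with a disaggregated evaluation. Below is a minimal sketch with entirely made-up groups, labels, and predictions (none of these numbers come from the cited research):

```python
# Minimal sketch (hypothetical data) of a per-group accuracy audit, the
# kind of disaggregated evaluation the research above argues for: overall
# accuracy can look acceptable while one group fares far worse.
from collections import defaultdict

# (demographic_group, true_label, predicted_label) -- fabricated examples
results = [
    ("lighter_male", 1, 1), ("lighter_male", 0, 0), ("lighter_male", 1, 1),
    ("lighter_male", 0, 0), ("darker_female", 1, 0), ("darker_female", 0, 0),
    ("darker_female", 1, 1), ("darker_female", 1, 0),
]

totals = defaultdict(lambda: [0, 0])  # group -> [correct, total]
for group, truth, pred in results:
    totals[group][0] += int(truth == pred)
    totals[group][1] += 1

overall = sum(c for c, _ in totals.values()) / sum(t for _, t in totals.values())
print(f"overall accuracy: {overall:.0%}")          # looks fine in aggregate
for group, (correct, total) in totals.items():
    print(f"{group}: {correct}/{total} = {correct/total:.0%}")
```

Here the overall score is 75%, yet one group is classified perfectly while the other is right only half the time, exactly the pattern the audit is designed to expose.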

Here is a definitive study from the Massachusetts Institute of Technology (MIT) that validates these issues and takes action: “The Gender Shades Project pilots an intersectional approach to inclusive product testing for AI.
“Gender Shades is a preliminary excavation of inadvertent negligence that will cripple the age of automation and further exacerbate inequality if left to fester. The deeper we dig, the more remnants of bias we will find in our technology.
We cannot afford to look away this time because the stakes are too high. We risk losing the gains made with the civil rights movement and women’s movement under the false assumption of machine neutrality.
Automated systems are not inherently neutral. They reflect the priorities, preferences, and prejudices — the coded gaze — of those who have the power to mold artificial intelligence”.
After this brief background, I will offer a perspective on how racism and gender inequality occur in the technology field, drawing on credible and up-to-date sources and introducing the implications and impact.
A Brief Review of the Literature for Racism and Inequality in Technology

Harvard University Press published a 275-page anthology titled Racism in America. In the opening of this outstanding collection contributed by several researchers, Annette Gordon-Reed (an American historian and law professor) at Harvard University made an eye-opening statement:
Although George Floyd’s death was the spark, there was also an instantaneous recognition that the circumstances giving rise to what happened to him that day were systemic, the product of many years of thoughts, choices, and actions. People also understood that history has mattered greatly to this process, particularly the tortured history of race and White supremacy (not just a matter of White and Black, but of White people and people of color, generally) that has been in place for centuries. Current policies, shaped by that history, must be subjected to scrutiny and critiqued. Plans for the future, based upon new understandings about how to achieve a more racially just society, must also be formulated.
This remarkable anthology covers a combination of history, economy, political sciences, cultural commentaries, and biographies addressing many issues in America.
However, these points also relate to other countries. These contributors persuasively show that “the worldwide system of slavery, the ambition for empire that disrupted the lives of indigenous people on several continents, and the powerful legacies from those events have fueled the BLM (Black Lives Matter) movement and the current desire for a reckoning”. You can read this comprehensive anthology for free at this link.
Systemic racism in the technology context has been researched by psychologists, philosophers, political scientists, and technologists.
For example, in 2005, Derek Hook published a paper titled “Affecting whiteness: racism as a technology of affect” in the International Journal of Critical Psychology. You can read the paper for free at this link.
In this paper, two affective dimensions of racism are introduced: the strategic incitements of political rhetoric and a shifting constellation of identifications (such as that of ‘whiteness’).
The author concludes that “unless we can grapple with the vicissitudes of such modes of affective formation, with how these modes come to be operationalized as technological elements of broader procedures of governmental logic, we fail to appreciate the tenacity and slipperiness of ‘whiteness’ in our postimperial era”.
Outstanding critical research in material culture was published in 2000 in a book titled “Technology and the Logic of American Racism: A Cultural History of the Body as Evidence” by Sarah E. Chinn.
Chinn examined several social case studies. She touched on the American Red Cross’s decision to segregate the blood of Black and white donors during World War II and discussed its ramifications for American culture. She also examined fingerprinting, blood tests, and DNA tests, and cited the trial of O.J. Simpson as an example of the racist nature of criminology.
Many academics and thought leaders reviewed this book. For example, Priscilla Wald from Duke University said:
“Technology and The Logic of American Racism is important not only for its analysis of racism in the US but also for its exploration of the relationships among the languages of science, law, literature, and popular journalism. Chinn’s work shows that students of the humanities have a significant contribution to make to the study of the impact of historical and contemporary scientific developments on the shape of US culture”.
The Choice magazine commented on the importance of this book:
“Chinn’s study goes far beyond these examples, providing some of the clearest thinking available on the relationship between bodies and culture. The argument is never reductive. With impressive grace, the author manages both to reveal how bodies have been made to testify and to be conscious of ‘the gingerliness, respect, strength, edginess, and tenderness with which we should approach our bodies and the bodies of others, whether in words, concepts, or touch.’ Highly recommended for all academic collections.”
Another interesting study was published in the Journal of Technology Education titled “Perceptions About the Role of Race in the Job Acquisition Process: At the Nexus of Attributional Ambiguity and Aversive Racism in Technology and Engineering Education.”
Yolanda Flores Niemann, a Professor of Psychology, and Nydia C. Sánchez, a researcher in the Department of Counseling and Higher Education, both at the University of North Texas, authored this paper in 2015.
This study explored the role of race in the negative job acquisition outcomes of African American graduates of a federally funded multi-institution doctoral training program. You can read the paper for free at this link.
The International Neuroethics Society (INS) holds annual meetings covering the impact and implications of technology for racism.
At the 2020 annual meeting, a keynote speaker “delivered a riveting explanation of how racism is deeply embedded in many technologies, from widely used apps to complex algorithms, that are presumed to be neutral or even beneficial but often heighten discrimination against Black people and other marginalized groups”.
A few highlights from the INS meeting give us interesting perspectives. First, sociologist Dr. Ruha Benjamin described problems of racism embedded in our processes of building and using technologies.
For example, Dr. Benjamin cited a horrific example that came to light in a newspaper report in 2015: the North Miami police department used mug shots of Black men for target practice, a previously hidden instance of the anti-Black sentiments that still distort policing.
Two academic studies cited by Benjamin showed how difficult it is to root out ingrained prejudices. Researchers at the Yale School of Education asked a group of preschool teachers to watch video clips of children in a classroom and look for signs of challenging behavior, the kind that might get kids removed from the classroom or the school. Eye-tracking technology showed that the teachers spent more time looking at Black boys than at white children.
In 2014, Stanford University researchers found that “when white people were shown statistics about the vastly disproportionate number of Black people in prison, they did not become supportive of criminal justice reform to relieve injustices against Black people but instead became more supportive of punitive policies, such as California’s Three Strikes Law and New York City’s stop-and-frisk policy, that was partly if not mainly responsible for the disproportionate incarceration rates.”
In a paper titled “Advancing Racial Literacy in Tech,” Dr. Jessie Daniels, Mutale Nkonde, and Dr. Darakhshan Mir articulate why ethics, diversity in hiring, and implicit bias training are not enough to establish racial literacy in technology workplaces.
They highlight that “racial literacy is a new method for addressing the racially disparate impacts of technology. It is a skill that can be developed, a capacity that can be expanded”. You can download the paper free at this link.
Miriam Tager (a professor in the Education Department at Westfield State University) published a research book called “Technology Segregation: Disrupting Racist Frameworks in Early Childhood Education”. This study challenges racist frameworks and reveals disruptions and strategies to counter deficit discourse rooted in white supremacy.
Tager’s research covers two qualitative studies in the Northeast. It reveals that school segregation and technology segregation are one and the same.
Utilizing critical race theory as the theoretical framework, this research finds that young Black children are denied technological access, directly affecting their learning trajectories. This book defines the problem of technology segregation in terms of policy, racial hierarchies, funding, residential segregation, and the digital divide.
An article on Vice reported that “‘Significant Racial Bias’ Found in National Healthcare Algorithm Affecting Millions of People”. The article covers a series of studies arguing that, by focusing on costs as a proxy for health, risk algorithms ignore racial inequalities in healthcare access.
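The proxy problem those studies describe can be reduced to a toy calculation. In the sketch below (hypothetical numbers, not figures from the studies), two patients have identical medical need, but unequal access to care means one generates lower historical costs, so a cost-trained risk score ranks them differently:

```python
# Toy illustration (hypothetical numbers) of the cost-as-proxy problem:
# a group with less access to care generates lower costs at the same
# level of illness, so a risk score trained on cost under-ranks its need.

patients = [
    # (patient, true_health_need, access_factor)
    ("patient_a", 10.0, 1.0),  # full access: spending tracks need
    ("patient_b", 10.0, 0.5),  # reduced access: half the spending, equal need
]

scores = {}
for name, need, access in patients:
    historical_cost = need * access  # what a cost-based algorithm sees
    scores[name] = historical_cost   # "risk" scored by past spending
    print(f"{name}: true need {need}, cost-based risk score {scores[name]}")

# Equal need, unequal scores: the proxy has absorbed the access gap.
```

The point of the sketch is that the bias needs no malicious intent; choosing cost as the target variable is enough to encode the access disparity into the score.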
A research study titled “Dirty Data, Bad Predictions: How Civil Rights Violations Impact Police Data, Predictive Policing Systems, and Justice” was conducted by Rashida Richardson (Northeastern University School of Law), Jason Schultz (New York University School of Law) and Kate Crawford (AI Now Institute; Microsoft Research).
In this definitive study, they analyzed 13 US jurisdictions that have used or developed predictive policing tools while under government commission investigations or federal court-monitored settlements, consent decrees, or memoranda of agreement stemming from corrupt, racially biased, or otherwise illegal policing practices.
In particular, they examined the link between unlawful and biased police practices and the data available to train or implement these systems.
Dr. Steven Hyman (a distinguished service professor at Harvard, Director of the Stanley Center for Psychiatric Research [Stem Cell Institute], and board chairman of the Dana Foundation) noted that “the topic of algorithmic bias is starting to emerge as an extraordinary challenge in healthcare and medicine”.
How Can Technology Be Racist?
During my ethnographic research in studying diversity and equality in technology workplaces, I found much evidence of racism in the use of technology tools and racist behavior by technology professionals.
I shared my findings in several academic papers in the 1990s. I also shared one of my research reports on cultural diversity in IT workplaces in an article on this platform.
My studies demonstrated that racism in technology fields existed. While the situation was subtle in many companies, it was also possible to see overt cases in some organizations.
In addition, I witnessed very harsh criticisms against members of some ethnic groups. The most affected groups were Indians, Africans, and Asians, particularly contractors coming from China, Hong Kong, Taiwan, Thailand, Indonesia, and Vietnam.
In addition to the technology itself, I also witnessed racist technologists in the workplace. I documented them in comprehensive ethnographic case study reports.
Let me share a sample. Some Asian colleagues were called condescending names and even subjected to execrable slurs. For example, I will never forget when one of my colleagues made an error and a Caucasian supervisor said:
“You bloody Chinese people have no idea configuring a damn computer system. Why don’t you go to your darn country, create fake products and get rich!”.
The incident was reported to the organization’s governance and compliance committee. The organization was firmly focused on diversity, as almost 90% of employees had an ethnic background. The supervisor faced disciplinary action. However, the incident left an awful impression on these employees and shattered trust within the team.
There were many such examples during my observations. I also heard about similar incidents from colleagues in other organizations, technical community members, and friends on social media.
There are numerous mentions of technology being racist in the literature. They cover specific, systemic, structural, and institutional elements of racism in the technology landscape. I briefly covered some of them in the literature section above at a high level.
Typical racism situations related to technology fall into two categories: the actual use of technologies to discriminate against people, and the unconscious bias of individual technology professionals at technology companies.
From a linguistic perspective, some technical vocabulary reflects racist legacies. For example, the terms “master” and “slave” in cluster technologies reflect such an inclination.
Machine learning has the potential to produce racist outcomes. For example, the large datasets used to train machine learning algorithms are often derived from biased sources with hidden racist elements.
Machine learning algorithms cannot recognize these unintended biases; they faithfully reproduce whatever patterns the data contains. Diversity and equality officers, in turn, struggle to detect such anomalies and noncompliance.
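To make the preceding point concrete, here is a minimal, self-contained sketch (with fabricated hiring records, not real data) of how a model trained on biased historical labels reproduces that bias. The toy “model” simply predicts the majority historical outcome for each group, yet that is enough to inherit the skew:

```python
# Minimal sketch (hypothetical data) of bias propagation: a model trained
# on biased historical labels reproduces the bias at prediction time.
from collections import defaultdict

# Historical hiring records: (group, qualified, hired). The two groups are
# equally qualified, but group "B" was hired less often -- a biased label.
history = [("A", True, True)] * 80 + [("A", True, False)] * 20 \
        + [("B", True, True)] * 40 + [("B", True, False)] * 60

# "Training": estimate P(hired | group) from the biased records.
counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
for group, _, hired in history:
    counts[group][0] += int(hired)
    counts[group][1] += 1

def predict(group):
    hired, total = counts[group]
    return hired / total >= 0.5  # predict the majority historical outcome

# Equally qualified candidates now receive different predictions.
print(predict("A"))  # True  -- group A candidate predicted "hire"
print(predict("B"))  # False -- group B candidate predicted "no hire"
```

Nothing in the code inspects race or ethnicity directly; the skewed historical labels alone are enough to produce discriminatory predictions, which is exactly why auditing training data matters as much as auditing algorithms.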
As Dr. Benjamin mentioned (in describing how racism is coded into technology), “machine learning programs use predictive police technologies to pinpoint where street crime is apt to occur usually in lower-income Black neighborhoods, which are more heavily policed to start with.”
Recently, Stanford University held a seminar titled “What is Anti-Racist Technology?” and shared the presentation on YouTube. It provides valuable examples from several angles.
Therefore, I have attached the seminar for your review.