Digital Equity Series - AI Stereotypes
Brian Sankarsingh writes about the possible connections between AI and systemic racism
Artificial Intelligence (AI) has become an integral part of our lives, influencing everything from online shopping recommendations to medical diagnoses. However, this powerful technology is not without its flaws and limitations: it sometimes perpetuates harmful stereotypes that further entrench systemic racism. One alarming example is Google's image results for the search "unprofessional hairstyles," which disproportionately return pictures of Black women wearing natural and braided hairstyles. This seemingly innocuous query is a stark reminder of how AI can both reflect and reinforce deep-seated prejudice: by over-representing Black women in these results, the search engine perpetuates the idea that Black hairstyles are "unprofessional."
AI algorithms, including those behind Google's search engine, operate on vast datasets and patterns derived from the internet. When biased content and discriminatory perceptions are ingrained in society, they find their way into those datasets, producing distorted and offensive search results. This example highlights a broader problem: AI algorithms learn from the data they are fed, and if this data contains bias or prejudice, the AI can perpetuate these biases in its decisions and recommendations. In the case of the "unprofessional hairstyles" query, data bias is the crucial factor. Hairstyle-related biases are deeply rooted in society and are reinforced through media, professional dress codes, and social expectations. For decades, Black individuals have faced discrimination based on their natural hair, often being pressured to conform to Eurocentric beauty standards in professional settings. That discrimination has prompted calls for legal protections against hair-based discrimination and a broader societal conversation about embracing natural hair as professional. Nevertheless, AI systems, search engines included, can inadvertently perpetuate these harmful biases, as the Google results demonstrate.
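As a concrete illustration of this mechanism, consider the following minimal sketch, which uses Python with numpy and scikit-learn on entirely synthetic data (an assumption for illustration only; it reflects nothing about Google's actual systems). Two groups have identical "skill," but the historical labels encode a prejudice against one group, and the trained model absorbs it:

```python
# Minimal sketch of label bias propagating into a model.
# All data here is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Feature 0: a proxy for group membership (e.g., a hairstyle category).
# Feature 1: actual job performance, drawn identically for both groups.
group = rng.integers(0, 2, size=n)
skill = rng.normal(0.0, 1.0, size=n)

# Biased historical labels: group 1 is tagged "unprofessional" more often
# at the SAME skill level -- the bias lives in the labels, not the people.
p_unprofessional = 1 / (1 + np.exp(-(-skill + 1.5 * group)))
label = rng.random(n) < p_unprofessional

X = np.column_stack([group, skill])
model = LogisticRegression().fit(X, label)

# The learned weight on `group` is large and positive: the model has
# absorbed the labelers' prejudice and will now apply it at scale.
print("weight on group feature:", model.coef_[0][0])
print("weight on skill feature:", model.coef_[0][1])
```

Nothing in the code "decides" to discriminate; the discrimination arrives pre-packaged in the training labels, which is exactly why a clean-looking algorithm can still produce prejudiced results.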
The link between AI stereotypes and systemic racism extends beyond search results to facial recognition technology, predictive policing, and hiring algorithms, all of which can unintentionally favor certain racial or ethnic groups over others. Studies have shown that facial recognition technology is less accurate for people with darker skin tones, which has led to wrongful arrests and other injustices; conversely, in an era of affirmative action, biased systems could disadvantage people with Anglicized names or lighter skin tones. Predictive policing algorithms can disproportionately target minority communities, fueling racial profiling and over-policing. Hiring algorithms, if not designed carefully, can favor certain demographics and create unequal opportunities in the job market. Each of these cases reflects the same underlying problem: AI systems inherit and amplify the biases that exist in the real world.
To address the link between AI stereotypes and systemic racism, several steps must be taken:
· Diverse Representation: Diverse representation in the development of AI systems is crucial. This includes diverse teams of engineers and data scientists, as well as diverse datasets that account for different races, ethnicities, and cultures.
· Algorithmic Audits: Periodic audits of AI algorithms should be conducted to identify and rectify biases. These audits help developers understand how their algorithms behave in practice and make the necessary adjustments; a minimal sketch of one such check appears after this list.
· Ethical Guidelines: Establishing ethical guidelines for AI development and use can help ensure that AI technologies do not perpetuate harmful stereotypes or reinforce systemic racism.
· Transparency and Public Awareness: Raising public awareness about the potential biases in AI systems can lead to increased scrutiny and accountability for companies and organizations that deploy AI technology.
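To make the audit idea concrete, here is a minimal sketch of one widely used check, the "four-fifths rule" for disparate impact, applied to hypothetical model decisions. The function, the groups, and the outcome numbers are all illustrative assumptions, not any specific auditing product:

```python
# Minimal sketch of a disparate impact audit ("four-fifths rule").
# Groups and outcomes below are hypothetical illustration data.
from collections import defaultdict

def disparate_impact_ratio(groups, selected):
    """Ratio of the lowest group selection rate to the highest.
    Values below ~0.8 are a conventional red flag for adverse impact."""
    counts = defaultdict(lambda: [0, 0])  # group -> [selected, total]
    for g, s in zip(groups, selected):
        counts[g][0] += int(s)
        counts[g][1] += 1
    rates = {g: sel / tot for g, (sel, tot) in counts.items()}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical output of a hiring model for two demographic groups:
groups = ["A"] * 100 + ["B"] * 100
selected = [True] * 60 + [False] * 40 + [True] * 35 + [False] * 65

ratio, rates = disparate_impact_ratio(groups, selected)
print("selection rates:", rates)               # A: 0.60, B: 0.35
print(f"disparate impact ratio: {ratio:.2f}")  # 0.58 -> below the 0.8 threshold
```

A check like this is deliberately simple; real audits examine many metrics and how the data was collected, but even this single ratio can surface a problem that is invisible in aggregate accuracy numbers.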
The link between AI stereotypes and systemic racism is an issue that demands attention. The biased Google search results are a stark reminder of how deeply embedded prejudices can persist in the digital world. It is the responsibility of developers, organizations, and society at large to work towards more inclusive and equitable AI systems. By identifying these biases and taking steps to mitigate them, we can move closer to a future where AI serves as a force for positive change rather than perpetuating harmful stereotypes and deepening systemic racism.
Bio: Brian Sankarsingh is a Trinidadian-born Canadian immigrant who moved to Canada in the 1980s. He describes himself as an accidental poet with a passion for advocacy and a penchant for prose; in an unapologetic style, he offers his poetry as social and political commentary.
Great article, appreciate the links with examples. “AI algorithms learn from the data they are fed” There’s the issue of what goes in, and what doesn’t go in. Incomplete or excluded data (whether inadvertent or deliberate) also leads to distortion and biased output.
I like the style of this Digital Equity Series — calm delivery of a clear stance, with a thorough breakdown of the “why” behind it. It makes it easy to process the content objectively, and tamps down the urge for a defensive reaction because the starting point isn’t to attack or insult, so the message doesn’t seem arrogant or accusatory. Regardless of the reader’s opinion or initial stance, it’s easy to acknowledge and agree with all the specific points that make sense.
Thank you Brian. However, I would suggest it is not just stereotypes or systemic racism that AI algorithms perpetuate through the data they collect. Specific algorithms can also predict what we prioritize when making purchases, our tolerance for price increases before we change brands, or what it would take to make us support a different political party. By blending data from people in similar demographics, algorithms can be developed to predict the choices of these larger groups, or to appeal to their values or innate biases. In the hands of the less scrupulous, or those protecting their interests, our data can be shared legally - as when companies notify us that they have the right to share it at their discretion. Health insurers alone know how often and which members of our families see psychotherapists, oncologists, or plastic surgeons. Think of the data our debit cards alone provide to our banks. Algorithms perpetuating biases and stereotypes grew right alongside the growth of the internet. Now, with the introduction of AI, all of this is growing exponentially. While I wholeheartedly support your efforts to put steps in place to address these issues, I fear it may be too late to rein it in. The cat is out of the bag.