Discover more from Seeking Veritas by The Professor, The Poet & Friends
Digital Equity Series - AI Stereotypes
Brian Sankarsingh writes about the possible connections between AI and systemic racism
Artificial Intelligence (AI) has become an integral part of our lives, shaping everything from online shopping recommendations to medical diagnoses. This powerful technology is not without flaws, however: it sometimes perpetuates harmful stereotypes that exacerbate systemic racism. One alarming example is Google's image search, which has often returned pictures of Black women in response to the query "unprofessional hairstyles." A seemingly innocuous search thus becomes a stark reminder of how AI can both reflect and reinforce deep-seated prejudice. Many of the images returned show Black women wearing natural or braided hairstyles. This disproportionate representation does more than echo a harmful stereotype; it perpetuates systemic racism by lending the authority of a search engine to the idea that Black hairstyles are "unprofessional."
AI algorithms, including those behind Google's search engine, operate on vast datasets and patterns derived from the internet. When biased content and discriminatory perceptions are ingrained in society, they find their way into those datasets, producing distorted and offensive search results. AI systems learn from the data they are fed; if that data contains bias or prejudice, the system reproduces it in its decisions and recommendations. In the case of the "unprofessional hairstyles" query, data bias is the crucial factor. Hairstyle-related biases are deeply rooted in society, reinforced through media, professional dress codes, and workplace expectations. For decades, Black individuals have faced discrimination based on their natural hair, often being forced to conform to Eurocentric beauty standards in professional settings. That discrimination has prompted calls for legal protections against hair-based discrimination and a broader societal conversation about embracing natural hair as professional. Yet search engines and other AI systems can still inadvertently perpetuate these biases, as the Google results show.
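The mechanism described above can be made concrete with a toy sketch. This is not Google's ranking system; it is a hypothetical, deliberately simplified "model" with made-up data, showing how a system that merely learns label frequencies from a skewed corpus will reproduce that skew in its results:

```python
# Toy illustration with HYPOTHETICAL data: a ranker that learns only
# label frequencies from its training corpus reproduces the corpus's bias.
from collections import Counter

# Invented labelled corpus. The skew -- natural and braided styles tagged
# "unprofessional" -- is the bias baked into the training data.
corpus = [
    ("locs", "unprofessional"), ("braids", "unprofessional"),
    ("afro", "unprofessional"), ("bun", "professional"),
    ("bob", "professional"), ("braids", "unprofessional"),
]

def rank_for(label, corpus):
    """Order hairstyles by how often they co-occur with the query label."""
    counts = Counter(style for style, tag in corpus if tag == label)
    return [style for style, _ in counts.most_common()]

# The "search results" for the biased label are exactly the styles the
# corpus associated with it -- the model never questions the labels.
print(rank_for("unprofessional", corpus))  # ['braids', 'locs', 'afro']
```

Nothing in the code is malicious; the harm comes entirely from the training data, which is precisely the point about real-world systems.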
The link between AI stereotypes and systemic racism goes beyond search results. It extends to facial recognition technology, predictive policing, and hiring algorithms, all of which can unintentionally favor certain racial or ethnic groups over others. Studies have shown that facial recognition technology is less accurate for people with darker skin tones, leading to wrongful arrests and other injustices; conversely, in an era of affirmative action, such errors could disadvantage people with Anglicized names or lighter skin tones. Predictive policing algorithms can disproportionately target minority communities, fueling racial profiling and over-policing. Hiring algorithms, if not designed carefully, can favor certain demographics, producing unequal opportunities in the job market. Each of these cases reflects the larger problem of AI systems inheriting and amplifying biases that exist in the real world.
To address the link between AI stereotypes and systemic racism, several steps must be taken:
· Diverse Representation: Diverse representation in the development of AI systems is crucial. This includes diverse teams of engineers and data scientists, as well as diverse datasets that account for different races, ethnicities, and cultures.
· Algorithmic Audits: Periodic audits of AI algorithms should be conducted to identify and rectify biases. These audits can help developers understand how their algorithms are operating and make necessary adjustments.
· Ethical Guidelines: Establishing ethical guidelines for AI development and use can help ensure that AI technologies do not perpetuate harmful stereotypes or reinforce systemic racism.
· Transparency and Public Awareness: Raising public awareness about the potential biases in AI systems can lead to increased scrutiny and accountability for companies and organizations that deploy AI technology.
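An algorithmic audit of the kind described above can begin with something as simple as comparing outcome rates across groups. One common screening heuristic (borrowed from US employment-selection practice) is the "four-fifths rule": flag a system if one group's favorable-outcome rate falls below 80% of another's. A minimal sketch, using hypothetical numbers rather than any real hiring system's data:

```python
# Minimal audit sketch with HYPOTHETICAL numbers: screen a decision
# system's outcomes using the four-fifths (80%) rule of thumb.

def selection_rate(selected, total):
    """Fraction of applicants in a group who received the favorable outcome."""
    return selected / total

def four_fifths_check(rate_a, rate_b):
    """Return (impact ratio, passes): flag potential adverse impact
    if the lower selection rate is under 80% of the higher one."""
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return ratio, ratio >= 0.8

# Invented outcomes for two applicant groups scored by a hiring algorithm.
rate_group_a = selection_rate(45, 100)   # 45% selected
rate_group_b = selection_rate(27, 100)   # 27% selected

ratio, passes = four_fifths_check(rate_group_a, rate_group_b)
print(f"impact ratio = {ratio:.2f}, passes 80% screen: {passes}")
# impact ratio = 0.60, passes 80% screen: False -> audit should dig deeper
```

A failed screen does not prove discrimination on its own, but it tells auditors exactly where to look, which is what periodic audits are for.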
The link between AI stereotypes and systemic racism is an issue that demands attention. The biased Google search results are a stark reminder of how deeply embedded prejudices can persist in the digital world. It is the responsibility of developers, organizations, and society at large to work toward more inclusive and equitable AI systems. By identifying these biases and taking steps to mitigate them, we can move closer to a future where AI serves as a force for positive change rather than one that perpetuates harmful stereotypes and amplifies systemic racism.
Bio: Brian Sankarsingh is a Trinidadian-born Canadian immigrant who moved to Canada in the 1980s. He describes himself as an accidental poet; with a passion for advocacy, a penchant for prose, and an unapologetic style, he offers his poetry as social and political commentary.