If ChatGPT Reflects Hair Bias, What Other Racial Prejudice Does It Spread?



Published: Apr 14, 2023

Data analyst and public health researcher


Illustration by Sarameeya Aree



Intrigued by all the hype around GPT-4, OpenAI’s newest chatbot release, I had to try it myself. As a young Black woman joining the academic job market, I asked the bot to give me a list of professional hairstyles for a young woman going to a job interview.

It readily provided a list of five hairstyles: sleek low bun, French twist, side part, half-up-half-down and the classic ponytail. It then offered seemingly helpful directions to achieve the looks. For instance, “a sleek, low ponytail can be a professional and easy-to-wear option. Keep it simple and avoid adding too much volume or texture.” It then warned against “overly trendy or complicated styles” and told me to “make sure your hair is well-groomed and out of your face.” 

Now, these seem like reasonable suggestions, and when I think of hairstyles typically worn in a professional environment, these are the ones that come to mind. But without heated styling tools or chemical alteration, they would never work for a natural 3c-4a curly girl like me.

Despite the bot’s ability to pull information from thousands of examples, it failed to mention hairstyles traditionally worn by professional Black women (locs, braids, afros, twist-outs). This might strike some as an irritating but trivial oversight, but it’s far from that. It’s an example of how cultural bias is embedded in the data that AI is trained on.

In this case, it took the form of hair bias—the unequal treatment of individuals based on their hair texture, style, or length. This bias causes real harm. 

Research from the Perception Institute found that “On average Black and white women rated smooth-long hair as two times more professional [than a] textured-afro.” Black women are 1.5 times more likely than their white counterparts to be sent home from work because of their hair.

Even worse, one in five Black women feel social pressure to straighten their hair for work because our hair is 3.4 times more likely to be perceived as “unprofessional.” Unfortunately, hair bias can also be found within elementary school policies, TSA screening protocols and even military guidelines, all of which give the chatbot plenty of examples and written material to draw from while also embedding bias within ChatGPT’s future responses. 

This is not just about hair bias. As a scholar who has researched the underrepresentation and marginalization of women and people of color within knowledge networks like academic journal citations, I see this as an example of the larger danger that AI presents. These chatbots are trained on large datasets that reflect deeply rooted, oppressive social-cultural values and beliefs in America. Since ChatGPT launched in November 2022, many people have noted other instances of racist responses, often in reply to inflammatory questions like “Who should be tortured?” or “Which travelers present heightened security risks at the airport?”

My hair example highlights the more subtle ways that AI systems learn and regurgitate bias. To avoid replicating—and in some cases, magnifying—oppressive structures, designers and users of AI systems must think critically and creatively about how covert forms of structural inequality may be extended through skewed datasets.

Many people assume that because ChatGPT was trained on a large and diverse dataset, which includes billions of words of text from many sources, it will provide accurate, informative responses to a wide range of questions and topics. But we social scientists know that the size of a body of work does not mean a fair or unbiased body of work.

Research shows that in academic journal citations, women and people of color are underrepresented and devalued. Even in my own graduate work, studying healthcare inequalities experienced by Black women, I often find a dearth of citations, even though much research in my field is done by scholars of color. This creates a vicious circle where the absence, denial or exclusion of citations of scholars of color further perpetuates racial dominance in intellectual inquiry. 

In an interview, Kanta Dihal, an AI researcher at the University of Cambridge, stated that ChatGPT “doesn’t have fundamental beliefs. It reproduces texts that it has found on the internet, some of which are explicitly racist, some of which implicitly, and some of which are not.”

In other words, ChatGPT learns like humans learn, by absorbing the content that constantly bombards us. I recommend that, until the vast body of human texts becomes fairer and more equal, we make ChatGPT more discerning than humans.  

Data scientists and engineers should engage with the work of critical social scientists to improve guardrails, attempting to filter out prejudicial responses while strategically creating more diverse content and decolonizing algorithms. Until that happens, even the most advanced chatbot, trained on billions of words of text and capable of passing the world’s toughest exams, will fall short.

Many experts and insiders, from Elon Musk to ethics professors, are worried about technology being smarter than humans. I fear the opposite. I believe such advanced, unregulated and heavily used technologies, riddled with covert bias, will give new meaning to the old computer adage, “garbage in, garbage out.” Or in this case, racially biased content in, racially biased beliefs out.


