Talk of AI bias must move to accountability to address inequity


Policies should require that ChatGPT and other AI systems be assessed for equity impacts and risks, and demand transparency about design and data disparities. (Pexels Photo)

ChatGPT was hugely and instantly popular when it was released late last year, and people have readily adopted it in their workplaces, at school and at home. Whether it's producing computer code, emails, schedules, travel itineraries or creative writing, its uses seem endless.

But like other generative models that use artificial intelligence algorithms to produce text, image, audio and video content, such technology is not separate from our social and political realities. Researchers have already found that ChatGPT and other generative and traditional artificial intelligence (AI) can reinforce inequality by reproducing bias against and stereotypes of marginalized groups.

Well-founded concerns about the potential of AI to perpetuate inequality have been expressed for years, and grow ever more relevant as it becomes an increasing part of our lives. Researchers and advocates have suggested there should be policies on AI that put fairness and accountability first.  

AI has immense potential. It can improve our productivity and also our predictions and decisions, which in turn can help reduce disparities. We all have unconscious biases that influence our choices and actions. It can be hard to understand how we’ve arrived at them and whether biases have played a role. Because AI is programmed and can be audited and changed, it can theoretically help us be more accurate and fairer.  

For example, researchers have explored how AI can help make processing refugee claims fairer or diagnose diseases more accurately. And a recent study shows that consultants using a later version of ChatGPT (GPT-4) outperformed consultants who did not. Those with skill deficits were particularly likely to benefit, potentially levelling the playing field for people without access to elite training.

But AI is built on existing data, such as images, texts and historical records, so our biases become built in. Its effects on marginalized groups often go unrecognized even as they are perpetuated, because the technology appears to be objective and neutral. Our report on AI research outlines what scholars have found about how AI can contribute to inequity and what can be done to mitigate it.

Data used in AI development plays a key role. ChatGPT has been trained on numerous text databases, including Wikipedia. A slew of recent articles and research has shown that the chatbot can – without intention on the part of programmers – reproduce sexist and racist stereotypes. For example, it associated boys with science and technology and girls with creativity and emotion. It suggested a “good scientist” is a white man, and readily produced racist content. In translations from Bengali and other languages with gender-neutral pronouns, ChatGPT changed these pronouns to gendered ones.  
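
Findings like these can be checked systematically rather than anecdotally. The Python sketch below illustrates one such probe: feed the model sentences whose source pronoun carries no gender and count which gendered pronouns come back. The translate() function is a hypothetical stand-in for whatever system is being audited, and its canned outputs only illustrate the documented failure mode; they are not real model responses.

import re

def translate(sentence: str) -> str:
    # Hypothetical stand-in for the system under audit; replace with a
    # real model call. Canned outputs illustrate the failure mode only.
    canned = {
        "doctor": "He is a doctor.",
        "nurse": "She is a nurse.",
        "engineer": "He is an engineer.",
    }
    role = sentence.split()[-1].rstrip(".")
    return canned[role]

def gendered_counts(text: str) -> dict:
    # Count male- and female-gendered English pronouns in the output.
    tokens = re.findall(r"[a-z']+", text.lower())
    return {"male": sum(t in {"he", "him", "his"} for t in tokens),
            "female": sum(t in {"she", "her", "hers"} for t in tokens)}

# Source sentences use a gender-neutral pronoun, as in Bengali;
# "(they)" is an English gloss for readability.
for probe in ["(they) is a doctor.", "(they) is a nurse.", "(they) is an engineer."]:
    print(probe, "->", gendered_counts(translate(probe)))

A probe like this turns scattered anecdotes into counts that can be tracked across model versions.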

These instances show how groups who are underrepresented, misrepresented or omitted from data will continue to be marginalized by AI trained on that material. Research shows that using more representative data can substantially reduce bias in outcomes. But in complex situations – particularly in the case of generative AI – programmers may not be able to explain how specific outputs are being reached, so it may be difficult to audit and fix them.   
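
Where outcomes can be measured, auditing is at least tractable even without explaining the model's internals. A minimal sketch, with made-up decisions and group labels purely for illustration, compares a system's positive-decision rates across groups, one common fairness check sometimes called a demographic-parity check:

from collections import defaultdict

def selection_rates(decisions, groups):
    # Fraction of positive (1) decisions per group; decisions and
    # groups are parallel lists. Large gaps flag the model for review.
    positive = defaultdict(int)
    total = defaultdict(int)
    for d, g in zip(decisions, groups):
        total[g] += 1
        positive[g] += d
    return {g: positive[g] / total[g] for g in total}

# Made-up audit data: 1 = approved, 0 = denied.
decisions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(decisions, groups)
print(rates)  # {'A': 0.6, 'B': 0.2}
gap = max(rates.values()) - min(rates.values())
print(f"parity gap: {gap:.2f}")  # 0.40

Because a check like this needs only inputs and outputs, it remains possible even when, as with generative AI, nobody can say exactly how a given output was reached.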

Products using AI can also be designed and used in ways that further reinforce inequity. Familiar examples are Amazon’s Alexa and Apple’s Siri, which are named and gendered as women. Researchers have discussed how these AI-powered digital assistants appear to be innovative helpers, but at the same time embody gender stereotypes about women in the home. 

Profit motives may also lead companies to use AI to reproduce sexism and racism. Researcher Safiya Noble explored how Google searches for the term “Black girls” led to first-page results that sexually objectified Black girls and women because they were produced by an algorithm whose primary objective is to drive advertising. 

The consequences are grave. ChatGPT’s ability to perpetuate stereotypes may seem trivial, but repetition of biases and marginalization reinforces their existence. Unequal gender roles are created through repetition of gender norms and expectations. The same may be said of racial stereotypes. AI that is not built equitably can interfere with people’s ability to live safely and free of discrimination.  

In 2018, researchers Joy Buolamwini and Timnit Gebru demonstrated that facial recognition technology is less accurate on darker skin tones because it was trained on a limited database of images. This can lead to misidentification and dangerous consequences for racialized people, as revealed by the New York Times in its reporting on the wrongful arrests of Black men. The pervasiveness of AI, combined with a lack of understanding about how it works and how to make it fair, can obscure the extent of its harms, making it potentially more damaging to equity than having humans work on similar tasks.
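
That result came from disaggregated evaluation: computing error rates for each subgroup instead of a single aggregate score. A short illustrative sketch, using hypothetical labels and predictions, shows how a respectable overall accuracy can conceal a large gap between groups:

from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    # Per-group accuracy; a single overall score can mask large gaps.
    correct = defaultdict(int)
    total = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        total[g] += 1
        correct[g] += int(t == p)
    return {g: correct[g] / total[g] for g in total}

# Made-up evaluation data for illustration only.
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
groups = ["lighter"] * 4 + ["darker"] * 4

overall = sum(int(t == p) for t, p in zip(y_true, y_pred)) / len(y_true)
print("overall accuracy:", overall)               # 0.75 looks acceptable
print(accuracy_by_group(y_true, y_pred, groups))  # {'lighter': 1.0, 'darker': 0.5}

Reporting breakdowns like this before deployment is exactly the kind of transparency the policy measures below would require.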

Public policies at all levels of government can help shape how AI is created and used. They can require that impacts on (in)equity be assessed and that risks be reported before and after an AI-powered product or service is launched. They can also demand transparency about design, data and any disparities in that data. And they can prescribe that the public be informed when AI is used.  

Such policies should be developed with input from diverse communities and multidisciplinary experts, who have different knowledge of and perspectives on AI. This could ensure risks and effects would be considered at the outset rather than after harm is done – and would make developers accountable for their products.   


The European Union AI Act, which is still subject to approval, would be the first comprehensive AI law, requiring that systems be classified based on risk. Some AI tools and programs would be banned for posing unacceptable risks, such as manipulating vulnerable groups. Others with lower risk levels, including generative AI such as ChatGPT, would face requirements to make data sources more transparent.

Similar discussions are taking place in Canada through the proposed Artificial Intelligence and Data Act, and in the United States through its Blueprint for an AI Bill of Rights.

Some have suggested that regulation may stifle innovation, and policies may not be able to keep up with the speed at which AI is being developed. Industry standards will also need to change to prioritize equity, safety and other social considerations. But innovation does not have to come at their expense: developing AI with the goal of reducing inequality is in itself innovative.

A key question about AI is how to train it to align with our social norms and values. Building AI that prioritizes values such as fairness would help create more useful products that better serve all people. Developers who focus on groups that have historically been marginalized in AI design, rather than addressing their interests retroactively, will be innovative while also contributing to a fairer society.

AI is used across every sector, and new technologies such as ChatGPT are becoming ever more integrated in our lives. At the same time, in many places, inequality is widening. Public and organizational policies that emphasize equitable and safe AI are crucial for a more just world.   

Read our recent series on AI:

Series | How should artificial intelligence be regulated?
Will artificial intelligence lead to more unfairness?
Canada is failing to regulate AI amid fear and hype

The IRPP is holding a (free) webinar on artificial intelligence on October 5 at 1 p.m. ET.

Click here to register for this event, which will be held in French.

This article first appeared on Policy Options and is republished here under a Creative Commons license.

