Algorithms help people see and correct their biases, study shows

May 10, 2024 | Science

Algorithms are a staple of modern life. People rely on algorithmic recommendations to wade through deep catalogs and find the best movies, routes, information, products, people and investments. Because people train algorithms on their decisions – for example, algorithms that make recommendations on e-commerce and social media sites – algorithms learn and codify human biases.

Algorithmic recommendations exhibit bias toward popular choices and toward information that evokes outrage, such as partisan news. At a societal level, algorithmic biases perpetuate and amplify structural racial bias in the judicial system, gender bias in hiring, and wealth inequality in urban development.

Algorithmic bias can also be used to reduce human bias. Algorithms can reveal hidden structural biases in organizations. In a paper published in the Proceedings of the National Academy of Sciences, my colleagues and I found that algorithmic bias can help people better recognize and correct biases in themselves.

The bias in the mirror

In nine experiments, Begum Celikitutan, Romain Cadario and I had research participants rate Uber drivers or Airbnb listings on their driving skill, trustworthiness or the likelihood that participants would rent the listing. We gave participants relevant details, such as the number of trips a driver had completed, a description of the property, or a star rating. We also included an irrelevant, biasing piece of information: a photograph that revealed the age, gender and attractiveness of a driver, or a name that implied that a listing's host was white or Black.

After participants made their ratings, we showed them one of two ratings summaries: one showing their own ratings, or one showing the ratings of an algorithm that was trained on their ratings. We told participants about the biasing feature that might have influenced these ratings; for example, that Airbnb guests are less likely to rent from hosts with distinctly African American names. We then asked them to judge how much influence the bias had on the ratings in the summaries.

Whether participants assessed the biasing influence of race, age, gender or attractiveness, they saw more bias in ratings made by algorithms than in ratings they had made themselves. This algorithmic mirror effect held whether participants judged the ratings of real algorithms or whether we showed participants their own ratings and deceptively told them that an algorithm had made those ratings.

Participants saw more bias in the decisions of algorithms t …
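To make the opening point concrete – that a model trained on people's decisions codifies their bias – here is a minimal sketch, not the authors' code, using entirely synthetic data. The feature names (a driver's trip count as the relevant signal, a binary demographic cue as the irrelevant one) and the effect sizes are hypothetical illustrations.

```python
# A minimal sketch (synthetic data, hypothetical features) of how a model
# trained on human ratings picks up an irrelevant bias in those ratings.
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Relevant signal: e.g. number of trips a driver has completed (standardized).
trips = rng.normal(size=n)

# Irrelevant biasing feature: e.g. a binary demographic cue from a photo or name.
cue = rng.integers(0, 2, size=n)

# Simulated human ratings: mostly driven by the relevant feature,
# but nudged downward by the irrelevant cue (the human bias), plus noise.
ratings = 3.5 + 0.8 * trips - 0.3 * cue + rng.normal(scale=0.5, size=n)

# "An algorithm trained on their ratings": ordinary least squares on both features.
X = np.column_stack([np.ones(n), trips, cue])
coef, *_ = np.linalg.lstsq(X, ratings, rcond=None)

print(f"learned weight on relevant feature (trips): {coef[1]:+.2f}")
print(f"learned weight on irrelevant cue:           {coef[2]:+.2f}")
# The fitted model assigns a clearly nonzero weight to the irrelevant cue,
# so a bias scattered across thousands of individual judgments becomes
# a single explicit, auditable parameter.
```

In this toy setup the bias is baked into each rating in small, noisy doses, which is hard to notice judgment by judgment; the trained model distills it into one coefficient, which is one way to think about why the study's participants found bias easier to see in the algorithm's ratings than in their own.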

