AI researchers from MIT, Intel, and Canadian AI initiative CIFAR have found high levels of stereotypical bias in some of the most popular pretrained models, including BERT, GPT-2, RoBERTa, and XLNet. The analysis was performed as part of the launch of StereoSet, a data set, challenge, leaderboard, and set of metrics for evaluating racism, sexism, and stereotypes related to religion and profession in pretrained language models.

The authors believe their work is the first large-scale study to show stereotypes in pretrained language models beyond gender bias. BERT is generally known as one of the top-performing language models in recent years, while GPT-2, RoBERTa, and XLNet each claimed top spots on the GLUE leaderboard last year. Half of the GLUE leaderboard top 10 today, including RoBERTa, are variations of BERT.
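To give a sense of how such a bias metric can work, the sketch below computes a simplified "stereotype score": the fraction of sentence pairs for which a model assigns a higher score to the stereotypical variant than to the anti-stereotypical one, where 50% would indicate no systematic preference. This is only an illustration, not the benchmark's actual metric; the `score` function here is a hypothetical stand-in for a real language model's log-likelihood, and the example pairs and scorer are invented.

```python
# Hypothetical sketch of a stereotype-score metric, assuming a scoring
# function that returns a model's preference (e.g. log-likelihood) for
# a sentence. A real evaluation would score sentences with a pretrained
# LM such as BERT; here we use a toy stand-in scorer.

def stereotype_score(pairs, score):
    """Percentage of pairs where the model prefers the stereotypical
    sentence over the anti-stereotypical one (50.0 means unbiased)."""
    preferred = sum(1 for stereo, anti in pairs if score(stereo) > score(anti))
    return 100.0 * preferred / len(pairs)

# Invented example pairs (stereotypical sentence first).
pairs = [
    ("He is a doctor.", "She is a doctor."),
    ("The engineer fixed his code.", "The engineer fixed her code."),
]

# Fake scorer for demonstration only: prefers shorter sentences.
fake_score = lambda s: -len(s)

print(stereotype_score(pairs, fake_score))
```

With this toy scorer, the first pair's stereotypical sentence is preferred and the second is a tie, giving a score of 50.0; swapping in a real language model's sentence scores is what reveals genuine bias.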
