A group of Google Brain and Carnegie Mellon University researchers this week introduced XLNet, an AI model that outperforms Google's cutting-edge BERT on 20 NLP tasks and achieves state-of-the-art results on 18 benchmark tasks. BERT (Bidirectional Encoder Representations from Transformers) is Google's language representation model for unsupervised pretraining of NLP models, first introduced last fall.

XLNet achieved state-of-the-art performance across a range of tasks, including seven language understanding tasks, three reading comprehension tasks, and seven text classification tasks spanning the Yelp and IMDB datasets. On text classification, XLNet cut error rates by up to 16% compared to BERT. Google open-sourced BERT in the fall of 2018.

XLNet harnesses the best of the autoregressive and autoencoding methods used for unsupervised pretraining through a permutation language modeling objective: instead of predicting tokens strictly left to right, or predicting masked tokens from both sides, it predicts each token from the tokens that precede it in a randomly sampled ordering of positions.
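To make the idea concrete, here is a toy sketch of which context each prediction may condition on under a sampled factorization order. This is an illustration of the general permutation-language-modeling idea only, not XLNet's actual implementation (which uses two-stream attention masks inside a Transformer); the function name and structure are hypothetical.

```python
import random

def permutation_lm_targets(tokens, seed=0):
    """For one sampled factorization order, return (target, visible_context)
    pairs: each token is predicted from the tokens that come earlier in a
    random permutation of positions, not in left-to-right order."""
    rng = random.Random(seed)
    order = list(range(len(tokens)))
    rng.shuffle(order)                      # sampled factorization order
    pairs = []
    for i, pos in enumerate(order):
        visible = sorted(order[:i])         # positions earlier in the order
        pairs.append((tokens[pos], [tokens[p] for p in visible]))
    return pairs

for target, context in permutation_lm_targets(["New", "York", "is", "a", "city"]):
    print(f"predict {target!r} given {context}")
```

Because the permutation is resampled across training examples, every token is eventually predicted with bidirectional context (as in BERT's autoencoding objective) while each individual prediction remains a proper autoregressive factorization, with no artificial [MASK] tokens at pretraining time.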
