In the subfield of machine learning known as natural language processing (NLP), robustness testing is the exception rather than the norm. That is particularly problematic in light of work showing that many NLP models exploit spurious correlations that hurt their performance outside of the specific tests they were evaluated on. One study found that 60% to 70% of the answers given by NLP models were embedded somewhere in the benchmark training sets, indicating that the models were often simply memorizing answers. Another study, a meta-analysis of over 3,000 AI papers, found that metrics used to benchmark