OpenAI today said it will begin tracking machine learning models that achieve state-of-the-art efficiency, an effort it believes will help identify candidates for scaling up and for reaching top overall performance. To kick things off, the firm published an analysis suggesting that since 2012, the amount of compute needed to train an AI model to the same image-classification performance on a popular benchmark, ImageNet, has been decreasing by a factor of two every 16 months.
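As a rough illustration of what that rate implies (the figures below are worked out from the stated rate, not quoted from OpenAI), compute that halves every 16 months compounds to roughly a 38x reduction over a seven-year span. The short Python sketch below, which assumes a fixed 16-month halving period and a 2012 through 2019 window, walks through that arithmetic.

```python
# Rough arithmetic sketch: how a 2x efficiency gain every 16 months compounds.
# Assumptions (not from the article): a fixed 16-month halving period and a
# 2012-2019 window, i.e. roughly 84 months.

HALVING_PERIOD_MONTHS = 16   # compute needed halves every 16 months (stated rate)
SPAN_MONTHS = 7 * 12         # assumed window: 2012 through 2019

doublings = SPAN_MONTHS / HALVING_PERIOD_MONTHS
reduction_factor = 2 ** doublings

print(f"Efficiency doublings over {SPAN_MONTHS} months: {doublings:.2f}")
print(f"Implied reduction in training compute: ~{reduction_factor:.0f}x")
# Prints ~5.25 doublings, i.e. roughly a 38x drop in the compute needed to
# reach the same ImageNet classification performance.
```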

Beyond spotlighting top-performing AI models, OpenAI says that publicly measuring efficiency, which here refers to reducing the compute needed to train a model to a specific capability, will paint a quantitative picture of algorithmic progress. It is OpenAI's assertion that this, in turn, will help inform policymaking.
