We’re putting another year of exciting developments in artificial intelligence (AI) and deep learning behind us – one filled with remarkable progress, controversies and, of course, disputes. As we wrap up 2022 and prepare to embrace what 2023 has in store, here are some of the most notable overarching trends that marked this year in deep learning.
1. Scale continues to be an important factor
One theme that has remained constant in deep learning over the past few years is the drive to create bigger neural networks. The availability of compute resources, specialized AI hardware, large datasets, and scale-friendly architectures like the transformer model has made it possible to keep pushing neural networks to larger sizes.
For the moment, companies are obtaining better results by scaling neural networks to larger sizes. In the past year, DeepMind announced Gopher, a 280-billion parameter large language model (LLM); Google announced Pathways Language Model (PaLM), with 540 billion parameters, and Generalist Language Model (GLaM), with up to 1.2 trillion parameters; and Microsoft and Nvidia released the Megatron-Turing NLG, a 530-billion-parameter LLM.
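To get a feel for where these parameter counts come from, here is a back-of-the-envelope sketch using the common approximation that a decoder-only transformer has roughly 12 × n_layers × d_model² weights in its attention and feed-forward blocks. The configurations and vocabulary size below are illustrative assumptions, not the published specs of the models named above:

```python
# Rough parameter-count estimate for a decoder-only transformer.
# Approximation: ~12 * n_layers * d_model**2 for attention + feed-forward
# weights, plus vocab_size * d_model for the token embedding table.
# Configs and vocab size are illustrative assumptions, not real model specs.

def approx_params(n_layers: int, d_model: int, vocab_size: int = 50_000) -> int:
    block_params = 12 * n_layers * d_model ** 2  # attention + MLP weights
    embedding_params = vocab_size * d_model      # token embedding table
    return block_params + embedding_params

for name, n_layers, d_model in [("small", 12, 768), ("very large", 96, 12_288)]:
    print(f"{name}: ~{approx_params(n_layers, d_model) / 1e9:.1f}B parameters")
```

Running this shows how quickly parameters grow: a 12-layer, 768-wide model lands near a hundred million parameters, while a 96-layer, 12,288-wide model reaches the hundreds of billions – the regime the LLMs above occupy.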
One of the interesting aspects of scale is emergent abilities, where larger models succeed at accomplishing tasks that were impossible with smaller ones. This phenomenon has been especially intriguing in LLMs, where models show promising results on a wider range of tasks and benchmarks as they grow in size.
It is worth noting, however, that some of deep learning’s fundamental problems remain unsolved, even in the largest models (more on this in a bit).
2. Unsupervised learning continues to deliver
Many successful deep learning applications require humans to label trai …