How 2022 became the year of generative AI

Nov 11, 2022 | Technology


There has been a lot of excitement (and hype) surrounding generative AI (artificial intelligence) in 2022. Social media platforms such as Twitter and Reddit are filled with images created by generative machine learning models such as DALL-E and Stable Diffusion. Startups building products on top of generative models are attracting funding despite the market downturn. And Big Tech companies are integrating generative models into their mainstream products.

Generative AI is not new. With a few notable exceptions, most of the technologies we’re seeing today have existed for several years. However, the convergence of several trends has made it possible to productize generative models and bring them to everyday applications. The field still has many challenges to overcome, but there is little doubt that the market for generative AI is bound to grow in 2023.

Scientific improvements in generative AI

Generative AI became popular in 2014 with the advent of generative adversarial networks (GANs), a deep learning architecture that can create realistic images, such as human faces, from random noise. Scientists later developed GAN variants for other tasks, such as transferring the style of one image to another. GANs, along with variational autoencoders (VAEs), another deep learning architecture, later ushered in the era of deepfakes, an AI technique that modifies images and videos to swap one person's face for another.
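To make the two-network idea concrete, here is a minimal sketch of a GAN's components: a generator that maps noise vectors to images and a discriminator that scores images as real or fake. It is written in PyTorch with illustrative layer sizes; the framework, dimensions and fully connected layers are assumptions for readability, not details from the article (production GANs typically use convolutional networks and an adversarial training loop).

    # Minimal GAN sketch (PyTorch assumed; layer sizes illustrative).
    import torch
    import torch.nn as nn

    LATENT_DIM = 100      # size of the input noise vector
    IMG_SIZE = 28 * 28    # flattened 28x28 grayscale image

    class Generator(nn.Module):
        """Maps a random noise vector to a synthetic image."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(LATENT_DIM, 256),
                nn.ReLU(),
                nn.Linear(256, IMG_SIZE),
                nn.Tanh(),  # pixel values in [-1, 1]
            )

        def forward(self, z):
            return self.net(z)

    class Discriminator(nn.Module):
        """Scores an image as real (from data) or fake (from the generator)."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(IMG_SIZE, 256),
                nn.LeakyReLU(0.2),
                nn.Linear(256, 1),  # real-vs-fake logit
            )

        def forward(self, img):
            return self.net(img)

    # Sample noise and generate a batch of fake images.
    generator = Generator()
    z = torch.randn(16, LATENT_DIM)   # the "noise" the article mentions
    fake_images = generator(z)
    print(fake_images.shape)          # torch.Size([16, 784])

During training, the two networks compete: the discriminator learns to separate real images from generated ones, while the generator learns to fool it, which is what pushes the outputs toward realism.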

2017 saw the advent of the transformer, the deep learning architecture underlying large language models (LLMs) such as GPT-3, LaMDA and Gopher. Transformers are used to generate text, software code and even protein structures. A variant of the architecture, the vision transformer, handles visual tasks such as image classification. An earlier version of OpenAI's DALL-E used the transformer to generate images from text.
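As a hedged illustration of transformer-based text generation, the snippet below uses the Hugging Face transformers library with GPT-2, an older, openly downloadable transformer language model; the models the article names (GPT-3, LaMDA, Gopher) are not publicly available, and the prompt and generation settings here are arbitrary examples.

    # Text generation with a transformer LLM (GPT-2 used as a stand-in
    # for the larger, non-public models named in the article).
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")
    prompt = "Generative AI became popular in"
    outputs = generator(prompt, max_new_tokens=30, num_return_sequences=1)
    print(outputs[0]["generated_text"])

The same autoregressive mechanism, predicting the next token given everything before it, is what lets transformers produce prose, source code or amino-acid sequences depending on what they are trained on.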

Transformers are scalable, which means their performance and accuracy improve as they are made larger and fed …
