4 AI research trends everyone is (or will be) talking about

Jun 28, 2022 | Technology


Using AI in the real world remains challenging in many ways. Organizations are struggling to attract and retain talent, build and deploy AI models, define and apply responsible AI practices, and understand and prepare for regulatory framework compliance.

At the same time, the DeepMinds, Googles and Metas of the world are pushing ahead with their AI research. Their talent pool, experience and processes for operationalizing AI research rapidly and at scale put them on a different level from the rest of the world, creating a de facto AI divide.

Here are four AI research trends that the tech giants are leading on, but that everyone else will be talking about and using in the near future.

Emergent abilities of large language models in AI research

One of the key talking points regarding the way forward in AI is whether scaling up can lead to substantially different qualities in models. Recent work by a group of researchers from Google Research, Stanford University, UNC Chapel Hill and DeepMind says it can.

Their research discusses what they refer to as emergent abilities of large language models (LLMs). An ability is considered emergent if it is not present in smaller models but is present in larger models. The thesis is that the existence of such emergence implies that additional scaling could further expand the range of capabilities of language models.
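As an illustrative sketch (not taken from the paper), that definition can be operationalized as performance sitting near chance for smaller models and then jumping well above it at the largest scale. The function name, thresholds and accuracy figures below are all hypothetical:

```python
# Toy illustration of the "emergent ability" definition: performance is
# near random chance for smaller models, then rises sharply with scale.
# All thresholds and data points are made up for illustration.

def is_emergent(scores, chance=0.25, margin=0.05, gain=0.20):
    """scores: list of (model_scale, accuracy) pairs.
    Returns True if all but the largest model perform near chance
    while the largest model clearly exceeds it."""
    scores = sorted(scores)
    smaller = [acc for _, acc in scores[:-1]]
    largest_acc = scores[-1][1]
    near_chance = all(acc <= chance + margin for acc in smaller)
    return near_chance and largest_acc >= chance + gain

# Hypothetical accuracies on a 4-way multiple-choice task as scale grows:
curve = [(1e8, 0.25), (1e9, 0.27), (1e10, 0.26), (1e11, 0.62)]
print(is_emergent(curve))  # True: flat near 25% chance, then a jump
```

A smoothly improving curve (e.g., 25% → 40% → 55% → 62%) would return False under this sketch, which is the intended contrast with emergence.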

The work evaluates emergent abilities in Google’s LaMDA and PaLM, OpenAI’s GPT-3 and DeepMind’s Gopher and Chinchilla. In terms of the “large” in LLMs, it is noted that today’s language models have been scaled primarily along three factors: amount of computation (in FLOPs), number of model parameters, and training dataset size.
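For context on the compute axis, a common rule of thumb from the scaling-law literature (an assumption here, not stated in this article) approximates training compute as roughly 6 × N × D FLOPs for N parameters and D training tokens:

```python
# Rough rule of thumb: training compute C ≈ 6 * N * D FLOPs,
# where N = parameter count and D = number of training tokens.
# Treat the result as an order-of-magnitude estimate only.

def approx_training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

# e.g., a hypothetical 70B-parameter model trained on 1.4T tokens:
flops = approx_training_flops(70e9, 1.4e12)
print(f"{flops:.1e}")  # ~5.9e+23 FLOPs
```

This back-of-the-envelope estimate is one way to compare models along the compute dimension even when the labs report only parameter and token counts.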

