Why Meta’s large language model does not work for researchers

Aug 28, 2022 | Technology


When Alan Turing devised the Turing Test in 1950, it was a test of a machine’s ability to exhibit intelligent behavior indistinguishable from that of a human. Turing proposed that a computer could be said to possess artificial intelligence (AI) if it could generate human-like responses to questions.

Thanks to large language models, we’re now at the point where computers can write text on just about any subject we give them — and for the most part, it’s very convincing and human-like.

Ask one of these models to write a response to “Why does Elon Musk like to knit?” and what it outputs is arguably as good as what any human could write:

Some possible reasons why Elon Musk might enjoy knitting could include the fact that it is a relaxing and meditative activity that can help to clear one’s mind, and it also allows for a great deal of creativity and self-expression.
Additionally, knitting can be a very social activity, and Elon Musk may enjoy the opportunity to chat and connect with other knitters.

[Source: OpenAI Playground using text-davinci-002 model]

Summarizing complex text

Examples like this are fun, but the bigger value proposition of large language models is less about writing wacky prose and more about summarizing complex text. These use cases are exciting across industries: AI can distill information about potential prospects for sales intelligence, or condense lengthy investment documents in finance.
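As a rough illustration of the summarization use case, the sketch below shows how a request to an LLM completions API (such as the OpenAI Completions endpoint mentioned above) might be assembled. The function names, prompt wording, and parameter values are illustrative assumptions, not code from the article:

```python
# A minimal sketch (not the article's code) of building a summarization
# request for an LLM completions API. Prompt wording and parameters are
# illustrative assumptions.

def build_summary_prompt(document: str, max_chars: int = 4000) -> str:
    """Truncate the source text and wrap it in a summarization instruction."""
    snippet = document[:max_chars]  # crude truncation to respect context limits
    return f"Summarize the following text in three sentences:\n\n{snippet}\n\nSummary:"

def build_completion_request(document: str) -> dict:
    """Assemble a request payload for a completions-style endpoint."""
    return {
        "model": "text-davinci-002",            # the model named in the article
        "prompt": build_summary_prompt(document),
        "max_tokens": 150,                       # cap the summary length
        "temperature": 0.3,                      # low temperature for more faithful output
    }
```

In practice this payload would be sent to the provider’s API with an authenticated client; the key design choice here is keeping prompt construction separate from request assembly, so the instruction text can be tuned independently.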

