Generative AI language models like OpenAI's GPT-2 produce impressively coherent and grammatical text, but controlling the attributes of this text, such as the topic or sentiment, requires modifying the architecture or tailoring the model to specific data. That's why a team of scientists at Uber, Caltech, and the Hong Kong University of Science and Technology devised what they call the Plug and Play Language Model (PPLM), which combines a pretrained language model with one or more attribute classifiers that guide novel text generation.

Preliminary results in a paper show that PPLM is able to control a "range" of topics and sentiment styles, importantly without sacrificing fluency, and while retaining the flexibility to steer text generation with any combination of differentiable attribute models.
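The core idea is simple to sketch: at each decoding step, gradients from an attribute model, taken with respect to the frozen language model's activations, nudge generation toward the desired attribute. Below is a minimal, illustrative sketch of that idea in Python, assuming the Hugging Face transformers library and GPT-2. The untrained linear `attr_clf` is a stand-in for a real trained attribute discriminator, and `steer_step` with its single gradient step is our simplification; the actual PPLM method perturbs the model's key-value history over several steps and adds terms to preserve fluency.

```python
# Illustrative sketch only: steer a frozen GPT-2 by perturbing its last
# hidden state along the gradient of a (here, untrained stand-in) attribute
# classifier before decoding each token.
import torch
import torch.nn.functional as F
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tok = GPT2Tokenizer.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

# Stand-in attribute classifier p(a | x) over GPT-2's hidden size.
# In a real setup this would be a trained discriminator or bag-of-words model.
attr_clf = torch.nn.Linear(lm.config.n_embd, 2)
TARGET_ATTR = 1   # index of the desired attribute class (hypothetical)
STEP_SIZE = 0.02  # how hard to push the hidden state per token

def steer_step(input_ids):
    """Generate one token, shifting the hidden state toward TARGET_ATTR."""
    out = lm(input_ids, output_hidden_states=True)
    # Last layer's hidden state for the final position, made a leaf tensor
    # so we can take a gradient with respect to it.
    h = out.hidden_states[-1][:, -1, :].detach().requires_grad_(True)
    # Gradient of the attribute loss w.r.t. the hidden state.
    loss = F.cross_entropy(attr_clf(h), torch.tensor([TARGET_ATTR]))
    loss.backward()
    h_pert = h - STEP_SIZE * h.grad      # step toward higher p(a | x)
    logits = lm.lm_head(h_pert)          # re-decode from the shifted state
    next_id = torch.argmax(logits, dim=-1, keepdim=True)
    return torch.cat([input_ids, next_id], dim=1)

ids = tok("The food at the restaurant was", return_tensors="pt").input_ids
for _ in range(15):
    ids = steer_step(ids)
print(tok.decode(ids[0]))
```

Because the attribute model only needs to be differentiable, classifiers can be swapped in or combined without retraining the underlying language model, which is what makes the approach "plug and play."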

Their research builds on work published by Google and the University of Michigan late last year.
