Generative AI models have a propensity for learning complex data distributions, which is why they’re great at producing human-like speech and convincing images. But training these models requires lots of labeled data, and depending on the task at hand, the necessary corpora are sometimes in short supply.

The solution might lie in an approach proposed by researchers at Google and ETH Zurich. In a paper published on the preprint server Arxiv.org, they describe a “semantic extractor” that can pull out features from training data, along with methods of inferring labels for an entire training set from a small subset of labeled examples.
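At a high level, that recipe (extract semantic features without labels, fit a small classifier on the few labeled examples, then use it to pseudo-label the rest) can be sketched in a few lines of Python. The snippet below is a minimal illustration rather than the researchers’ implementation: the random projection stands in for a learned self-supervised extractor, and the synthetic data, shapes, and logistic-regression labeler are all assumptions made for demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in "semantic extractor": in practice this would be a network
# trained without labels; here a fixed random projection maps flattened
# 32x32x3 images (3072 values) to 64-dimensional features.
projection = rng.normal(size=(3072, 64))

def extract_features(images):
    # images: (n, 3072) array of flattened pixels -> (n, 64) features
    return images @ projection

# Synthetic stand-ins for a small labeled subset and a large unlabeled pool
labeled_images = rng.normal(size=(100, 3072))
labeled_targets = rng.integers(0, 10, size=100)
unlabeled_images = rng.normal(size=(5000, 3072))

# Fit a cheap classifier on features of the small labeled subset...
clf = LogisticRegression(max_iter=1000)
clf.fit(extract_features(labeled_images), labeled_targets)

# ...then infer labels for the rest of the training set automatically
inferred_labels = clf.predict(extract_features(unlabeled_images))
print(inferred_labels[:10])
```

The appeal of this setup is that expensive human annotation is confined to the small seed set; labels for the remainder of the corpus are inferred automatically from the extracted features.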
