In a paper published this week on the preprint server Arxiv.org, researchers from Google and the University of California, Berkeley demonstrate that even the best forensic classifiers (AI systems trained to distinguish between real and synthetic content) are susceptible to adversarial attacks, that is, attacks that use inputs designed to cause a model to make mistakes. Their work follows that of a team of researchers at the University of California, San Diego, who recently demonstrated that it's possible to fool video detectors by adversarially modifying videos synthesized with existing AI generation methods, specifically by injecting information into each frame.
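Neither paper's code is reproduced here, but the general idea behind this class of attack can be sketched with a standard gradient-sign (FGSM-style) perturbation against a binary real/fake classifier. The `classifier` model, the `epsilon` budget, and the tensor shapes below are illustrative assumptions rather than the authors' actual setup.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(classifier, image, true_label, epsilon=0.01):
    """One-step FGSM sketch: nudge each pixel in the direction that
    increases the classifier's loss, so a synthetic image that was
    correctly flagged as 'fake' gets pushed toward a 'real' score."""
    image = image.clone().detach().requires_grad_(True)
    logits = classifier(image)                  # e.g. 2-way real/fake logits
    loss = F.cross_entropy(logits, true_label)  # loss w.r.t. the correct label
    loss.backward()
    # Perturb each pixel by epsilon in the sign of the gradient,
    # then clamp back to the valid pixel range.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```

Applied frame by frame to a synthesized video, perturbations of this kind are typically imperceptible to humans yet enough to flip a detector's verdict, which is what makes the attacks described in both papers so concerning.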

It’s a troubling, if not necessarily new, development for organizations attempting to productize fake media detectors, particularly considering the rise in synthetic content online. Fake media might be used to sway public opinion or to implicate a person in a crime they did not commit.
