
Deepfakes — media in which a person in an existing image, audio recording, or video is replaced with someone else’s likeness using AI — are multiplying quickly. That’s troubling not only because these fakes might be used to sway opinions during an election or implicate a person in a crime, but because they’ve already been abused to generate pornographic material of actors and to defraud a major energy company.

In anticipation of this new reality, a coalition of academic institutions, tech firms, and nonprofits is developing ways to spot misleading AI-generated media. Their work suggests that detection tools are a viable short-term solution, but that the deepfake arms race is just beginning.
