Increasing prevalence of deepfakes concerns security experts
Deepfakes are democratising CGI processes, with often humorous results. But these Artificial Intelligence-generated videos have the potential to be weaponised in ways that could threaten national security.
Deepfakes are synthetic media created by Artificial Intelligence: they make it look as if real people said and did things they never actually did. 'For deepfake you start with the source data, and by source data I mean pictures and videos of the character you want to deepfake. In my case it's Tom Cruise. Around 6,700 stored images of all his angles, of all his expressions', says Chris Ume, whose deepfakes have gone viral.

Though the emerging technology can make amusing videos, security experts are concerned that it has a sinister side. 'In the national security context, there's no end to the nightmare scenario. ... [For example] a deepfake of the president, say, announcing a missile strike against North Korea', says one former CIA officer.

Beyond the speculation about national security, personal attacks - such as fake revenge porn - are also a risk. 'Deepfake porn is still, without a doubt, the most pernicious case of malicious deepfakes. But rather than this being a tawdry women's issue, I see this as a harbinger of what's to come', says tech and politics expert Nina Schick.