HOW DANGEROUS ARE DEEPFAKES IN REALITY?

Dissemination

It is difficult to precisely quantify the dissemination of deepfakes, especially since their number is undoubtedly steadily growing. Deeptrace, a company that offers a technological solution to detect deepfakes, attempted to give a precise estimate in their report: The State of Deepfakes: Landscape, Threats, and Impact. 7) Published in September 2019, the report estimates that the number of deepfakes almost doubled in seven months from 7,964 in December 2018 to 14,678 in July 2019. Of these deepfakes, 96% were non-consensual pornographic content that exclusively depicted the female body.

The primary victims were prominent women, for whom thousands of fake pictures can be found online. According to the Deeptrace report, the four most popular deep porn websites alone registered more than 134 million views of fake videos of female celebrities. But many private individuals are also affected by the phenomenon of revenge pornography mentioned above. The increase is driven primarily by greater accessibility to tools and services that allow deepfakes to be created without any knowledge of programming.

In 2019, there were also reports of AI-generated voice clones being used for social engineering. In August, The Wall Street Journal reported 8) on the first case of AI-based voice fraud – also known as vishing (short for “voice phishing”) – at a cost of €220,000 for the German company that was targeted.

The software imitated the voice of the German manager so successfully, including his intonations and slight German accent, that his British colleague immediately complied with the caller’s urgent request to transfer the stated amount. Although this is currently an isolated incident, it seems likely that there will be more such attempts in the future.

A significant part of the media coverage of deepfakes has focused on their potential to discredit political opponents and undermine democratic processes. So far, this potential has not materialised. Although there have been technically manipulated videos of politicians such as Barack Obama, Donald Trump and Matteo Renzi, they were motivated primarily by satire or created for demonstration purposes, and their falseness was quickly disclosed.

Consequences

However, the fact that politicians have not yet used deepfakes for disinformation does not mean that deepfakes have not already influenced the political discourse. One example that received little attention in the Western media demonstrates how the simple knowledge of the existence of deepfakes can affect the political climate.

The president of Gabon, Ali Bongo, did not appear in public for months after experiencing a stroke. Unsurprisingly, rumours began spreading that the president had passed away. To quash the speculation, the president published a video in December 2018 to give his usual New Year’s speech. But the recording had the opposite effect. Many people thought that Bongo looked strange and immediately suspected that the video was fake. Shortly afterwards, the military launched a failed coup, citing the supposed deepfake as one of their motives. 9)

However, subsequent forensic analysis confirmed that the recording was authentic. Ali Bongo has since recovered from his stroke and remains in office. This shows that the biggest threat posed by deepfakes isn’t the deepfakes themselves. The mere fact that such videos are technically possible raises the question: Can we still trust the authenticity of videos?

This question will cast a shadow over the 2020 US presidential elections. In the 2016 election campaign, AI-supported disinformation and manipulation, most prominently in the form of microtargeting and bots, had already begun to play a role. Deepfakes now represent another instrument in the arsenal of disinformation. Even if few or no deepfakes are actually used in the election campaign, it is likely that many politicians will gratefully accept the opportunity to shrug off real but unfavourable recordings as forgeries.

Are there any examples of positive applications of deepfakes?

“Technology is continually giving us ways to do harm and to do well; it’s amplifying both. [...] But the fact that we also have a new choice every time is a new good,” 10) says Kevin Kelly, the long-standing editor-in-chief and founding member of the technology magazine Wired. Might this statement also apply to deepfakes?

The technology is especially promising for the film industry, particularly in post-production and dubbing. Why? Currently, modifying a piece of dialogue retroactively is very expensive for film studios. The actors, film crew and film set need to be rebooked. The technology behind deepfakes could allow these types of changes to be made quickly and at a fraction of the cost.

Significant improvements could also be made to film dubbing. Deepfake techniques could adapt the actors’ lip movements to the dubbed words, or even synthesise the actors’ own voices in the target language, making traditional dubbing unnecessary altogether.

One example of such an application is a video by David Beckham promoting a campaign against malaria. 11) He “speaks” in several languages – and his mouth appears to synchronise perfectly with the words in each case.

Education is another interesting area of application: videos of historical figures could, for example, be created to tell their story or answer questions. The project “Dimensions of History” 12) by the Shoah Foundation of the University of Southern California attracted a lot of media attention, featuring interviews and holographic recordings of 15 Holocaust survivors. This travelling exhibition was displayed in various museums throughout the US and was most recently hosted by the Swedish Museum of History. Visitors to the exhibition were given the opportunity to ask the holograms questions. The speech recognition software then matched their question with a segment of the interview. With deepfake technology, this could be implemented on a larger scale, in multiple languages.
