HOW CAN WE FACE THE CHALLENGES ASSOCIATED WITH DEEPFAKES?
These positive examples are of course not intended to minimise the potential dangers posed by deepfakes. The risks are undisputed and require decisive countermeasures – on this, there is a consensus. There is less agreement, however, on what exactly those countermeasures should look like. The question also arises of how to guarantee individuals’ right to freedom of expression without undermining society’s need for a reliable information system.
Technological solutions for identifying and combating deepfakes
One approach to combating deepfakes is to develop technologies capable of distinguishing fake content from real content. This approach uses algorithms similar to those that generated the fakes in the first place. Using GLTR, a tool based on the GPT-2 system mentioned above, researchers from the MIT-IBM Watson AI Lab and HarvardNLP investigated whether the same technology that writes fabricated articles on its own can also be used to recognise text passages generated by AI. When a text passage is entered into the test application, its words are highlighted in green, yellow, red or purple to indicate their predictability, in decreasing order. The higher the proportion of words with low predictability – namely sections marked in red and purple – the greater the likelihood that the passage was written by a human author. The more predictable the words (and the “greener” the text), the more likely it is that the text was automatically generated.

Similar techniques could be used to expose manipulated videos. In 2018, researchers observed that the actors in deepfake videos didn’t blink, because the static images used to generate the videos primarily showed people with their eyes open. But the usefulness of this observation was short-lived: as soon as it became public, videos with blinking people began to appear. A similar fate can be expected for any other identification mechanism discovered in the future. This cat-and-mouse game has been underway in the cybersecurity field for decades – progress always benefits both sides.
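The intuition behind GLTR can be illustrated with a toy model. The sketch below is purely illustrative: it uses a simple bigram word counter in place of GPT-2, and the colour thresholds are invented for this example (GLTR itself buckets each word by its rank among GPT-2’s top 10, 100 and 1,000 predictions).

```python
from collections import defaultdict

def train_bigram_model(corpus):
    """Count word-to-next-word transitions in a toy training corpus."""
    counts = defaultdict(lambda: defaultdict(int))
    tokens = corpus.lower().split()
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predictability_ranks(model, passage):
    """For each word (except the first, which has no preceding context),
    find its rank among the model's predictions for the previous word.
    Rank 0 = most expected; None = the model never saw this transition."""
    tokens = passage.lower().split()
    ranks = []
    for prev, actual in zip(tokens, tokens[1:]):
        candidates = sorted(model[prev], key=model[prev].get, reverse=True)
        ranks.append(candidates.index(actual) if actual in candidates else None)
    return ranks

def colour(rank):
    """Bucket a rank into GLTR-style colours. Thresholds here are toy
    values; GLTR uses top-10/100/1000 ranks under GPT-2."""
    if rank is None:
        return "purple"
    if rank < 1:
        return "green"
    if rank < 3:
        return "yellow"
    if rank < 10:
        return "red"
    return "purple"
```

A passage whose words are almost all “green” under such a model looks machine-predictable; a passage full of “red” and “purple” surprises looks more human.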
But this doesn’t mean that efforts to identify deepfakes should be discontinued. In September 2019, Facebook – in collaboration with the PAI initiative 13), Microsoft and several universities – announced a “Deepfake Detection Challenge” 14) endowed with a $10 million prize.
Facebook also commissioned a dataset of images and videos featuring actors recorded specifically for this purpose, so that the challenge would have adequate data to work with. A few weeks later, Google released a dataset of 3,000 manipulated videos with the same goal. The US research funding agency DARPA has likewise been working on recognising manipulated content since 2016 as part of the MediFor programme (short for Media Forensics), investing more than $68 million over two years.15)

Little information is available on whether – and if so, what type of – technical solutions to combat deepfakes are being developed in Germany and Europe. Most measures are being undertaken by individual companies, such as the aforementioned Deeptrace, and by research projects like Face2Face by Matthias Nießner 16), a professor at the Technical University of Munich. According to the German government’s response to a parliamentary question submitted by the FDP parliamentary group, the “National Research Centre for Applied Cybersecurity” CRISP/ATHENE is currently working on this issue together with the Technical University of Munich and the Fraunhofer Institute.
In addition, the German international broadcaster Deutsche Welle (DW), the Fraunhofer Institute for Digital Media Technology (IDMT) and the Athens Technology Centre (ATC) have initiated the joint research project “Digger”. The goal of this project is to expand the web-based verification platform “Truly Media”, developed by DW and the ATC, with audio forensic technology from the Fraunhofer IDMT, among other things, in order to assist journalists.17) However, the government’s response suggests neither a concrete strategy nor any intention on the part of the federal government to invest in this topic.
Self-regulation attempts by social media platforms
Although big tech companies have contributed data and financial resources towards a technological solution to this problem, calls for Facebook and similar companies to take additional measures have been intensifying, since their platforms play a key role in the spread of disinformation. In response, Twitter and Facebook released statements about their plans to address deepfakes in late 2019 and early 2020, respectively.
In November 2019, Twitter asked its users for feedback on a “policy proposal for synthetic and manipulated media”. Guidelines were then announced at the beginning of February 2020: any photo, audio or video that has been “significantly altered or falsified” with the goal of misleading people will be removed if Twitter believes it may cause serious harm – for example, by endangering the physical safety of individuals or prompting “widespread civil unrest”. Tweets that fall short of this threshold may instead be labelled as manipulated media, with a warning shown when the content is shared and the content deprioritised in user feeds. These changes are to take effect on 5 March 2020.18)
Facebook is going one step further. On 6 January 2020, Monika Bickert, Facebook’s Vice President of Global Policy Management, announced in a blog post that deepfakes meeting certain criteria would henceforth be deleted from the platform.19) According to the post, any content modified or synthesised using AI in such a way that it appears authentic to the average person will be deleted. Satirical content, however, is exempt from these guidelines, which leaves significant room for interpretation.
Interestingly, the guidelines do not apply to cheapfakes; they explicitly and exclusively target AI-generated content. Accordingly, the fake video of Nancy Pelosi mentioned earlier continues to be available on Facebook.20)
Although Facebook admitted that its fact-checkers had flagged the video as fake, it declined to delete it because the company “does not enforce a policy that requires information posted on Facebook to be truthful”.21)
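As a thought experiment, the criteria described above can be sketched as a simple decision function. This is emphatically not Facebook’s actual moderation logic – the field names and rules below are hypothetical simplifications of the criteria as summarised in the text.

```python
from dataclasses import dataclass

@dataclass
class MediaItem:
    # Hypothetical fields mirroring the criteria described in the text;
    # real moderation systems are of course not simple boolean checks.
    ai_generated: bool       # created/modified with AI ("deepfake") vs. manual edit ("cheapfake")
    appears_authentic: bool  # would mislead an average person
    is_satire: bool          # satire/parody is explicitly exempt
    flagged_false: bool      # fact-checkers rated the content false

def moderation_action(item: MediaItem) -> str:
    """Sketch of the January 2020 policy as summarised above: delete
    AI-manipulated content that appears authentic (unless satire);
    merely label fact-checked fakes that fall outside the policy."""
    if item.ai_generated and item.appears_authentic and not item.is_satire:
        return "remove"
    if item.flagged_false:
        return "label"  # e.g. the Pelosi cheapfake: flagged, but not removed
    return "keep"
```

The sketch makes the gap in the policy visible: a manually slowed-down video fails the `ai_generated` test and is only labelled, never removed.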
This approach reflects Facebook’s position on freedom of expression and goes beyond the issue of deepfakes. In the debate on political advertising, Rob Leathern, the Director of Product Management at Facebook, wrote in a blog post in January 2020 that these types of decisions should not be made by private companies, “which is why we advocate regulation that applies to the entire industry. In the absence of regulation, Facebook and other companies are free to choose their own policies”.
It is certainly worth discussing whether Facebook’s interpretation of freedom of expression has merit from an ethical perspective. However, Rob Leathern’s statement draws attention to a specific question – namely the lack of, or at least incompleteness of, regulation.
Regulation attempts by legislators
In Germany, deepfakes fall under “general and abstract rules” according to the response by the federal government to the brief parliamentary enquiry submitted by the FDP parliamentary group, as mentioned above. “There are no specific regulations at the federal level that exclusively cover deepfake applications or were created for such applications. The federal government is constantly reviewing the legal framework at the federal level to determine whether any adjustment is necessary to address technological or social challenges.”
This means that some partial aspects of the deepfake issue, including revenge pornography, are supposedly implicitly covered by existing laws, but there is in fact no explicit approach to handling manipulated content. This applies to the entire spectrum of disinformation in digital space, not just the special case of “deepfakes”. As noted by the author of the study “Regulatory responses to disinformation” 22) from Stiftung Neue Verantwortung: “previous attempts at regulation and political solutions [in Germany and Europe] are hardly suitable to curb disinformation.”
A study by the law firm WilmerHale, “Deepfake Legislation: A Nationwide Survey”,23) gives a detailed analysis of the status of deepfake regulation in the US.
In the United States, explicit pieces of legislation on deepfakes have already been written into criminal law – for example in Virginia, where non-consensual deepfake pornography is punishable, and in Texas, where any deepfakes intended to influence voters are punishable. Similar legislation was also passed in California in September 2019.
Possibly the most in-depth regulation of deepfakes was undertaken by Chinese legislators in late 2019. Chinese law requires the providers and users of online audio and video information services to clearly mark all content that was created or modified using new technologies such as artificial intelligence.
Although it is certainly worth considering whether similar regulations could also be adopted by other countries, the case of China leaves a bad aftertaste: the Chinese government itself uses technology-based disinformation to target protesters in Hong Kong, among other things, and it seems inevitable that these new regulations will be used as a pretext for further censorship.
Building up media competence
Effectively regulating new technological phenomena is certainly not easy, and it has often proved difficult in the past. To drive a car in 19th-century England, for example, the Locomotive Act of 1865 required a second person to walk in front of the vehicle waving a red flag. 24) Nevertheless, there are measures that legislators can already take to counteract the phenomenon of deepfakes. Since 96% of deepfakes are currently non-consensual pornography, explicitly making this punishable – as Virginia and California have done – would be a good start. Defamation, fraud and violations of privacy rights can be handled similarly. Furthermore, legislators should create clear guidelines so that digital platforms handle deepfakes in particular, and disinformation in general, in a uniform manner.
These measures can range from labelling deepfakes as such and limiting their distribution (excluding them from recommendation algorithms) to deleting them. Promoting media literacy should also be made a priority for all citizens, regardless of age. An adequate understanding of how deepfakes are created and disseminated should enable citizens to recognise disinformation and avoid being misled.
The responsibility of the individual: critical thinking and media literacy
Critical thinking and media literacy are the basis for a differentiated approach to disinformation. It is certainly not possible and likely not desirable to ask every single person to question everything they see.
But more than ever before, people would be well advised to consume online content with caution. The simplest thing that anyone can do if an image, video or text seems suspicious is a Google search. Often, this will quickly unmask manipulated content, since the details of the manipulation circulate just as quickly as the content itself.
This is especially important for users who wish to engage with the content by liking it, sharing it or commenting on it. We can also pay more attention to whether the blinking, facial expressions or speech in a video appear unnatural, whether parts of an image are blurred, or whether objects seem out of place.
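The blur cue has a classic, if crude, computational counterpart: sharp regions produce high-variance responses under a Laplacian filter, while blurred or smoothed-over regions do not. The sketch below is illustrative only – the threshold is arbitrary, and real image-forensics tools are far more sophisticated.

```python
def laplacian_variance(image):
    """Variance of a 4-neighbour Laplacian over a grayscale image,
    given as a list of rows of pixel intensities. Low variance
    suggests a blurred (possibly retouched) region."""
    h, w = len(image), len(image[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (image[y - 1][x] + image[y + 1][x]
                   + image[y][x - 1] + image[y][x + 1]
                   - 4 * image[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

def looks_blurred(image, threshold=10.0):
    """Flag an image region as suspiciously smooth. The threshold is
    an illustrative placeholder, not a calibrated forensic value."""
    return laplacian_variance(image) < threshold
```

A uniform patch yields zero variance and is flagged, while a high-contrast patch passes – which is exactly why, as noted below, such simple cues stop working as generators improve.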
However, these clues will quickly disappear as deepfake technology advances. In the future, there could conceivably be browser add-ons that automatically identify manipulated content and notify users, similar to an ad blocker. But this requires us to be aware of the possibility of manipulated content in the first place. To raise this kind of awareness among its citizens, Finland, the country that was ranked the highest in a study measuring resilience to disinformation,25) offers educational opportunities to its entire population – from kindergarten to retirement age.