Applications of Artificial Intelligence (AI) are playing an increasing role in our society – but the new possibilities of this technology come hand in hand with new risks. One such risk is the misuse of the technology to deliberately disseminate false information. Although the politically motivated spread of disinformation is certainly not a new phenomenon, technological progress has made the creation and distribution of manipulated content easier and more efficient than ever before. With AI algorithms, videos can now be falsified (“deepfakes”) quickly, relatively cheaply and without any specialised knowledge.

The discourse on this topic has primarily focused on the potential use of deepfakes in election campaigns, but this type of video makes up only a small fraction of all such manipulations: in 96% of cases, deepfakes were used to create pornographic films featuring prominent women. Women outside the public sphere may also find themselves the involuntary stars of such manipulated videos (deepfake revenge pornography). Additionally, applications such as DeepNude allow static images to be converted into deceptively realistic nude images. Unsurprisingly, these applications only work with images of female bodies. But visual content is not the only type of content that can be manipulated or produced algorithmically. AI-generated voices have already been used successfully to commit fraud, resulting in substantial financial losses, and GPT-2 can generate texts that invent arbitrary facts and citations.

What is the best way to tackle these challenges?

Companies and research institutes have already invested heavily in technological solutions for identifying AI-generated videos. The benefit of these investments is typically short-lived: deepfake developers respond to detection technologies with more sophisticated methods – a classic example of an arms race. For this reason, platforms that distribute manipulated content must be held more accountable. Facebook and Twitter have now imposed their own rules for handling manipulated content, but these rules are not uniform, and it is not desirable to leave it to private companies to define what “freedom of expression” entails.

The German federal government is clearly unprepared for the issue of AI-manipulated content being used for disinformation, as shown by its answers to a brief parliamentary inquiry submitted by the FDP parliamentary group in December 2019. There is no clearly defined responsibility within the government for the issue and no specific legislation; so far, only “general and abstract rules” have been applied. The federal government's replies suggest neither a concrete strategy nor any intention to invest in order to be better equipped to deal with this issue. In general, the existing regulatory attempts at the German and European level do not appear sufficient to curb the problem of AI-based disinformation. But this does not have to remain the case: some US states have already passed laws against both non-consensual deepfake pornography and the use of this technology to influence voters.

Accordingly, legislators should create clear guidelines for digital platforms to handle deepfakes in particular, and disinformation in general, in a uniform manner. Measures can range from labelling manipulated content as such and limiting its distribution (excluding it from recommendation algorithms) to deleting it. Promoting media literacy should also be made a priority for all citizens, regardless of age. It is important to raise awareness of the existence of deepfakes among the general public and develop the ability of individuals to analyse audiovisual content – even though it is becoming increasingly difficult to identify fakes. In this regard, it is well worth taking note of the approach taken by the Nordic countries, especially Finland, whose population was found to be the most resilient to disinformation.

Still, there is one thing we should not do: give in to the temptation to ban deepfakes completely. Despite their risks, deepfakes – like any technology – also open up a wealth of interesting possibilities, including for education, film and satire.