The digital and analog worlds are becoming increasingly blurred. With the rapid development of artificial intelligence and machine learning, the information we receive is becoming increasingly complex. Yet we are finding that what humans do effortlessly, such as recognizing a face, first has to be taught to an artificial intelligence. This so-called deep learning is a complex process. Put simply, the algorithm breaks the complex structure of an object down into individual, hierarchically organized concepts. This is how the machine "learns" to recognize and interpret complex structures, and even to manipulate them. As a result, the line between real and fake news becomes fluid. Today, for example, we encounter image manipulation in the form of the deepfake. The term, borrowed from "deep learning," denotes a deep form of identity fraud: with the help of state-of-the-art AI-supported software, it is possible to fake images, soundtracks, and even entire videos. Deceptively real.
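The hierarchical decomposition described above can be illustrated with a toy example: a tiny two-layer network that recognizes the "complex" pattern XOR (exactly one input is on) by composing two simpler sub-concepts, OR and AND. The weights here are set by hand purely for illustration; a real deep network would learn such weights, and far deeper hierarchies, from data.

```python
def step(x: float) -> int:
    """Threshold activation: the unit fires (1) when its weighted input is positive."""
    return 1 if x > 0 else 0

def xor_net(x1: int, x2: int) -> int:
    # Layer 1: two simple "concepts" extracted from the raw inputs.
    h_or = step(x1 + x2 - 0.5)    # fires if at least one input is on
    h_and = step(x1 + x2 - 1.5)   # fires only if both inputs are on
    # Layer 2: the complex concept XOR, composed from the simpler ones:
    # "at least one input, but not both".
    return step(h_or - h_and - 0.5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

The same principle, stacked over many more layers, lets a network build edges into shapes and shapes into faces, which is exactly the capability that deepfake software exploits.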
Media manipulation through deepfake
At first, using AI-based systems to create new identities and stories sounds quite exciting and entertaining. A bit like The Sims back in the day, only much more realistic and with better graphics. On the Internet you can find sites that let you generate entirely invented faces. Why not give them a story as well? Yet this is exactly where the boundary between reality and fiction blurs.
This form of artificial intelligence also shows us how easily our media can be manipulated, and how hard it is for us to tell manipulated recordings from real ones. That affects how we engage with media. If we cannot be sure that the image in front of us is real, what are we supposed to believe? Which part of the information on offer is genuine, and which is cleverly faked?
How we evaluate a piece of information can influence our decisions. This starts with credibility in our private lives, but can ultimately reshape our political landscape as well.
Deepfakes have already made inroads in cyberstalking, targeting private individuals and celebrities alike. Through clever video manipulation, an alternative story can be attributed to anyone, whether a private person or a public figure. Revenge porn built on fake content attributes an apparent past to its victims, even though the video has in reality been manipulated, or fabricated outright, using deepfake technology. In the end, this damages not only reputations but also the mental health of those affected.
Social Engineering 2.0
Deepfakes are also enabling dangerous new methods of white-collar crime. Social engineering attacks with a personal connection to the victim are already particularly successful: time pressure, performance pressure, or hierarchical constraints often lead victims to disclose credentials or internal information. But how much more successful is a CEO fraud that uses voice cloning, the imitation of a voice with deepfake software? In other words, when the supposed CEO on the phone actually sounds like the CEO, or even appears in a video conference. At that point, people can no longer distinguish a fake call from a real one.
The end of biometric credentials?
Ultimately, the fact that any image, video, or voice recording can be manipulated also has consequences for logging in with biometric data. After all, how secure are biometric logins via FaceID if anyone can forge an image? The answer is reassuring: compared to passwords and other character-based login methods, biometric authentication is relatively secure. A residual risk remains, however, which is why you should never rely on a single authentication method. Only the combination of at least two authentication factors makes a login secure, both privately and professionally.
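To make the "at least two factors" recommendation concrete, here is a minimal sketch of how a common second factor works: a time-based one-time password (TOTP, as standardized in RFC 6238) checked in addition to the password. The function names and the login flow are illustrative assumptions, not any specific product's API; only Python's standard library is used.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, for_time: int, step: int = 30, digits: int = 6) -> str:
    """Compute a TOTP code (RFC 6238) from a base32 shared secret."""
    key = base64.b32decode(secret_b32)
    counter = for_time // step                      # 30-second time window
    msg = struct.pack(">Q", counter)                # counter as big-endian 64-bit int
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_login(password_ok: bool, submitted_code: str, secret_b32: str) -> bool:
    """Illustrative two-factor check: knowledge (password) AND possession (TOTP device).

    A deepfaked face or voice defeats neither factor here, which is the point
    of combining independent factors.
    """
    code_ok = hmac.compare_digest(submitted_code, totp(secret_b32, int(time.time())))
    return password_ok and code_ok
```

A production system would additionally accept adjacent time windows to tolerate clock drift and would rate-limit verification attempts, but the core idea stands: even if one factor is forged, the login fails without the other.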