DeepFakes, AI-generated fake images and videos, have become more convincing and widespread than ever before. Unlike a simple Photoshop edit, this technology can realistically put anyone’s face onto someone else’s body, often in disturbing ways. While some DeepFakes are made for fun, the vast majority (around 96%) are pornographic, overwhelmingly targeting women, often as a tool for revenge. Early flaws like unnatural blinking or odd skin textures are constantly being fixed, making fakes ever harder to spot. And although there are some positive uses, the darker side of DeepFakes is sparking an important debate: should this technology be available to anyone online?
Firstly, these videos are different because anyone’s face can be mapped onto anyone else’s body, doing anything, and the results are more convincing and more common than anything AI has produced before. There is plenty of satire and mischief, but most DeepFakes, unfortunately, are obscene, with a staggering 96% being pornographic. According to a Deeptrace lab report, 99% of the mapped faces belong to women, especially schoolchildren, though targets range from celebrities to pictures pulled at random from the internet. What makes DeepFakes even more disturbing is that unskilled people can easily make them from only a handful of photos, and these fake videos have spread almost everywhere on the internet.
How? Well, DeepFakes are mostly created on high-end desktops with powerful graphics cards, or with computing power rented in the cloud. Early fakes were quite easy to spot, though, thanks to bad lip-syncing, terrible skin textures, badly rendered fine details such as hair, jewellery and teeth, and inconsistent illumination of the iris. Speaking of irises, the best giveaway used to be that DeepFakes didn’t blink normally, because the majority of training images show people with their eyes open. Naturally, as soon as this was discovered, companies tried their hardest to remove these defects, and even introduced mobile phone apps that let users add their faces more easily. This is where the danger lies: as soon as a weakness is revealed, it is fixed. One might argue that this very article is dangerous, but that’s a risk that has to be taken (and an argument for another day…).
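For the technically curious, here is a minimal Python sketch of the blink cue described above, using the eye-aspect-ratio (EAR) heuristic that blink detectors commonly rely on. The landmark layout, threshold, and toy coordinates are illustrative assumptions, not a production detector; a real pipeline would first extract eye landmarks from each video frame with a face-landmark library.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """Eye aspect ratio (EAR) from six (x, y) eye landmarks.

    Assumed (illustrative) landmark order: p0/p3 are the horizontal
    corners, p1/p2 the upper lid, p5/p4 the lower lid. EAR drops
    sharply when the eye closes, so a clip whose EAR never dips can
    hint at a face that never blinks.
    """
    eye = np.asarray(eye, dtype=float)
    vertical = np.linalg.norm(eye[1] - eye[5]) + np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return vertical / (2.0 * horizontal)

def looks_blinkless(ear_per_frame, threshold=0.2):
    """Flag a clip as suspicious if EAR never falls below the blink threshold."""
    return all(ear > threshold for ear in ear_per_frame)

# Toy usage: two open-eye frames, then one closed-eye frame (a blink).
open_eye = [(0, 0), (2, 2), (4, 2), (6, 0), (4, -2), (2, -2)]
closed_eye = [(0, 0), (2, 0.3), (4, 0.3), (6, 0), (4, -0.3), (2, -0.3)]
ears = [eye_aspect_ratio(e) for e in (open_eye, open_eye, closed_eye)]
print(looks_blinkless(ears))  # False: the closed frame registers as a blink
```

Of course, as noted above, newer fakes have largely fixed this very cue, which is exactly the cat-and-mouse problem.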
That said, the way university researchers and special effects studios have pushed the boundaries of what’s possible with video manipulation is genuinely incredible and deserves applause. The writer of this article is no computer scientist, but there is some really fascinating technology behind this. An encoder finds the similarities between two faces and reduces each image to their shared common features; a decoder trained on the target face then rebuilds a face from those features, and this has to be done for every frame to make a convincing video. The algorithms can even be fed feedback on their own performance, and given enough of it, they can produce realistic faces of someone who was never in the original video.
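As a rough illustration of that shared-features idea, here is a minimal PyTorch sketch of the classic face-swap autoencoder: one encoder trained on both faces, plus one decoder per face. Every layer size, name, and shape here is an illustrative assumption, not any particular tool’s implementation.

```python
import torch
import torch.nn as nn

LATENT = 128  # size of the compressed "shared features" vector (assumed)

def make_encoder():
    # A single encoder is trained on BOTH faces, so it is forced to
    # describe them with the same compressed set of common features.
    return nn.Sequential(
        nn.Flatten(),
        nn.Linear(3 * 64 * 64, 512), nn.ReLU(),
        nn.Linear(512, LATENT),
    )

def make_decoder():
    # Each person gets their OWN decoder, which only ever learns to
    # rebuild that one face from the shared features.
    return nn.Sequential(
        nn.Linear(LATENT, 512), nn.ReLU(),
        nn.Linear(512, 3 * 64 * 64), nn.Sigmoid(),
        nn.Unflatten(1, (3, 64, 64)),
    )

encoder = make_encoder()
decoder_a, decoder_b = make_decoder(), make_decoder()

# Training (sketched): reconstruct each person's frames through the
# SAME encoder; in a real setup this loss would be backpropagated.
face_a = torch.rand(1, 3, 64, 64)  # stand-in for a real frame of person A
recon_a = decoder_a(encoder(face_a))
loss = nn.functional.mse_loss(recon_a, face_a)

# The swap trick: encode a frame of person A, decode with B's decoder.
# The output keeps A's pose and expression but wears B's identity,
# and this must be repeated for every frame of the video.
swapped = decoder_b(encoder(face_a))
print(swapped.shape)  # torch.Size([1, 3, 64, 64])
```

The shared encoder is what makes the swap work: because it has to represent both faces with the same features, feeding person A’s features into person B’s decoder produces B’s face wearing A’s expression.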
However, it is the availability of the technology that makes it dangerous. It is no surprise that DeepFakes have wreaked havoc all over the world. The undermining of trust caused by synthetic media can force us into a society where people cannot distinguish truth from falsehood. DeepFakes could mean trouble for the courts, where faked events could be entered as evidence, and they have already caused extreme personal security risks, as mentioned before. While the technology is not always malicious (voice-cloning DeepFakes have restored the voices of people who lost theirs to disease, to give one very specific example), there are simply too many risks in its accessibility.
Personally, I find this so unimaginably disappointing: the world was given a life-changing innovation which could have had such a positive impact, but instead, almost 100 000 DeepFake adult videos have targeted unsuspecting South Korean women and girls, and political parties have used AI versions of Taylor Swift to campaign for themselves. So, I’ll leave the question of whether or not DeepFakes should be available to everyone, to you!
One last note: Switzerland has clear laws on the publication of manipulated videos because they violate personal rights, and under the Swiss Civil Code, anyone affected by a DeepFake can take action against all persons involved in the violation. Moreover, a criminal offence can arise if the statements in a DeepFake damage the reputation of the person concerned, or if that person is wrongly accused of an event that never happened. Videos and images can no longer be used as evidence in court cases, or as sources by journalists, without careful examination by detection software (which, however, is not yet reliable enough to identify every DeepFake). The Swiss authorities recommend that people affected by DeepFakes file a criminal complaint, ensuring that they will not be held liable in the future for any action or statement made in the fake. Worldwide, the legal standing of DeepFakes is tricky and changing, as many current laws were not written with them in mind. In the EU, malicious uses of DeepFakes are prohibited, yet as of October 2024, it is likely impossible to prevent DeepFakes from being created, given how accessible the technology is.