Deepfakes pose real threat
February 7, 2024
SEOUL – Sexually explicit fake images of Taylor Swift, widely believed to have been generated by artificial intelligence tools, spread on social media last week at a dizzying pace, deeply alarming government officials, security experts, actors and many others.
The high-profile incident can be attributed to a widely anticipated side effect of generative AI, which is capable of creating fake photos from real images circulating on the internet. The result is an environment in which “deepfakes,” fake images or videos that appear to be real, are used to scam or threaten innocent victims.
Only a few years ago, cyber swindlers needed at least rudimentary editing skills to produce even a crude fake image. Now, hyper-realistic deepfake photos can be generated easily on popular AI-based image programs with a handful of relatively simple prompts.
Access to powerful AI-based image and video tools is widening rapidly, aided by the network effect of the internet, and deepfake images are consequently spreading faster than ever.
In contrast, authorities and social media companies remain slow to identify and crack down on illegally forged photos. It is telling that a single forged image of Swift had already been viewed by so many users on X, formerly Twitter, before the account in question was suspended. Her images also quickly spread beyond X to other platforms and were shared by social media users across the world.
Coincidentally, the Swift deepfake case has provided a timely lesson for Korean policymakers and politicians preparing to win seats in the forthcoming parliamentary election in April, as the country’s new law banning