AI fueling a deepfake porn crisis in South Korea
It’s difficult to talk about artificial intelligence without talking about deepfake porn – a harmful AI byproduct that has been used to target everyone from Taylor Swift to Australian schoolgirls.
But a recent report from startup Security Heroes found that out of 95,820 deepfake porn videos analyzed from different sources, 53% featured South Korean singers and actresses – suggesting this group is disproportionately targeted.
So, what’s behind South Korea’s deepfake problem? And what can be done about it?
Deepfakes are digitally manipulated photos, videos or audio files that convincingly depict someone saying or doing things they never did. Among South Korean teenagers, creating deepfakes has become so common that some even view it as a prank. And they don’t just target celebrities.
On Telegram, group chats have been made for the specific purpose of engaging in image-based sexual abuse of women, including middle-school and high-school students, teachers and family members. Women who have their pictures on social media platforms such as KakaoTalk, Instagram and Facebook are also frequently targeted.
The perpetrators use AI bots to generate the fake imagery, which is then sold and/or indiscriminately disseminated, along with victims’ social media accounts, phone numbers and KakaoTalk usernames. One Telegram group attracted some 220,000 members, according to a Guardian report.
Lack of awareness
Despite the significant harm gender-based violence causes victims in South Korea, there remains a lack of awareness of the issue.
South Korea has experienced rapid technological growth in recent decades. It ranks first in the world in smartphone ownership and is cited as having the highest rate of internet connectivity. Many jobs, including