South Korean Police Deploy Deepfake Detection Tool in Run-up to Elections

The nation’s battle with political deepfakes may be a harbinger for what’s to come in elections around the world this year.
March 8, 2024
Amid a steep rise in politically motivated deepfakes, South Korea's National Police Agency (KNPA) has developed and deployed a tool for detecting AI-generated content for use in potential criminal investigations.
According to the KNPA's National Office of Investigation (NOI), the deep learning program was trained on approximately 5.2 million pieces of data collected from 5,400 Korean citizens. It can determine whether a video it has not previously been trained on is genuine or AI-generated in just five to 10 minutes, with an accuracy rate of around 80%. The tool auto-generates a results sheet that police can use in criminal investigations.
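The KNPA has not published the tool's architecture, but video deepfake detectors commonly score sampled frames with a classifier and then aggregate those per-frame scores into a single video-level verdict. Here is a minimal sketch of that aggregation step; `score_frame` is a hypothetical placeholder standing in for the actual deep learning model:

```python
# Hypothetical placeholder: a real detector would be a deep network
# examining pixel artifacts, facial landmarks, temporal inconsistencies,
# etc., and returning the probability that a frame is AI-generated.
def score_frame(frame) -> float:
    return frame["fake_score"]

def classify_video(frames, threshold=0.5):
    """Aggregate per-frame fake probabilities into one verdict."""
    scores = [score_frame(f) for f in frames]
    mean_score = sum(scores) / len(scores)
    return ("fake" if mean_score >= threshold else "real"), mean_score

# Toy usage: most sampled frames look manipulated, so the video is flagged.
frames = [{"fake_score": s} for s in (0.9, 0.8, 0.7, 0.2)]
verdict, confidence = classify_video(frames)
print(verdict, round(confidence, 2))  # fake 0.65
```

Averaging is only one aggregation choice; production systems may instead take a majority vote or flag a video if any contiguous run of frames scores high, trading false positives against false negatives.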
As reported by Korean media, these results will inform investigations but will not be used as direct evidence in criminal trials. Police also plan to collaborate with AI experts in academia and industry.
AI security experts have called for the use of AI for good, including detecting misinformation and deepfakes.
"This is the point: AI can help us analyze [false content] at any scale," Gil Shwed, CEO of Check Point, told Dark Reading in an interview this week. Though AI is the sickness, he said, it is also the cure: "[Detecting fraud] used to require very complex technologies, but with AI you can do the same thing with a minimum amount of information — not just good and large amounts of information."
While the rest of the world braces for deepfakes to invade its election seasons, Koreans have already been dealing with the problem up close.
The canary in the coal mine occurred during provincial elections in 2022, when a video spread on social media appearing to show President Yoon Suk Yeol endorsing a local candidate for the ruling party.
This type of deception has lately become more prevalent. Last month, the country's National Election Commission revealed that between Jan. 29 and Feb. 16, it detected 129 deepfakes in violation of election laws, a figure that is only expected to rise as the April 10 Election Day approaches. All of this despite a revised law, in effect since Jan. 29, under which using deepfake videos, photos, or audio in connection with an election can earn a citizen up to seven years in prison and fines of up to 50 million won (around $37,500).
Check Point's Shwed warned that, like any new technology, AI has its risks. "So yes, there are bad things that can happen and we need to defend against them," he said.
Fake information is not as much the problem, he added. "The biggest issue in human conflict in general is that we don't see the whole picture — we pick the elements [in the news] that we want to see, and then based on them make a decision," he said.
"It's not about disinformation, it's about what you believe in. And based on what you believe in, you pick which information you want to see. Not the other way around."
Nate Nelson, Contributing Writer

Nate Nelson is a freelance writer based in New York City. Formerly a reporter at Threatpost, he contributes to a number of cybersecurity blogs and podcasts. He writes "Malicious Life" — an award-winning Top 20 tech podcast on Apple and Spotify — and hosts every other episode, featuring interviews with leading voices in security. He also co-hosts "The Industrial Security Podcast," the most popular show in its field.
Copyright © 2024 Informa PLC Informa UK Limited is a company registered in England and Wales with company number 1072954 whose registered office is 5 Howick Place, London, SW1P 1WG.