Deepfake porn is a big problem in South Korea
Haebom
In South Korea, there is a culture that regards appearing on the KBS 9 O'Clock News as an honor. In practice, being on the news usually means something has happened, so for better or worse it becomes a hot topic. Public broadcasters like KBS in particular have a strict vetting process for presenters and reporters. Before long-form online broadcasting took off, the KBS 9 O'Clock News had such an impact that the entire country watched it. In any case, I appeared on KBS News as an expert on vision detection and content distribution.
I talked about the people who create and distribute deepfake videos of K-Pop idols and deepfake pornography targeting acquaintances, and about how one-dimensional measures aimed at specific products (e.g., Telegram) or at cautious individual behavior (e.g., taking your photos down from social media) rarely work. Creating this content has become overwhelmingly easy, so it is not realistic to monitor and catch everything being made, whether it is produced alone in a disconnected environment or offered as a service. Through awareness and education, we need to get the point across that the problem is not that pornography is bad, but that fabricating false content of someone else is wrong. Pornography is the most inflammatory part of the deepfake debate, but there are plenty of other examples. Focusing only on porn will lead to strange, hard-to-understand regulations like the Game Shutdown Act.
Deepfakes span many uses: creating AI cover songs, turning specific people into caricatures or memes, and more all fall under the scope of this technology. To say what is right and what is wrong, we need to establish standards and systems for it. In this report and during the reporting process, I talked about the production of deepfake adult content and about detecting its reproduction.
In fact, with datasets uploaded to Kaggle, Hugging Face, GitHub, and the like, and open-source tools such as FaceFusion or Deep Live Cam, you can get started immediately just by watching a YouTube tutorial or reading the corresponding README, and there is no real difficulty in creating this content if you have a reasonably capable GPU and CPU. The options offered as services are even easier. I have explained several ways to detect deepfakes, but these are after-the-fact measures, so the key is to block the distribution and redistribution of the content. To address this, South Korea takes reports through the Korea Communications Commission, the National Police Agency, and so on, but they are not very active in handling reports filed by anyone other than the parties involved.
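To make concrete why detection is only an after-the-fact measure, here is a minimal sketch of what frame-level deepfake detection typically looks like: a binary classifier run over sampled frames of a video. This is illustrative only; the model choice, the "detector_weights.pt" file, and the sampling rate are hypothetical placeholders, not any specific tool mentioned in the report.

```python
# Minimal sketch of frame-level deepfake detection (illustrative, not a production tool).
# Assumes a trained binary classifier's weights exist at "detector_weights.pt".
import cv2                                  # video decoding
import torch
import torchvision.transforms as T
from torchvision.models import resnet18

def build_detector(weights_path: str = "detector_weights.pt") -> torch.nn.Module:
    """Binary classifier (real vs. fake) on single frames; weights are assumed to exist."""
    model = resnet18(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, 2)   # 2 classes: real / fake
    model.load_state_dict(torch.load(weights_path, map_location="cpu"))
    model.eval()
    return model

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def score_video(path: str, model: torch.nn.Module, every_n: int = 30) -> float:
    """Return the mean 'fake' probability over frames sampled every `every_n` frames."""
    cap = cv2.VideoCapture(path)
    probs, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)   # OpenCV gives BGR
            logits = model(preprocess(rgb).unsqueeze(0))
            probs.append(torch.softmax(logits, dim=1)[0, 1].item())
        idx += 1
    cap.release()
    return sum(probs) / len(probs) if probs else 0.0
```

Even a detector along these lines can only flag a video that already exists and has already spread, which is exactly why I argued that blocking distribution and redistribution matters more than detection alone.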
This is understandable to a degree: if every report led to blocking or restriction without verification, the mechanism could be abused or turn into indiscriminate regulation (e.g., uploading deepfake porn to a website or bulletin board one dislikes in order to get it taken down), and actually acting on each report takes considerable time.
In particular, if the affected person has to file the report themselves, it is even harder for celebrities or people with high social standing. Even people who did not know deepfake porn existed may see it, go looking for it, and in many cases try to use it. That is why I said at the time that we need not only to address how the content is created but also to take an institutional and educational approach, and in the process I demonstrated that production was possible without much difficulty on my personal MacBook Air (M2) and on the kind of high-spec computers commonly found in PC rooms.
I reported it last week (on the 30th), but it is still "pending acceptance."
I personally tried reporting through several channels, but even 10 days after filing, the site is still accessible, and the deepfake porn videos continue to be downloaded and spread.
Meanwhile, watching events like the "Expert Discussion on Responding to Deepfake Sexual Crime Videos" shown on the left, along with the live YouTube broadcast and the follow-up press releases, I wondered why people who are not experts in video production, detection, or distribution were discussing deepfake sexual crime videos, and why the discussion stayed at the level of textbook talking points. I do not know whether they really want to solve the problem or just want to look like they are solving it. Will blocking Telegram or a few specific IPs end this?
There are celebrities in the US, Japan, Europe, China, and India too, so why are Korean celebrities the main targets? Is it because there are so many datasets of them? That may be one reason, but the biggest reason is institutional weakness. There is no effective law; what exists is punishment under the existing Act on the Protection of Children and Youth from Sexual Abuse, punishment under the Act on Promotion of Information and Communications Network Utilization and Information Protection, and lastly, punishment for defamation under Article 307 of the Criminal Act... All three have very strict requirements for an offense to be established. To begin with, the fact that the affected person has to file the complaint is a hurdle, and the other requirements are similar.
As I pointed out in my previous article, AI-related bills and discussions remain passive and are mentioned only as a formality. Regardless of who is in power, the government should be moving urgently on science and technology, but here too issues are judged along ruling-versus-opposition lines and standards of right and wrong are applied subjectively, so science and technology, which has no voice of its own, ends up taking the blame. Even the laws hastily enacted in response to this incident are extremely ambiguous. The provisions for punishing perpetrators are so broad that they risk excessive regulation and leave plenty of room for abuse; strengthening platform responsibility duplicates what Kakao, Naver, and others are already doing; and the promise to punish overseas platforms... is broad and ineffective.
The only part that could be effective is victim support, but that is just a fancy label, and in practice they are simply building yet another parallel reporting system. They should start by fixing and supplementing the problems in the existing system, but instead they are rushing to create something new.
'Deepfake' bills pouring in... "We need a new law covering crimes involving illegal synthetic material" [Hankyoreh]
As an aside, the reason I was particularly active in this reporting and in this issue is that it was hard to understand how the same people who, during the recent general election, split hairs with the National Election Commission over standards for applying laws on deepfake videos were only now, belatedly, paying attention to the actual victims, who had no way to deal with the problem. If you look it up, similar issues arose in 2019 and 2021 and several articles were written at the time, but even then the issue was never properly resolved; it was just swept under the rug with promises to set up a reporting center and so on.
If you look at the 2019 articles and posts on Twitter (X) and elsewhere, you can see that this problem has existed since 2017. It has been an issue ever since GANs came out.
Anyone can buy a knife, and everyone knows the concepts of "sharp" and "stabbing." But when a knife crime occurs, a murder or robbery committed with a knife, no one blames the knife, its sharpness, or the act of stabbing itself. We should be teaching people to look at the moon, but everyone is fixated on the finger pointing at it, and that is the problem. I hope this incident does not become another case of locking the barn door after the horse has bolted.
Dark apricot morning:
"In the end, the victims are women. The people who insisted there is no misogyny are going to be up in arms. There is no deepfake porn of male idols or men, only porn of female idols everywhere~"
인내심 있는 초록 그림자:
"Can you share the site info?"