In the age of artificial intelligence, measures are needed to protect children on social media.
콘텐주
Summary:
Advances in artificial intelligence technology are leading to a surge in harmful content for children.
The main sources are Meta's platforms (Facebook, Instagram, WhatsApp).
The surge in generative AI-driven child abuse content (AIG-CSAM) is further exacerbating the problem.
Preventing misuse of AI tools requires a joint effort by AI developers, platforms, governments, non-profits, law enforcement, and parents.
The problem of harmful content involving children is not new, but advances in artificial intelligence have recently made it far more serious. According to the National Center for Missing and Exploited Children (NCMEC), 36 million suspected cases involving 100 million files were reported in 2023 alone. Notably, 85% of these reports came from Meta platforms such as Facebook, Instagram, and WhatsApp, reflecting how criminals exploit those platforms' high accessibility and vast user base.
Even more concerning is the rapid increase in AI-generated child sexual abuse material (AIG-CSAM) produced with tools anyone can easily use. Criminals create deepfakes by manipulating everyday photos of children or existing harmful content found on the Internet. In June of last year, the FBI warned that sextortion schemes using AI-generated sexually explicit images were on the rise.
The proliferation of AI-generated abuse content also makes it harder to identify genuine abuse that harms real children. NCMEC has added a "generative AI" field to its reporting form, but many reports omit this information, since AI-generated and genuine content can be difficult to tell apart. AI-generated abuse content remains illegal, and possessing it is also a crime.
There are several steps that need to be taken to address this issue. First, AI developers should adopt more rigorous safety-by-design practices to prevent their tools from being abused to create content that harms children. For example, they should remove harmful material from their models' training data and restrict models from generating such content. Developers should also study how their models can be misused and stress-test them to close those gaps.
Second, platforms should invest more in digital fingerprinting (hashing), machine-learning classifiers, and AI-artifact detection models, which are essential for identifying known child-harming content and detecting new AI-generated material. Platforms like Meta have introduced systems to detect and label AI-generated content, but much room for improvement remains. For example, Meta's system focuses mainly on flagging benign AI-generated content, which does little to surface material that actually harms children.
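To make the hash-matching idea concrete, here is a minimal sketch of how a platform might screen uploads against a database of known-bad digests. This is an illustration only: the hash value and function names are placeholders, and real deployments use perceptual hashing (e.g., Microsoft's PhotoDNA) that survives resizing and re-encoding, rather than the exact cryptographic match shown here.

```python
import hashlib

# Placeholder set standing in for a known-bad hash list of the kind
# clearinghouses distribute to platforms. This value is the SHA-256 of
# b"test", used here purely for demonstration.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def flag_known_content(file_bytes: bytes) -> bool:
    """Return True if the file's SHA-256 digest matches a known-bad hash."""
    digest = hashlib.sha256(file_bytes).hexdigest()
    return digest in KNOWN_HASHES

# An upload whose digest is on the list is flagged; anything else passes.
print(flag_known_content(b"test"))   # True for this placeholder list
print(flag_known_content(b"photo"))  # False
```

Exact matching like this catches only byte-identical copies; that limitation is why the article's call for additional machine-learning classifiers and AI-artifact detectors matters for newly generated material.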
Third, while the government has taken steps such as enacting the REPORT Act, it must also provide organizations like NCMEC with funding sufficient to handle the surge in reports. The REPORT Act requires reporting of all types of child-harming content, but the flood of AI-generated material is straining NCMEC's capacity, so adequate resources are essential for an effective response.
Finally, parents should talk with their children about online risks, educate them accordingly, and take steps such as setting their children's social media profiles to private. Parents should also keep their own accounts private and be cautious about posting family photos online.
The problem of child-harming content is an urgent one, and as AI technology advances rapidly, our response must evolve with it. All stakeholders, including AI developers, platforms, governments, non-profits, law enforcement, and parents, must act in concert to maximize the benefits AI can bring to society while minimizing its harms.
#Child protection #Artificial intelligence #Generative AI #Child harmful content #Cybercrime