Content created by artificial intelligence needs to be labeled.
Haebom
U.S. Rep. Adam Schiff has introduced a new bill that would require AI companies to be more transparent. The bill, called the Generative AI Copyright Disclosure Act, would require companies to disclose the copyrighted works used to train their generative AI systems. Specifically, they would have to file a notice listing the copyrighted works in the training dataset, along with the dataset's URL, at least 30 days before the system is released. This disclosure would give creators a chance to verify that their work has been used and, if necessary, seek compensation.
What are the main provisions of the bill?
Copyright disclosure requirement: AI companies would have to notify the Copyright Office of all copyrighted works used in training before releasing a new generative AI system.
Retroactive application: The bill would apply to AI systems already in use as well as those developed in the future.
Public database: All notices would be maintained in a publicly accessible online database.
Legal sanctions: AI companies that violate the law would face civil penalties of at least $5,000.
Significance and support of the bill:
Rep. Adam Schiff emphasized that the bill protects the rights of creators and respects creativity in the age of AI. Leaders of several creator groups, including Meredith Stiehm, president of the Writers Guild of America West, also backed the bill, calling it an essential first step toward protecting creators' rights.
Expected impact
The bill would set clear rules on the unauthorized use of creative works at a time when the pace of AI development is outpacing existing copyright law. AI companies would no longer be able to use creative works in secret; every use would have to be transparently recorded. This is likely to play a key role in balancing the advancement of AI technology with copyright protection.
Overall, the bill aims to reconcile technological progress with fairness, so that creators know how their work is being used in AI training datasets and have their rights properly protected. Such measures are expected to help set legal and ethical standards for future AI-related copyright issues.
Of course, it would be ideal if creators could manage this themselves, but since that is practically impossible, the realistic approach is to verify use at the stage of creation or distribution. In Korea, similar calls are gradually emerging from the Korea Copyright Association and elsewhere, and now that the general election is over, I think the political world should also take up this discussion in earnest.