The real dangers of artificial intelligence from the co-founder of OpenAI
Haebom
OpenAI, like Professor Andrew Ng, rarely talks about regulation. It does, however, try to engage with AI ethics seriously and meaningfully. In reality, AI is something that can be misused depending on the user, and much of the alarmism in the media and elsewhere may simply be provocation in search of attention-grabbing material.
MIT Technology Review recently published an interview with Ilya Sutskever, co-founder and chief scientist of OpenAI, in which he discusses his hopes and fears for AI and his vision for its future.
The Present and Future of Artificial Intelligence
Naturally(?), Ilya Sutskever is very positive about the current state of AI. He says that technologies like ChatGPT have already raised many people's expectations, and that "things we didn't think would happen will happen sooner." Alongside this positive outlook, however, he also warns about the dangers AI can bring.
The Success and Limitations of ChatGPT
ChatGPT is a conversational AI developed by OpenAI. It can not only answer simple questions but also hold natural conversations with people. However, the technology carries risks as well: it may provide incorrect information, for example, and there is a danger of personal information leaking. Strictly speaking, these are not limitations of ChatGPT alone, but limitations of current LLMs in general.
Discussion of Artificial General Intelligence (AGI)
Sutskever points out that discussion of artificial general intelligence (AGI) is growing. AGI is an artificial intelligence that can perform any task a human can. He believes that if AGI becomes a reality, it could help solve problems such as health care and climate change: it could automate everything from hospital treatment to drug development, greatly improving the efficiency of health care and enabling the fast, accurate treatment of many diseases.
Ethical Issues of Artificial Intelligence
But Sutskever also warns of the dangers that AI advancements could bring. He argues that we need to be prepared for the possibility of AI becoming smarter than humans, which he calls “superintelligence.”
Example: Superintelligence and Ethics
If superintelligence becomes a reality, the risks it could bring will grow accordingly. For example, a superintelligence could harm people in pursuit of its own goals, or destroy infrastructure around the world. It is therefore important to consider these risks in advance and develop artificial intelligence safely.