Daily Arxiv

This page organizes papers related to artificial intelligence published around the world.
Summaries are generated using Google Gemini, and the page is operated on a non-profit basis.
The copyright of each paper belongs to its authors and their institutions; when sharing, simply cite the source.

Responsible AI Technical Report

Created by
  • Haebom

Author

KT: Soonmin Bae, Wanjin Park, Jeongyeop Kim, Yunjin Park, Jungwon Yoon, Junhyung Moon, Myunggyo Oh, Wonhyuk Lee, Dongyoung Jung, Minwook Ju, Eunmi Kim, Sujin Kim, Youngchol Kim, Somin Lee, Wonyoung Lee, Minsung Noh, Hyoungjun Park, Eunyoung Shin

Outline

KT has developed a Responsible AI (RAI) assessment methodology and risk mitigation technology to ensure the safety and reliability of its AI services. By analyzing the implementation of Korea's Framework Act on AI and global AI governance trends, KT has established its own approach to regulatory compliance and systematically identifies and manages potential risks across the entire AI lifecycle, from development to operation. Building on an AI risk classification system tailored to the domestic (Korean) environment, the report presents a reliability assessment methodology that systematically verifies the safety and robustness of models, along with practical tools for managing and mitigating the identified risks. KT has also released its proprietary guardrail technology, SafetyGuard, which blocks harmful responses from AI models in real time, supporting a safer domestic AI development ecosystem. The findings are expected to offer valuable insights to organizations seeking to develop AI responsibly.
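To make the guardrail idea concrete, here is a minimal sketch of how a real-time output filter like SafetyGuard might sit between a model and the user. This is an illustrative assumption, not KT's actual implementation: the pattern list, the `screen_response` and `guarded_reply` names, and the simple regex-based check are all hypothetical stand-ins for what would in practice be a trained safety classifier.

```python
# Hypothetical sketch of a real-time guardrail: screen each model
# response before it reaches the user, replacing blocked outputs
# with a safe fallback. Patterns and names are illustrative only.
import re
from dataclasses import dataclass

@dataclass
class GuardrailResult:
    allowed: bool
    reason: str

# Toy blocklist standing in for a real harmful-content classifier.
HARMFUL_PATTERNS = [
    re.compile(r"\bhow to build a bomb\b", re.IGNORECASE),
    re.compile(r"\bself[- ]harm instructions\b", re.IGNORECASE),
]

def screen_response(text: str) -> GuardrailResult:
    """Decide whether a model response may be shown to the user."""
    for pattern in HARMFUL_PATTERNS:
        if pattern.search(text):
            return GuardrailResult(False, f"matched {pattern.pattern!r}")
    return GuardrailResult(True, "no harmful pattern matched")

def guarded_reply(model_output: str,
                  fallback: str = "I can't help with that.") -> str:
    """Wrap model output so blocked responses are swapped in real time."""
    result = screen_response(model_output)
    return model_output if result.allowed else fallback
```

In a production system the regex list would be replaced by a dedicated safety model, but the control flow (intercept, classify, substitute) is the same.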

Takeaways, Limitations

Takeaways:
  • Presents a systematic RAI evaluation methodology to ensure the safety and reliability of AI services.
  • Establishes an AI risk classification system suited to the domestic (Korean) environment.
  • Provides practical tools for AI risk management and mitigation.
  • Develops and releases KT's own guardrail technology, SafetyGuard, to block harmful responses in real time.
  • Offers useful insights to organizations pursuing responsible AI development.
Limitations:
  • No specific limitations are mentioned in the paper (based on the abstract).