Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Multi-Agent LLMs as Ethics Advocates for AI-Based Systems

Created by
  • Haebom

Authors

Asma Yamani, Malak Baslyman, Moataz Ahmed

Outline

This paper proposes a framework for integrating ethics into the requirements elicitation process in order to build ethically compliant systems. To address the time and resource constraints of manual ethical requirements elicitation, we introduce an ethics advocate agent in a multi-agent LLM environment that generates draft ethical requirements. Through two case studies, we show that the proposed framework captures most of the ethical requirements identified in researcher interviews and suggests additional ones, while also revealing reliability issues in generating ethical requirements and the need for human feedback. Ultimately, we expect this work to facilitate the development of ethically compliant products by applying ethics more broadly across the requirements engineering process.
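The general shape of such a pipeline can be pictured as a requirements-drafting agent and an ethics advocate agent exchanging drafts and critiques before a consolidated list is handed off for human review. The sketch below is only an illustration of that idea; the `call_llm` helper, agent prompts, and class names are hypothetical placeholders, not the authors' implementation.

```python
# Minimal sketch of a multi-agent elicitation loop with an "ethics advocate" role.
# call_llm is a hypothetical placeholder for any LLM chat API.

from dataclasses import dataclass, field


def call_llm(system: str, prompt: str) -> str:
    """Placeholder for a real LLM client call; returns a dummy string here."""
    return f"[model output for: {prompt[:60]}...]"


@dataclass
class Agent:
    name: str
    system_prompt: str

    def respond(self, context: str) -> str:
        # Each agent answers with its own role-specific system prompt.
        return call_llm(self.system_prompt, context)


@dataclass
class EthicsAdvocacySession:
    elicitor: Agent
    ethics_advocate: Agent
    transcript: list = field(default_factory=list)

    def run(self, system_description: str, rounds: int = 2) -> str:
        """Alternate between drafting requirements and ethical critique,
        then return draft ethical requirements for human review."""
        context = f"System under analysis:\n{system_description}"
        for _ in range(rounds):
            draft = self.elicitor.respond(context)
            critique = self.ethics_advocate.respond(
                f"{context}\n\nDraft requirements:\n{draft}\n"
                "Identify missing or problematic ethical requirements."
            )
            self.transcript.append((draft, critique))
            context += f"\n\nPrevious critique:\n{critique}"
        # The consolidated output is still a draft and needs human validation,
        # reflecting the reliability caveat noted in the paper summary.
        return self.ethics_advocate.respond(
            f"{context}\n\nConsolidate the ethical requirements as a numbered list."
        )


if __name__ == "__main__":
    session = EthicsAdvocacySession(
        elicitor=Agent("elicitor", "You draft requirements for the described system."),
        ethics_advocate=Agent(
            "ethics_advocate",
            "You flag fairness, privacy, transparency, and accountability concerns.",
        ),
    )
    print(session.run("A triage chatbot for a hospital emergency department."))
```

In this sketch, the number of critique rounds and the final human review step are design choices suggested by the summary's emphasis on human feedback, not details taken from the paper.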

Takeaways, Limitations

Takeaways:
Proposes a framework for automatically generating ethical requirements using multi-agent LLMs (see the sketch above)
Offers an approach that addresses the time and resource constraints of manual elicitation
Suggests that the efficiency of the ethical requirements elicitation process can be improved
Provides a practical methodology for developing ethically compliant systems
Limitations:
Reliability issues remain in the generated ethical requirements
Human feedback is essential because ethics is a sensitive domain
High dependence on LLM performance
Generalizability to different ethical contexts requires further research