Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Multi-Agent LLMs as Ethics Advocates for AI-Based Systems

Created by
  • Haebom

Author

Asma Yamani, Malak Baslyman, Moataz Ahmed

Outline

This paper proposes a framework that integrates ethics into the requirements elicitation process to build ethically aligned systems. Manual elicitation of ethical requirements is challenging because it requires gathering diverse input from multiple stakeholders; to address this, the framework introduces an ethics advocate agent into a multi-agent LLM environment to generate draft ethical requirements. This agent provides critiques and comments on ethical issues based on the system description. The authors evaluate the framework in two case studies, showing that it captures most of the ethical requirements identified by researchers in a 30-minute interview and surfaces several additional relevant requirements. However, the evaluation also reveals reliability issues in ethical requirements generation, underscoring the need for human feedback in this sensitive domain.
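The paper does not publish its prompts or agent code, but the basic flow it describes (an advocate agent critiques a system description, and those critiques are turned into draft requirements) can be sketched as below. This is a minimal illustration, not the authors' implementation: the critique function is stubbed with keyword rules standing in for an LLM call, and all names (`ethics_advocate_critique`, `draft_ethical_requirements`, `elicit`) are hypothetical.

```python
def ethics_advocate_critique(system_description: str) -> list[str]:
    """Stand-in for the ethics advocate agent: raise ethical concerns
    about a system description. A real system would call an LLM here;
    this stub uses keyword heuristics purely for illustration."""
    text = system_description.lower()
    concerns = []
    if "data" in text:
        concerns.append("Privacy: clarify how user data is collected and stored.")
    if "recommend" in text:
        concerns.append("Fairness: audit recommendations for demographic bias.")
    if not concerns:
        concerns.append("Transparency: document how the system reaches decisions.")
    return concerns


def draft_ethical_requirements(critiques: list[str]) -> list[str]:
    """Stand-in for the requirements-drafting agent: turn each critique
    into a draft requirement statement."""
    return [f"The system shall address: {c}" for c in critiques]


def elicit(system_description: str) -> list[str]:
    """One round of the advocate-then-draft loop described in the paper."""
    critiques = ethics_advocate_critique(system_description)
    return draft_ethical_requirements(critiques)
```

As the paper's reliability findings suggest, drafts produced this way would still need human review before being adopted as actual requirements.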

Takeaways, Limitations

Takeaways:
Presents a framework for automatically generating ethical requirements using multi-agent LLMs.
Helps gather input from diverse stakeholders, which is otherwise difficult under time and resource constraints.
Suggests the framework can make ethical requirements elicitation more efficient.
Proposes a practical approach to developing ethically aligned systems.
Limitations:
Reliability issues remain in the generated ethical requirements.
Human feedback is essential in sensitive ethical areas.
Results depend heavily on the underlying LLM's capabilities.
Further research is needed on generalizability across diverse ethical contexts.