Daily Arxiv

This page collects and organizes papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; when sharing, simply cite the source.

Do LLMs Adhere to Label Definitions? Examining Their Receptivity to External Label Definitions

Created by
  • Haebom

Authors

Seyedali Mohammadi, Bhaskara Hanuma Vedula, Hemank Lamba, Edward Raff, Ponnurangam Kumaraguru, Francis Ferraro, Manas Gaur

Outline

To investigate whether LLMs truly incorporate external label definitions or primarily rely on their parametric knowledge, the authors conducted controlled experiments on several explanation benchmark datasets under a range of label definition conditions (expert-curated, LLM-generated, perturbed, and swapped definitions). The results show that while explicit label definitions can improve both accuracy and explainability, their integration into the LLM's task-solving process is neither guaranteed nor consistent; models often default to their internal representations, especially on general tasks, whereas domain-specific tasks benefit more from explicit definitions.
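This kind of probing setup can be illustrated with a minimal sketch (not the authors' actual code): the same input is classified under different definition conditions, and the rate at which predictions flip when definitions are swapped indicates how much the model actually reads them. The labels, definitions, and the `call_llm` stub below are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of probing an LLM's receptivity to external label definitions.
# All labels, definitions, and the `call_llm` stub are hypothetical placeholders.

LABELS = ["optimism", "pessimism"]

# Hypothetical "expert-curated" definitions for the two labels.
DEFINITIONS = {
    "optimism": "The text expresses hopefulness or confidence about the future.",
    "pessimism": "The text expresses a belief that negative outcomes are likely.",
}

# Swapped condition: each label is paired with the other label's definition.
SWAPPED = {
    "optimism": DEFINITIONS["pessimism"],
    "pessimism": DEFINITIONS["optimism"],
}


def build_prompt(text, definitions=None):
    """Assemble a classification prompt, optionally prepending label definitions."""
    parts = []
    if definitions is not None:
        parts.append("Use the following label definitions:")
        for label, definition in definitions.items():
            parts.append(f"- {label}: {definition}")
    parts.append(f"Text: {text}")
    parts.append(f"Answer with exactly one label from {LABELS}.")
    return "\n".join(parts)


def call_llm(prompt):
    """Placeholder for an actual LLM call; replace with the model or API under study."""
    raise NotImplementedError


def receptivity(texts):
    """Fraction of inputs whose predicted label changes when definitions are swapped.

    A model that genuinely reads the definitions should flip its labels under the
    swapped condition; a model relying on parametric knowledge will not.
    """
    changed = 0
    for text in texts:
        pred_original = call_llm(build_prompt(text, DEFINITIONS))
        pred_swapped = call_llm(build_prompt(text, SWAPPED))
        changed += pred_original != pred_swapped
    return changed / len(texts)
```

Under this sketch, a receptivity near zero would suggest the model defaults to its parametric knowledge, while a high value would suggest it adheres to the provided definitions.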

Takeaways, Limitations

  • Explicit label definitions can improve accuracy and explainability.
  • External definitions are not consistently integrated into the LLM's task-solving process.
  • On general tasks, models tend to default to their internal representations.
  • Domain-specific tasks benefit more from explicit definitions.
  • A deeper understanding of how LLMs reconcile pre-trained (parametric) knowledge with external knowledge is needed.