To investigate whether LLMs genuinely incorporate external label definitions or primarily rely on their parametric knowledge, we conducted controlled experiments on several explanatory benchmark datasets under varied label-definition conditions, including expert-curated, LLM-generated, transformed, and exchanged definitions. The results show that while explicit label definitions can improve accuracy and explainability, their integration into the model's task-solving process is neither guaranteed nor consistent. In many cases, models default to their implicit internal representations, especially on general-domain tasks; domain-specific tasks, in contrast, benefit more from explicit definitions.
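To make the label-definition conditions concrete, the sketch below shows one possible way such prompts could be assembled for a generic labeling task. It is illustrative only: the function and variable names (e.g., `build_prompt`, `swap_definitions`) and the sample definitions are hypothetical and not drawn from the experimental code; LLM-generated and transformed definitions would additionally require prompting a model to write or paraphrase the definitions.

```python
# Hypothetical expert-curated definitions for a generic two-label task.
EXPERT_DEFS = {
    "positive": "The text expresses a favorable opinion toward its subject.",
    "negative": "The text expresses an unfavorable opinion toward its subject.",
}


def swap_definitions(defs):
    """Exchange definitions across labels (the 'exchanged' condition)."""
    labels = list(defs)
    shifted = labels[1:] + labels[:1]  # cyclic shift; a swap for two labels
    return {lab: defs[src] for lab, src in zip(labels, shifted)}


def build_prompt(text, defs=None):
    """Assemble a classification prompt, optionally with explicit definitions."""
    parts = []
    if defs:  # omitted for the no-definition baseline
        parts.append("Label definitions:")
        parts.extend(f"- {label}: {definition}" for label, definition in defs.items())
    parts.append(f"Text: {text}")
    parts.append("Answer with one label and a brief explanation.")
    return "\n".join(parts)


# Illustrative subset of conditions; "llm_generated" and "transformed"
# definitions would be produced by prompting an LLM.
conditions = {
    "no_definition": None,
    "expert": EXPERT_DEFS,
    "exchanged": swap_definitions(EXPERT_DEFS),
}

for name, defs in conditions.items():
    print(f"=== {name} ===")
    print(build_prompt("The service was slow but the food was great.", defs))
```

Comparing model behavior across such conditions (e.g., whether predictions flip when definitions are exchanged) is one way to test whether the definitions are actually used rather than ignored in favor of parametric knowledge.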