This paper addresses the parameter failure problem, which limits the effectiveness of the tool-agent paradigm in extending the capabilities of large language models (LLMs). First, we construct a classification scheme of five parameter failure categories derived from the call chains of mainstream tool-agents. By applying 15 input perturbation methods, we then explore the correlations between three distinct input sources and the failure categories. The experimental results show that parameter name hallucination failures stem primarily from the inherent limitations of LLMs, whereas the other failure patterns are mainly caused by problems in the input sources. To improve the reliability and effectiveness of tool-agent interactions, we propose several improvements, including standardizing tool return formats, refining error feedback mechanisms, and ensuring parameter consistency.
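As a rough illustration of the kind of analysis summarized above, the minimal Python sketch below shows how a generated tool call might be checked against its schema to detect one failure category. It is not the paper's implementation: every name except "parameter name hallucination" is hypothetical, and it does not reproduce the paper's full taxonomy or its 15 perturbation methods.

```python
# Minimal sketch (hypothetical, not the paper's code): classify a generated
# tool call against the tool's schema. Only PARAM_NAME_HALLUCINATION is a
# category named in the paper; the others are illustrative placeholders.
from enum import Enum, auto


class ParamFailure(Enum):
    PARAM_NAME_HALLUCINATION = auto()  # model invents a parameter name
    MISSING_PARAM = auto()             # placeholder, not the paper's taxonomy
    NONE = auto()


def classify_call(tool_schema: dict, call_args: dict) -> ParamFailure:
    """Compare the model-generated arguments with the tool schema."""
    expected = set(tool_schema["parameters"])
    produced = set(call_args)
    if produced - expected:            # a name the schema never defined
        return ParamFailure.PARAM_NAME_HALLUCINATION
    required = set(tool_schema.get("required", []))
    if required - produced:            # a required parameter is absent
        return ParamFailure.MISSING_PARAM
    return ParamFailure.NONE


if __name__ == "__main__":
    schema = {"parameters": ["city", "date"], "required": ["city"]}
    # "data" is not in the schema, so this call is flagged as a
    # parameter name hallucination.
    print(classify_call(schema, {"city": "Paris", "data": "2024-01-01"}))
```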