This paper systematically analyzes the impact of inherent biases in multi-agent extensions of the LLM-as-Judge approach, in which large language models (LLMs) serve as evaluators: multi-agent debate and LLM-as-Meta-Judge. Evaluating four types of bias (position bias, detail bias, thought-process bias, and consensus bias) in both frameworks, we find that the debate framework significantly amplifies biases, which then persist beyond the initial debate round, whereas the meta-judge approach is more resistant to bias. We further show that adding an unbiased agent built on PINE, a single-agent bias-reduction method, effectively reduces bias in the debate setting but is less effective in the meta-judge setting. Overall, this study comprehensively characterizes how biases behave in multi-agent LLM-as-Judge systems and highlights the need for targeted bias-mitigation strategies in collaborative evaluation settings.
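For intuition only (this is not the paper's code), the sketch below shows one common way position bias is probed in a pairwise LLM-judge setup: the same answer pair is judged in both presentation orders, and a verdict that does not track the underlying answer across orders is counted as a flip. The `judge` callable and the toy data are hypothetical stand-ins for an actual LLM call and benchmark items.

```python
"""Illustrative position-bias probe for a pairwise LLM judge (hypothetical sketch)."""

from typing import Callable, List, Tuple

# A judge takes (question, first_answer, second_answer) and returns "A" or "B",
# where "A" means the first-shown answer was preferred.
Judge = Callable[[str, str, str], str]


def position_bias_rate(judge: Judge, items: List[Tuple[str, str, str]]) -> float:
    """Fraction of items whose verdict flips when the answer order is swapped."""
    flips = 0
    for question, ans_a, ans_b in items:
        verdict_original = judge(question, ans_a, ans_b)  # ans_a shown first
        verdict_swapped = judge(question, ans_b, ans_a)   # ans_a shown second
        # A position-consistent judge prefers the same underlying answer in both
        # orders, i.e. "A" then "B" (or "B" then "A").
        consistent = (verdict_original == "A" and verdict_swapped == "B") or (
            verdict_original == "B" and verdict_swapped == "A"
        )
        flips += 0 if consistent else 1
    return flips / len(items)


if __name__ == "__main__":
    # Toy stand-in judge that always prefers whichever answer appears first
    # (a maximally position-biased judge); a real run would call an LLM here.
    def first_position_judge(question: str, first: str, second: str) -> str:
        return "A"

    toy_items = [("Which answer is better?", "answer one", "answer two")] * 10
    print(f"position-bias rate: {position_bias_rate(first_position_judge, toy_items):.2f}")
```

In a multi-agent extension, the same probe would be applied to the final verdict produced after debate or meta-judging rather than to a single judge's output, which is how amplification or attenuation of the injected bias could be measured.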