This paper addresses the growing deployment of large language models (LLMs) in autonomous decision-making roles in high-stakes domains. Because such models are trained on human-generated data, they may inherit cognitive biases, such as overcommitment, that systematically distort human judgment. To investigate whether these biases appear consistently in LLMs or emerge only under specific conditions, we conducted 6,500 trials across four experimental conditions (model as investor, model as advisor, multi-agent consultation, and mixed-pressure scenarios) using an investment task. The results reveal that the bias is highly context-dependent: the model applies rational cost-benefit logic in individual decision-making, but in multi-agent consultation it shows moderate overcommitment in asymmetric hierarchies and near-total overcommitment in symmetric, peer-based decision-making. It also exhibits high levels of overcommitment under combined organizational and individual pressure. We conclude that the bias is not a fixed trait of the model but strongly dependent on social and organizational context, with important implications for multi-agent systems and unsupervised operational deployments where such conditions can naturally arise.