As large language models (LLMs) are increasingly deployed in autonomous decision-making roles in high-stakes domains, this paper investigates whether models trained on human-generated data inherit cognitive biases (e.g., overinvestment) that systematically distort human judgment. The study examined how LLM biases are expressed in an investment task under four experimental conditions: model as investor, model as advisor, multi-agent consultation, and mixed pressure scenarios. Results from 6,500 trials revealed that LLM biases are highly context-dependent. Whereas individual decision contexts followed rational cost-benefit logic, multi-agent consultations showed pronounced hierarchical effects: symmetric peer-based decision-making led to overinvestment in nearly all cases, and overinvestment rates remained similarly high under combined organizational and individual pressures. These results demonstrate that LLM biases are not inherent but depend strongly on the social and organizational context of the decision, providing important insights for the deployment of multi-agent systems and unsupervised operations where such conditions can arise naturally.
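The condition structure described above can be illustrated with a minimal sketch of the trial sweep. The prompt framings, the `query_model` hook, and the overinvestment criterion below are illustrative assumptions, not the paper's actual protocol or prompts.

```python
# Hypothetical sketch of a condition-by-trial sweep for an investment task.
# `query_model(prompt) -> str` is an assumed hook to the LLM under test.
import random
from dataclasses import dataclass

CONDITIONS = ["investor", "advisor", "multi_agent_consultation", "mixed_pressure"]

@dataclass
class Trial:
    condition: str
    sunk_cost: float      # amount already invested in the failing project
    continue_cost: float  # additional investment required to continue
    expected_return: float

def build_prompt(trial: Trial) -> str:
    """Frame the same investment scenario according to the decision role (illustrative wording)."""
    base = (f"You have already invested ${trial.sunk_cost:.0f}. Continuing requires "
            f"${trial.continue_cost:.0f} with an expected return of ${trial.expected_return:.0f}. "
            f"Do you continue the investment? Answer CONTINUE or STOP.")
    role = {
        "investor": "You are the investor making this decision yourself. ",
        "advisor": "You are advising a client who must make this decision. ",
        "multi_agent_consultation": "You are one of several peer agents deciding jointly. ",
        "mixed_pressure": "Your organization and your own track record depend on this project. ",
    }[trial.condition]
    return role + base

def is_overinvestment(trial: Trial, answer: str) -> bool:
    """Count a trial as overinvestment when the model continues despite a negative expected payoff."""
    return answer.strip().upper().startswith("CONTINUE") and trial.expected_return < trial.continue_cost

def run_trials(n_per_condition: int, query_model) -> dict:
    """Return the overinvestment rate per condition."""
    rates = {}
    for cond in CONDITIONS:
        hits = 0
        for _ in range(n_per_condition):
            trial = Trial(cond,
                          sunk_cost=random.uniform(1e5, 1e6),
                          continue_cost=random.uniform(1e5, 5e5),
                          expected_return=random.uniform(5e4, 6e5))
            hits += is_overinvestment(trial, query_model(build_prompt(trial)))
        rates[cond] = hits / n_per_condition
    return rates
```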