In this paper, we propose MedSyn, a hybrid human-AI framework based on multi-level interactions between physicians and large language models (LLMs) that addresses cognitive biases, information insufficiency, and ambiguous cases in complex healthcare decision-making. MedSyn overcomes the limitations of existing static decision support tools by enabling dynamic interactions in which physicians challenge the LLM's suggestions and the LLM offers alternative perspectives. Through simulated physician-LLM interactions, we evaluate the potential of an open-source LLM to serve as a real physician assistant, and the results suggest that it is promising in this role. In future work, we plan to further verify MedSyn's effectiveness in improving diagnostic accuracy and patient outcomes through interactions with real physicians.