This paper proposes DUAL-Health, an uncertainty-aware multimodal fusion framework for health monitoring in outdoor environments. Existing static multimodal deep learning frameworks require extensive training data and struggle to capture subtle changes in health status. In contrast, multimodal large language models (MLLMs) enable robust health monitoring by fine-tuning information-rich pre-trained models on small amounts of data. However, MLLM-based outdoor health monitoring faces several challenges: noisy sensor data, difficulty achieving robust multimodal fusion, and difficulty recovering missing data caused by modalities with differing noise levels. DUAL-Health addresses these challenges by quantifying the impact of noise in sensor data, performing efficient multimodal fusion with uncertainty-based weights, and aligning modality distributions within a common semantic space. Experimental results show that DUAL-Health achieves higher accuracy and robustness than existing methods.
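The abstract does not specify how the uncertainty-based weights enter the fusion step; the following is a minimal sketch, not the authors' implementation, assuming each modality encoder also emits a scalar log-variance that is turned into a fusion weight (the module name, the per-modality linear uncertainty heads, and the softmax weighting scheme are all illustrative assumptions).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class UncertaintyWeightedFusion(nn.Module):
    """Fuse per-modality embeddings, down-weighting noisier modalities.

    Assumption: a small head per modality predicts a log-variance; weights
    are the softmax of the negative log-variances, so high-uncertainty
    (noisy) modalities contribute less to the fused representation.
    """

    def __init__(self, embed_dim: int, num_modalities: int):
        super().__init__()
        # Hypothetical heads mapping each modality embedding to a scalar
        # log-variance (uncertainty) estimate.
        self.uncertainty_heads = nn.ModuleList(
            nn.Linear(embed_dim, 1) for _ in range(num_modalities)
        )

    def forward(self, modality_embeddings: list[torch.Tensor]) -> torch.Tensor:
        # modality_embeddings: list of (batch, embed_dim) tensors, one per modality.
        log_vars = torch.cat(
            [head(z) for head, z in zip(self.uncertainty_heads, modality_embeddings)],
            dim=-1,
        )  # (batch, num_modalities)
        weights = F.softmax(-log_vars, dim=-1)             # lower variance -> larger weight
        stacked = torch.stack(modality_embeddings, dim=1)  # (batch, num_modalities, embed_dim)
        return (weights.unsqueeze(-1) * stacked).sum(dim=1)


# Usage with synthetic embeddings for three sensor modalities.
if __name__ == "__main__":
    fusion = UncertaintyWeightedFusion(embed_dim=128, num_modalities=3)
    batch = [torch.randn(4, 128) for _ in range(3)]
    print(fusion(batch).shape)  # torch.Size([4, 128])
```

The inverse-uncertainty weighting shown here is one common way to realize noise-aware fusion; the paper's actual quantification of sensor noise and its alignment of modality distributions in a shared semantic space may differ.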