This paper presents HiddenObject, a novel fusion framework for detecting hidden or partially occluded objects in multimodal environments. HiddenObject integrates RGB, thermal, and depth data through a Mamba-based fusion mechanism that captures complementary signals from each modality, improving detection of occluded or camouflaged targets. By fusing modality-specific features into a unified representation, the framework generalizes across a variety of scenarios. On several benchmark datasets, HiddenObject achieves superior or competitive performance relative to existing methods, suggesting that Mamba-based fusion architectures have the potential to significantly advance multimodal object detection in visually degraded or complex environments.
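To make the fusion idea concrete, the following is a minimal, hypothetical sketch of fusing per-modality features into a unified representation via a simplified state-space recurrence. It is not the paper's implementation: all names, shapes, and weights here are illustrative assumptions, and the scan is a plain diagonal linear recurrence standing in for a full Mamba block (no selective gating, convolution, or learned discretization).

```python
import numpy as np

def encode(x, W):
    # Project one modality's feature vector into a shared d-dim token space
    # (hypothetical stand-in for a per-modality encoder).
    return x @ W

def ssm_scan(tokens, log_a, B, C):
    # Simplified diagonal state-space recurrence (Mamba-style sketch):
    #   h_t = a * h_{t-1} + B * x_t ,  y_t = C * h_t
    # with per-channel decay a in (0, 1). Real Mamba layers add input-dependent
    # (selective) parameters; this fixed-parameter scan only illustrates the idea.
    T, d = tokens.shape
    a = 1.0 / (1.0 + np.exp(-log_a))   # per-channel decay in (0, 1)
    h = np.zeros(d)
    ys = np.empty((T, d))
    for t in range(T):
        h = a * h + B * tokens[t]      # elementwise state update
        ys[t] = C * h
    return ys

rng = np.random.default_rng(0)
d = 16
# Hypothetical pooled backbone features for each modality (8-dim each).
rgb, thermal, depth = (rng.standard_normal(8) for _ in range(3))
# Hypothetical projection weights into the shared token space.
W_rgb, W_th, W_dp = (rng.standard_normal((8, d)) * 0.1 for _ in range(3))

# One token per modality, scanned as a short sequence, then mean-pooled
# into a single fused representation.
tokens = np.stack([encode(rgb, W_rgb), encode(thermal, W_th), encode(depth, W_dp)])
fused = ssm_scan(tokens,
                 log_a=rng.standard_normal(d),
                 B=rng.standard_normal(d) * 0.1,
                 C=rng.standard_normal(d) * 0.1).mean(axis=0)
print(fused.shape)  # prints (16,)
```

Because the recurrence carries state across the modality tokens, the fused vector mixes information from all three inputs rather than treating them independently; the actual framework would operate on full spatial feature maps with learned, input-dependent scan parameters.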