This paper presents two major contributions toward addressing social bias in multimodal large language models (MLLMs). First, we introduce the Comprehensive Counterfactual Dataset (CMSC), which covers 18 diverse and balanced social concepts. CMSC complements existing datasets, enabling a more comprehensive approach to social bias mitigation. Second, we propose a counter-stereotype debiasing (CSD) strategy that mitigates social bias in MLLMs by leveraging the counter-concepts of widespread stereotypes. CSD integrates a novel bias-aware data sampling method with loss rebalancing to improve the model's debiasing efficiency. Extensive experiments on four prevalent MLLM architectures demonstrate that the CMSC dataset and the CSD strategy reduce social bias more effectively than existing methods, without compromising overall performance on standard multimodal inference benchmarks.
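To make the two CSD components concrete, the sketch below illustrates one plausible reading of bias-aware sampling and loss rebalancing: examples tied to under-represented (counter-stereotypical) social concepts are sampled more often, and each example's loss is scaled by the inverse frequency of its concept. This is a minimal, hypothetical illustration, not the paper's actual implementation; the `concept` field, the inverse-frequency weighting, and the function names are all assumptions for exposition.

```python
import random
from collections import Counter

def bias_aware_sample(examples, k, seed=0):
    """Hypothetical bias-aware sampler: draw k examples with probability
    inversely proportional to the frequency of each example's social
    concept, so rare (counter-stereotypical) concepts are oversampled."""
    rng = random.Random(seed)
    freq = Counter(ex["concept"] for ex in examples)
    weights = [1.0 / freq[ex["concept"]] for ex in examples]
    return rng.choices(examples, weights=weights, k=k)

def rebalanced_loss(per_example_losses, concepts):
    """Hypothetical loss rebalancing: scale each example's loss by the
    inverse frequency of its concept within the batch, then average,
    so no single dominant concept drives the gradient."""
    freq = Counter(concepts)
    scaled = [loss / freq[c] for loss, c in zip(per_example_losses, concepts)]
    return sum(scaled) / len(scaled)
```

In this reading, sampling shapes which image-text pairs the model sees, while rebalancing shapes how strongly each pair contributes to the update; the two are complementary rather than redundant.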