This paper addresses bias, particularly age bias, in large language models (LLMs) and vision-augmented LLMs (VLMs) applied to pediatric medical informatics, diagnosis, and decision support. We show that existing models underperform on pediatric question-answering tasks and argue that this underperformance stems from the limited volume and representativeness of pediatric research data. To address this, we present PediatricsMQA, a novel multimodal pediatric question-answering benchmark comprising 3,417 text-based questions spanning seven developmental stages (fetal to adolescence) and 2,067 vision-based questions drawn from 634 pediatric images covering 67 imaging modalities. An evaluation of the latest open models reveals significant performance degradation on younger age groups, underscoring the need for age-sensitive approaches to support fair AI in pediatric healthcare.