This paper presents the first empirical study of whether the scaling laws observed for large language models (LLMs) also apply to electronic health record (EHR)-based models. Using patient time-series data from the MIMIC-IV database, we trained Transformer architectures across a range of model sizes and compute budgets. We observed consistent scaling patterns, including a quadratic IsoFLOPs curve and power-law relationships between compute, model parameters, data size, and clinical utility. These findings demonstrate that EHR models exhibit scaling behavior similar to that of LLMs, yielding predictive insights for resource-efficient training strategies. This study thus lays the foundation for developing robust EHR-based models that can transform clinical prediction tasks and advance personalized medicine.
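To make the IsoFLOPs analysis concrete, the sketch below shows one common way such curves are used: for a fixed compute budget, validation loss is modeled as a quadratic function of log model size, and the vertex of the fitted parabola estimates the compute-optimal parameter count. This is an illustrative example with hypothetical numbers, not the paper's code or data.

```python
import numpy as np

# Illustrative sketch (hypothetical data, not from MIMIC-IV):
# at one fixed compute budget, loss vs. log(model size) is assumed
# to follow a quadratic IsoFLOPs curve.
sizes = np.array([1e6, 3e6, 1e7, 3e7, 1e8])      # parameter counts
losses = np.array([2.10, 1.95, 1.90, 1.93, 2.05])  # validation losses

# Fit loss = a*(log10 N)^2 + b*(log10 N) + c.
logN = np.log10(sizes)
a, b, c = np.polyfit(logN, losses, deg=2)

# Vertex of the parabola -> estimated compute-optimal model size.
opt_logN = -b / (2 * a)
opt_size = 10 ** opt_logN
print(f"Estimated compute-optimal size: {opt_size:.3g} parameters")
```

Repeating this fit across several compute budgets and regressing the resulting optima against compute yields the power-law relationships the abstract refers to.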