This paper proposes CauKer, a novel algorithm for efficient pretraining of time series foundation models (TSFMs) that removes the need for computationally expensive pretraining on large-scale real-world time series data. CauKer combines Gaussian Process (GP) kernel synthesis with Structural Causal Models (SCMs) to generate diverse, causally consistent synthetic time series with realistic trends, seasonality, and nonlinear interactions. The generated data enable efficient pretraining of state-of-the-art classification TSFMs spanning diverse architectures and pretraining methods. We experimentally demonstrate that, unlike real-world datasets, CauKer-generated data exhibit a clear scaling law with respect to both dataset size (10,000 to 10 million samples) and model capacity (1 million to 783 million parameters).
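To make the idea concrete, the following is a minimal sketch of the kind of synthesis the abstract describes, not the paper's actual implementation: base GP kernels (linear trend, periodic seasonality, RBF smoothness) are composed into a covariance, root series are sampled from the resulting GP prior, and a child series is produced through a toy SCM edge with a nonlinearity. All kernel parameters and the mixing function are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of GP-kernel + SCM synthesis (parameters are assumptions).

def rbf(t, s, length=20.0):
    # Smoothness kernel.
    return np.exp(-0.5 * ((t[:, None] - s[None, :]) / length) ** 2)

def periodic(t, s, period=50.0, length=1.0):
    # Seasonality kernel.
    d = np.sin(np.pi * (t[:, None] - s[None, :]) / period) / length
    return np.exp(-2.0 * d ** 2)

def linear(t, s):
    # Trend kernel.
    return t[:, None] * s[None, :]

def sample_gp(t, cov, rng):
    # Draw one sample path from a zero-mean GP; jitter for numerical stability.
    K = cov + 1e-6 * np.eye(len(t))
    return rng.multivariate_normal(np.zeros(len(t)), K)

rng = np.random.default_rng(0)
t = np.arange(200, dtype=float)

# Kernel composition: summing base kernels yields trend + seasonality + smoothness.
K = 1e-4 * linear(t, t) + periodic(t, t) + 0.5 * rbf(t, t)

# Two independent root causes sampled from the GP prior.
x1 = sample_gp(t, K, rng)
x2 = sample_gp(t, K, rng)

# Toy SCM edge: the child is a nonlinear function of its parents plus noise,
# so downstream series remain causally consistent with their ancestors.
x3 = np.tanh(x1) * x2 + 0.1 * rng.standard_normal(len(t))
```

Sampling many such graphs with randomized kernel compositions would give a diverse synthetic corpus of the kind the scaling experiments rely on.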