This paper proposes Logsparse Decomposable Multiscaling (LDM), a novel framework for achieving both efficiency and effectiveness in long-term time series forecasting. To address the tendency of existing models to overfit long input sequences, LDM reduces non-stationarity by separating patterns at different scales within the time series, improves efficiency through compressed logsparse representations of long inputs, and simplifies the architecture through clear task allocation across scales. Experimental results demonstrate that LDM outperforms existing models on long-term forecasting benchmarks while also reducing training time and memory costs.
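The two core ideas in the abstract can be sketched in a few lines. The helpers below are hypothetical illustrations, not the paper's implementation: `logsparse_indices` shows how a long history can be compressed to O(log n) points by keeping the recent window dense and sampling older points at exponentially growing strides, and `multiscale_decompose` shows one simple way (moving-average smoothing, an assumption here) to separate a series into components at different scales so that each component is closer to stationary.

```python
import numpy as np

def logsparse_indices(n, dense=8):
    # Hypothetical sketch: keep the most recent `dense` points,
    # then sample older history at exponentially increasing strides,
    # compressing a length-n input to O(log n) positions.
    idx = set(range(max(0, n - dense), n))  # dense recent window
    step = 1
    pos = n - dense
    while pos > 0:
        pos -= step
        if pos >= 0:
            idx.add(pos)
        step *= 2  # stride doubles as we move further into the past
    return sorted(idx)

def multiscale_decompose(x, scales=(1, 4, 16)):
    # Hypothetical sketch: peel off progressively finer components
    # with moving averages; coarse components carry slow trends,
    # the final residual carries the fastest fluctuations.
    x = np.asarray(x, dtype=float)
    components = []
    residual = x
    for s in sorted(scales[1:], reverse=True):  # coarsest first
        kernel = np.ones(s) / s
        trend = np.convolve(residual, kernel, mode="same")
        components.append(trend)
        residual = residual - trend
    components.append(residual)  # finest-scale remainder
    return components  # components sum back to the original series
```

Note the efficiency angle: a forecaster fed only the logsparse positions sees a logarithmic-size summary of the full history, while the per-scale components let separate, simpler submodules each handle one scale, matching the abstract's "clear task allocation".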