This paper presents LLM-ABBA, a novel method for integrating large language models (LLMs) into a range of time-series tasks. It symbolizes time series with adaptive Brownian bridge-based symbolic aggregation (ABBA) and maps the resulting symbols onto the LLMs' existing tokens, exploiting the semantic information hidden in the time series. LLM-ABBA achieves state-of-the-art performance on the UCR archive and three medical time-series classification tasks, and sets a new state of the art on the Time Series Extrinsic Regression (TSER) benchmark. It also introduces a fixed polygonal chain technique that mitigates the accumulation of errors during prediction, thereby improving forecasting performance. The framework is expected to extend to other time-series tasks.
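To make the symbolization step concrete, below is a minimal sketch of the ABBA idea, not the paper's exact implementation: a greedy piecewise-linear compression of the series into a polygonal chain of (length, increment) pieces, followed by a digitization step that clusters the pieces and assigns each cluster a letter. The `tol` and `n_symbols` parameters, the break criterion, and the k-means digitizer are simplifying assumptions introduced here for illustration (the real ABBA also rescales lengths against increments before clustering).

```python
import numpy as np
from sklearn.cluster import KMeans

def compress(ts, tol=0.5):
    """Greedy piecewise-linear compression: approximate the series by a
    polygonal chain, closing a (length, increment) piece whenever the
    straight-line fit from the current anchor exceeds the tolerance."""
    pieces, start, end = [], 0, 1
    while end < len(ts):
        length = end - start
        line = ts[start] + (ts[end] - ts[start]) * np.arange(length + 1) / length
        err = np.sum((ts[start:end + 1] - line) ** 2)
        if err > tol ** 2 * length and length > 1:
            # Tolerance exceeded: close the piece at the previous point.
            pieces.append((end - 1 - start, ts[end - 1] - ts[start]))
            start = end - 1
        else:
            end += 1
    pieces.append((len(ts) - 1 - start, ts[-1] - ts[start]))
    return np.array(pieces)

def digitize(pieces, n_symbols=4):
    """Cluster the (length, increment) pieces and label each with a letter,
    yielding a symbolic string that an LLM tokenizer can consume."""
    labels = KMeans(n_clusters=n_symbols, n_init=10).fit_predict(pieces)
    return "".join(chr(ord("a") + int(l)) for l in labels)

# Example: a noisy sine wave becomes a short symbolic string.
ts = np.sin(np.linspace(0, 6 * np.pi, 300)) + 0.05 * np.random.randn(300)
print(digitize(compress(ts, tol=0.4)))  # e.g. "abdcabdc..."
```

In LLM-ABBA these symbols are mapped onto tokens already in the LLM's vocabulary. Inverse symbolization, which stitches cluster-center pieces back into a chain to reconstruct a series, is where errors can accumulate during autoregressive prediction; the fixed polygonal chain technique mentioned above targets exactly that step.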