Unlike previous research on prompt compression for large language models (LLMs), which primarily focuses on lossy methods that sacrifice semantic information, this paper presents a task-independent, lossless compression technique similar to LZ77. On two evaluation tasks, the proposed technique reduces input token sequence lengths by 27% and 18%, respectively. Because the cost of self-attention in a transformer-based LLM grows quadratically with sequence length, these reductions translate into 47% and 33% less encoding computation, respectively. We emphasize that the token sequence transformation is easily reversible, so no semantic information is lost. We evaluate the proposed method on two tasks that require precise preservation of semantic and syntactic information, and show that existing lossy compression methods underperform in these settings. The lossless technique exhibits only a small performance gap relative to uncompressed inputs, and we expect this gap to disappear entirely with larger models and increased computational budgets.
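To make the link between length reduction and compute reduction explicit, here is a rough sanity check under the assumption (ours, not stated in the abstract) that encoding cost is dominated by the quadratic self-attention term: removing a fraction $r$ of the tokens shrinks the attention cost from $n^2$ to $\big((1-r)\,n\big)^2$, i.e., by a factor of $1-(1-r)^2$:

\[
1-(1-0.27)^2 \approx 0.47, \qquad 1-(1-0.18)^2 \approx 0.33,
\]

which is consistent with the reported 47% and 33% savings for the 27% and 18% length reductions.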