In this paper, we propose LagKV, a novel KV compression strategy that does not rely on attention weights, to address the growing key-value (KV) cache size in long-context inference for large language models. Existing attention-weight-based methods require substantial modifications to the inference infrastructure and incur significant computational overhead; LagKV instead achieves efficient compression without any attention computation, using only straightforward comparisons among the KVs themselves. On the RULER benchmark, LagKV outperforms SnapKV and StreamingLLM, and in particular it surpasses the attention-weight-based method $H_2O$ by more than 50% on the 64-digit passkey retrieval task. The source code is available on GitHub.
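The abstract only states that LagKV selects which KV entries to keep by comparing the KVs themselves rather than by attention weights. The sketch below is a minimal illustration of that general idea, not the paper's exact algorithm: the chunk size `lag`, the min/max normalization against the next ("lagging") chunk, the dispersion-based score, and the function name `compress_kv` are all illustrative assumptions.

```python
import torch

def compress_kv(keys, values, lag: int = 128, keep_ratio: float = 0.5):
    """Attention-free KV cache compression sketch (assumptions, not the paper's exact method).

    keys, values: tensors of shape [batch, num_heads, seq_len, head_dim].
    Each chunk of `lag` tokens is scored against the next chunk and only the
    highest-scoring tokens are kept; the final chunk is left uncompressed.
    """
    bsz, num_heads, seq_len, head_dim = keys.shape
    if seq_len < 2 * lag:
        # Too short to form a lagging reference chunk; leave the cache as-is.
        return keys, values

    num_chunks = seq_len // lag
    keep_per_chunk = max(1, int(lag * keep_ratio))
    kept_k, kept_v = [], []

    for c in range(num_chunks - 1):
        k_cur = keys[:, :, c * lag:(c + 1) * lag]        # chunk to compress
        v_cur = values[:, :, c * lag:(c + 1) * lag]
        k_ref = keys[:, :, (c + 1) * lag:(c + 2) * lag]  # lagging reference chunk
        v_ref = values[:, :, (c + 1) * lag:(c + 2) * lag]

        # Normalize the current chunk by the reference chunk's per-dimension range,
        # then use per-token dispersion as an attention-free importance score.
        k_min, k_max = k_ref.amin(dim=2, keepdim=True), k_ref.amax(dim=2, keepdim=True)
        v_min, v_max = v_ref.amin(dim=2, keepdim=True), v_ref.amax(dim=2, keepdim=True)
        k_norm = (k_cur - k_min) / (k_max - k_min + 1e-6)
        v_norm = (v_cur - v_min) / (v_max - v_min + 1e-6)
        score = k_norm.std(dim=-1) + v_norm.std(dim=-1)  # simple K/V combination (assumption)

        # Keep the top-scoring tokens per head, preserving their original order.
        idx = score.topk(keep_per_chunk, dim=-1).indices.sort(dim=-1).values
        idx = idx.unsqueeze(-1).expand(-1, -1, -1, head_dim)
        kept_k.append(torch.gather(k_cur, 2, idx))
        kept_v.append(torch.gather(v_cur, 2, idx))

    # Append the tail (last full chunk plus any remainder) without compression.
    tail_start = (num_chunks - 1) * lag
    kept_k.append(keys[:, :, tail_start:])
    kept_v.append(values[:, :, tail_start:])
    return torch.cat(kept_k, dim=2), torch.cat(kept_v, dim=2)
```

Because the score is computed directly from the cached keys and values, this kind of selection needs no attention matrices and can run outside the attention kernel, which is what allows such a method to avoid modifying the inference infrastructure.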