This paper examines recent research on applying large language models (LLMs) to sequential recommendation. Existing LLM-based methods fail to fully exploit the rich temporal information in users' past interaction sequences. This is because the self-attention mechanism underlying LLMs is inherently order-agnostic and relies on positional embeddings, which are designed for natural language and are less suited to user interaction sequences. To address these limitations, we propose a Counterfactual Enhanced Temporal Framework for LLM-Based Recommendation (CETRec), which isolates and measures the influence of temporal information based on principles of causal inference. Using counterfactual adjustments derived from this causal analysis, CETRec enhances the LLM's understanding of both absolute order (when a user interacted with each item) and relative order (the sequential relationships between items). We demonstrate the effectiveness of CETRec through extensive experiments on real-world datasets.
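To make the counterfactual idea concrete, the following is a minimal sketch (not the authors' implementation) of how the causal effect of temporal order can be estimated: score the same interaction history twice, once as observed (factual) and once with the temporal signal ablated (counterfactual), and take the difference. Here `score_next_item` is a hypothetical stand-in for the LLM recommender, and replacing the temporal embeddings with their sequence-wide mean is one illustrative choice of intervention, not necessarily the paper's exact one.

```python
import numpy as np

rng = np.random.default_rng(0)

def score_next_item(item_embs: np.ndarray, time_embs: np.ndarray,
                    candidate: np.ndarray) -> float:
    """Toy recommender: mean-pool the (item + time) sequence and
    dot it with the candidate item embedding."""
    history = (item_embs + time_embs).mean(axis=0)
    return float(history @ candidate)

# Factual input: item embeddings paired with their temporal embeddings.
items = rng.normal(size=(5, 16))      # 5 interacted items, dim 16
times = rng.normal(size=(5, 16))      # temporal (order) embeddings
candidate = rng.normal(size=16)       # candidate item to be scored

# Counterfactual input: same items, temporal signal ablated
# (replaced by its sequence-wide mean, destroying order information).
times_cf = np.tile(times.mean(axis=0), (5, 1))

factual_score = score_next_item(items, times, candidate)
counterfactual_score = score_next_item(items, times_cf, candidate)

# The difference estimates the total effect of temporal order
# on this recommendation score.
temporal_effect = factual_score - counterfactual_score
print(f"causal effect of temporal order: {temporal_effect:+.4f}")
```

A per-candidate effect of zero would indicate the model ignores temporal order entirely; CETRec's training objective is designed to push this effect up so the model actually uses the temporal signal.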