In this paper, we propose to apply the concept of action sequence prediction, which plays a crucial role in the success of behavior cloning algorithms, to Reinforcement Learning (RL). Motivated by the observation that incorporating action sequences when predicting the ground-truth return-to-go reduces the validation loss, we present Coarse-to-fine Q-Network with Action Sequence (CQN-AS), a novel value-based RL algorithm that trains a critic network to output Q-values over action sequences, i.e., a value function that explicitly learns the consequences of executing a series of actions. Experimental results show that CQN-AS outperforms several baseline algorithms on a variety of sparse-reward humanoid control and tabletop manipulation tasks in BiGym and RLBench.
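To make the core idea concrete, the following is a minimal, illustrative sketch of a critic that scores an entire action sequence rather than a single action; it is not the authors' implementation, and all names and architectural choices (e.g., `SequenceCritic`, `obs_dim`, `seq_len`, the MLP backbone) are assumptions made for illustration only.

```python
# Illustrative sketch only: a critic that conditions on an observation and a
# whole action sequence and outputs one Q-value for executing that sequence.
# Names and architecture are assumptions, not the paper's actual method.
import torch
import torch.nn as nn


class SequenceCritic(nn.Module):
    def __init__(self, obs_dim: int, action_dim: int, seq_len: int, hidden: int = 256):
        super().__init__()
        # Flatten the action sequence so the critic conditions on all steps jointly.
        self.net = nn.Sequential(
            nn.Linear(obs_dim + action_dim * seq_len, hidden),
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),  # Q-value for the full action sequence
        )

    def forward(self, obs: torch.Tensor, action_seq: torch.Tensor) -> torch.Tensor:
        # obs: (batch, obs_dim), action_seq: (batch, seq_len, action_dim)
        flat_actions = action_seq.flatten(start_dim=1)
        return self.net(torch.cat([obs, flat_actions], dim=-1))


# Usage: score a batch of candidate action sequences under the current observation.
critic = SequenceCritic(obs_dim=39, action_dim=8, seq_len=4)
obs = torch.randn(16, 39)
action_seq = torch.randn(16, 4, 8)
q_values = critic(obs, action_seq)  # shape: (16, 1)
```

The key design point this sketch highlights is that the value function's input includes every action in the sequence, so its output reflects the outcome of executing the whole sequence rather than a single step.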