Modern list-based recommender systems need to account for both long-term user perception and short-term shifts in user attention. To address this, we propose mccHRL, a novel framework based on hierarchical reinforcement learning that provides multiple levels of temporal abstraction for the list-based recommendation problem. The upper agent models the evolution of user perception across sessions, while the lower agent produces an item-selection policy by treating in-session recommendation as a sequential decision problem. We argue that this design cleanly decomposes the inter-session context (handled by the upper agent) from the intra-session context (handled by the lower agent). Experiments in a simulator-based environment and on industrial datasets show improved performance over several baseline models. The data and code are open source.
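To make the two-level decomposition concrete, the following is a minimal, hypothetical sketch (not the paper's actual implementation): an upper agent maintains a slowly updated user-perception state across sessions, and a lower agent conditions on that state to pick items one at a time within a session. All class names, the moving-average update, and the greedy dot-product scoring rule are illustrative assumptions.

```python
class UpperAgent:
    """Inter-session context: tracks a user-perception state updated once per session."""

    def __init__(self, dim=4):
        self.state = [0.0] * dim

    def update(self, session_feedback):
        # Hypothetical update rule: exponential moving average of session feedback,
        # capturing slow, long-term drift in user perception.
        self.state = [0.9 * s + 0.1 * f for s, f in zip(self.state, session_feedback)]
        return self.state


class LowerAgent:
    """Intra-session context: selects items sequentially, conditioned on the upper state."""

    def pick_item(self, goal_state, candidates, chosen):
        # Hypothetical greedy policy: score each unchosen candidate by the dot product
        # of its features with the goal state handed down by the upper agent.
        best = max(
            (c for c in candidates if c["id"] not in chosen),
            key=lambda c: sum(g * x for g, x in zip(goal_state, c["feat"])),
        )
        return best["id"]


def generate_list(upper, lower, candidates, k):
    """Roll out the lower agent for k steps to build one recommended list."""
    chosen = []
    for _ in range(k):
        chosen.append(lower.pick_item(upper.state, candidates, set(chosen)))
    return chosen
```

In this sketch the upper agent acts on a slower timescale (once per session) than the lower agent (once per item slot), which is the temporal abstraction the framework refers to.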