In this paper, we propose a novel method, Density-Aware Safety Perception (DASP), to address the state distribution shift problem in offline reinforcement learning. DASP encourages agents to prioritize actions whose outcomes lie in high-density regions of the data, and to return to, or remain within, the (safe) high-density region of the distribution. To this end, we optimize the objective within a variational framework that jointly considers the potential outcomes of a decision and their density, providing important contextual information for safe decision-making. We verify the effectiveness and feasibility of the proposed method through extensive experiments on the offline MuJoCo and AntMaze benchmarks.
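To make the idea concrete, the following is a minimal sketch (not the paper's exact objective) of how a variational density model over decision outcomes can be folded into an offline value update: a conditional VAE estimates the density of an action's outcome (the next state), and its negative ELBO penalizes low-density outcomes in the Bellman target. All names (`DensityVAE`, `density_aware_target`, `beta`) and the specific penalty form are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class DensityVAE(nn.Module):
    """Conditional VAE modeling p(s' | s, a); its ELBO serves as a log-density proxy."""

    def __init__(self, state_dim, action_dim, latent_dim=16, hidden=256):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Linear(2 * state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 2 * latent_dim),            # mean and log-variance of q(z | s, a, s')
        )
        self.dec = nn.Sequential(
            nn.Linear(latent_dim + state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, state_dim),                  # reconstructed next state
        )

    def elbo(self, s, a, s_next):
        mu, logvar = self.enc(torch.cat([s, a, s_next], -1)).chunk(2, -1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()          # reparameterized sample
        recon = self.dec(torch.cat([z, s, a], -1))
        rec_ll = -F.mse_loss(recon, s_next, reduction="none").sum(-1) # Gaussian reconstruction term
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)   # KL(q(z|.) || N(0, I))
        return rec_ll - kl                                 # lower bound on log p(s' | s, a)


def density_aware_target(reward, q_next, s, a, s_next, vae, gamma=0.99, beta=1.0):
    """Bellman target that down-weights outcomes the dataset rarely supports."""
    with torch.no_grad():
        penalty = (-vae.elbo(s, a, s_next)).clamp(min=0.0)  # large when s' is out-of-distribution
    return reward + gamma * q_next - beta * penalty
```

In this sketch, `beta` trades off return maximization against staying in the high-density (safe) region; the VAE itself would be trained on the offline dataset by maximizing the ELBO over observed (s, a, s') transitions.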