This paper presents a novel offline reinforcement learning framework that introduces symmetric divergences into behavior-regularized policy optimization (BRPO). Existing methods rely on asymmetric divergences, such as the KL divergence, to obtain analytic regularized policies and practical minimization objectives. We show that symmetric divergences, when used as a regularization strategy, do not admit analytic regularized policies, and when used directly as a loss, can cause numerical problems. To address these issues, we exploit the Taylor series expansion of the $f$-divergence. Specifically, we show that an analytic regularized policy can be obtained from a finite truncation of the series. For the loss, we decompose the symmetric divergence into an asymmetric term and a conditionally symmetric term, Taylor-expanding the latter to alleviate the numerical problems. Consequently, we propose Symmetric $f$ Actor-Critic (S$f$-AC), the first practical BRPO algorithm with symmetric divergences. Experiments on distribution approximation and MuJoCo confirm that S$f$-AC performs competitively.
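As an illustrative sketch (not taken from the paper), one way such a Taylor-series device can be written, assuming the standard definition $D_f(\pi \,\|\, \mu) = \mathbb{E}_{\mu}[f(\pi/\mu)]$ with $f(1)=0$, is to expand $f$ around the density ratio $\pi/\mu = 1$; the symbols $\pi$ (learned policy), $\mu$ (behavior policy), and $K$ (truncation order) are assumptions for illustration only:
% Hedged sketch: K-th order Taylor expansion of an f-divergence around the
% density ratio t = \pi(a|s)/\mu(a|s) = 1; notation is illustrative, not the paper's.
\begin{align}
  D_f(\pi \,\|\, \mu)
    &= \mathbb{E}_{a \sim \mu}\!\left[ f\!\left(\tfrac{\pi(a\mid s)}{\mu(a\mid s)}\right) \right] \\
    &\approx \sum_{k=2}^{K} \frac{f^{(k)}(1)}{k!}\,
       \mathbb{E}_{a \sim \mu}\!\left[ \left(\tfrac{\pi(a\mid s)}{\mu(a\mid s)} - 1\right)^{\!k} \right],
\end{align}
where the $k=0$ and $k=1$ terms vanish because $f(1)=0$ and $\mathbb{E}_{\mu}[\pi/\mu - 1]=0$. A finite truncation of this general form is the kind of object the abstract refers to when it states that analytic regularized policies follow from a finite series.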