This paper addresses the growing interest in average-reward formulations of reinforcement learning (RL), which can solve long-horizon problems without discounting. In the discounted setting, entropy-regularized algorithms have been developed and have demonstrated superior performance over deterministic methods. However, deep RL algorithms targeting entropy-regularized average-reward objectives have not yet been developed. To address this gap, this paper proposes an average-reward soft actor-critic algorithm. We validate our method by comparing it with existing average-reward algorithms on standard RL benchmarks, showing that it achieves superior performance under the average-reward criterion.
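For reference, and assuming standard notation that is not fixed by the abstract itself (states $s_t$, actions $a_t$, reward $r$, policy $\pi$, entropy $\mathcal{H}$, temperature $\alpha$), the entropy-regularized average-reward objective referred to above is commonly written as
\[
\rho(\pi) \;=\; \lim_{T \to \infty} \frac{1}{T}\,
\mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{T-1}
\Big( r(s_t, a_t) + \alpha\, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \Big) \right],
\]
which replaces the discounted cumulative return used in standard soft actor-critic with a long-run per-step average of reward plus policy entropy.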