In this paper, we propose SubZero, a memory-efficient optimization method for fine-tuning large-scale language models (LLMs). Existing zeroth-order optimization methods suffer from gradient-estimation variance that grows linearly with the model dimension; SubZero mitigates this problem by estimating gradients with low-dimensional perturbations. As a result, SubZero improves training performance while reducing memory consumption, and it converges faster than existing zeroth-order methods. Experiments on a range of language modeling tasks verify the effectiveness of SubZero, and we release the source code.
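
To make the idea concrete, the sketch below shows two-point zeroth-order gradient estimation with a low-dimensional (low-rank) perturbation of a single weight matrix. It is only an illustration of the general technique under simplified assumptions, not SubZero's actual algorithm; the names `zo_lowrank_step`, `loss_fn`, `rank`, `mu`, and `lr` are hypothetical placeholders chosen for this example.

```python
# Minimal sketch (not the authors' exact algorithm): SPSA-style two-point
# zeroth-order gradient estimation with a low-rank perturbation of one
# weight matrix. `loss_fn`, `rank`, `mu`, and `lr` are illustrative.
import numpy as np

def zo_lowrank_step(W, loss_fn, rank=8, mu=1e-3, lr=1e-2, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    m, n = W.shape
    # Low-dimensional perturbation: a rank-limited direction U V^T instead of
    # a full m*n Gaussian, so the random search lives in a small subspace.
    U = rng.standard_normal((m, rank)) / np.sqrt(rank)
    V = rng.standard_normal((n, rank)) / np.sqrt(rank)
    P = U @ V.T
    # Two-point finite difference approximates the directional derivative
    # of the loss along P without any backpropagation.
    g = (loss_fn(W + mu * P) - loss_fn(W - mu * P)) / (2.0 * mu)
    # The update stays in the same low-rank subspace; only U, V, and a scalar
    # need to be stored, which is where the memory savings come from.
    return W - lr * g * P

# Toy usage: run a few zeroth-order steps on a simple quadratic objective.
target = np.ones((64, 32))
W = np.zeros((64, 32))
for _ in range(100):
    W = zo_lowrank_step(W, lambda X: float(np.mean((X - target) ** 2)))
```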