This paper is a Systematization of Knowledge (SoK) paper that analyzes guardrails, a defense mechanism against jailbreak attacks that bypass the safety alignment of large language models (LLMs). To address the fragmented state of research on LLM guardrails, we present a multidimensional taxonomy with six dimensions and a Security-Efficiency-Utility evaluation framework. Through extensive analysis and experiments, we identify the strengths and weaknesses of existing guardrails, and we explore how defense mechanisms can be optimized and how well they generalize across attack types.