This paper examines the bias of guardrails in generative language models (LLMs). Specifically, we analyze the impact of user background information (age, gender, race, political affiliation, etc.) on the likelihood of LLM requests being rejected, using GPT-3.5. Our findings reveal that young female and Asian American users are more likely to be rejected when requesting prohibited or illegal information, and that guardrails tend to reject requests that contradict a user's political leanings. Furthermore, we find that even innocuous information, such as sports fandom, can infer a user's political leanings and influence guardrail activation.