This paper explores role-based access control for large language models (LLMs), which are increasingly deployed in enterprise environments. Existing security mechanisms assume uniform access permissions and focus on preventing harmful or malicious output, but they do not address role-specific access restrictions. This study investigates whether fine-tuned LLMs can generate responses that reflect the access permissions associated with different organizational roles. We explore three modeling strategies: a BERT-based classifier, an LLM-based classifier, and role-conditional generation. We evaluate model performance on two datasets: one constructed by clustering and role-labeling an existing instruction-tuning corpus, and one generated synthetically from realistic role-sensitive enterprise scenarios. We also analyze performance across different organizational structures and assess robustness to prompt injection, role mismatches, and jailbreak attempts.
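As a concrete illustration of the role-conditional generation setting described above, the following minimal Python sketch shows one way a user's role and its associated permissions could be prepended to a prompt before it is passed to a fine-tuned model. The role names, permission map, and prompt template are hypothetical examples, not the paper's actual implementation.

```python
# Illustrative sketch of role-conditioned prompting (hypothetical roles and permissions).

ROLE_PERMISSIONS = {
    "hr_manager": {"salary_data", "employee_records"},
    "engineer": {"source_code", "design_docs"},
    "intern": {"public_docs"},
}


def build_role_conditioned_prompt(role: str, user_query: str) -> str:
    """Prepend a role tag and its allowed resources so the model can condition
    its generation on the requester's access permissions."""
    allowed = ", ".join(sorted(ROLE_PERMISSIONS.get(role, set()))) or "none"
    return (
        f"[ROLE: {role}]\n"
        f"[ALLOWED RESOURCES: {allowed}]\n"
        f"User request: {user_query}\n"
        "Answer only with information this role is permitted to access; "
        "otherwise refuse and state that the request exceeds the role's permissions."
    )


if __name__ == "__main__":
    # The resulting prompt would be sent to the fine-tuned LLM in this setting.
    print(build_role_conditioned_prompt("intern", "Show me last quarter's salary report."))
```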