This paper proposes Permissioned LLMs (PermLLM), a new class of LLMs that enforce an organization's data access control structures on query responses, addressing the challenges that arise when large language models (LLMs) trained on siloed organizational data in enterprise environments serve users with diverse access privileges. We present abstractions for reasoning about the correct enforcement of access control in PermLLMs, formalize the notion of a relevant response, and introduce a novel metric, access advantage, to evaluate the effectiveness of PermLLM mechanisms. We further introduce three novel PermLLM mechanisms based on parameter-efficient fine-tuning, along with two instantiations of access advantage: the Domain Distinguishability Index (DDI), based on membership inference attacks, and the Utility Gap Index (UGI), based on LLM utility evaluation. We extensively evaluate the effectiveness of our PermLLM mechanisms, as well as of the DDI and UGI metrics, on five publicly available datasets: GPQA, RCV1, SimpleQA, WMDP, and PubMedQA.
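To make the access advantage idea concrete, the sketch below (in Python) illustrates how DDI-style and UGI-style scores could plausibly be computed. The abstract only states that DDI is based on membership inference attacks and UGI on LLM utility evaluation, so the function names, the AUC-based distinguishability score, and the mean-utility-gap formula are all illustrative assumptions, not the paper's definitions.

```python
from typing import Callable, List, Tuple


def domain_distinguishability_index(
    member_scores: List[float],      # MIA scores on samples from the protected domain
    nonmember_scores: List[float],   # MIA scores on samples outside the domain
) -> float:
    """Hypothetical DDI-style score: AUC of a membership inference attack.

    Computes the probability that a random domain (member) sample receives a
    higher attack score than a random non-domain sample, counting ties as 0.5.
    A value near 0.5 means domain data is indistinguishable to unauthorized
    queries; values near 1.0 mean the mechanism cleanly separates domains.
    This is an assumed reading of the abstract, not the authors' formula.
    """
    wins = sum(
        1.0 if m > n else 0.5 if m == n else 0.0
        for m in member_scores
        for n in nonmember_scores
    )
    return wins / (len(member_scores) * len(nonmember_scores))


def utility_gap_index(
    eval_set: List[Tuple[str, str]],              # (question, reference answer) pairs
    answer_with_access: Callable[[str], str],     # model queried with valid credentials
    answer_without_access: Callable[[str], str],  # model queried without credentials
    utility: Callable[[str, str], float],         # utility score in [0, 1], e.g. exact match
) -> float:
    """Hypothetical UGI-style score: mean utility with access minus without.

    A large positive gap suggests domain knowledge is served only to
    authorized users (high access advantage); a gap near zero suggests
    either leakage to unauthorized users or no useful domain knowledge.
    """
    n = len(eval_set)
    gain = sum(utility(answer_with_access(q), ref) for q, ref in eval_set) / n
    leak = sum(utility(answer_without_access(q), ref) for q, ref in eval_set) / n
    return gain - leak
```

Under this reading, both scores share the same interpretation scale: an ideal PermLLM mechanism would yield a DDI near 1.0 and a UGI near the authorized model's full utility, while a mechanism with no enforcement would drive both toward their indistinguishability baselines (0.5 and 0, respectively).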