This paper addresses the dual-use dilemma faced by AI safety systems: whether a request is benign often cannot be judged from the request alone, so systems issue unjustified denials or unjustified permissions, compromising usability and safety, respectively. Existing systems fail to resolve this dilemma because they lack access to real-world contextual information about the requester. This paper therefore proposes a conceptual framework based on access control, which ensures that only verified users can access dual-use outputs. We describe the components of this framework, analyze its feasibility, and explain how it addresses both over-denials and under-denials. Although this is a high-level proposal, its significance is that better tools for managing dual-use content could allow model providers to offer users more functionality without sacrificing safety, and could give regulators new options for more targeted policies.
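
To make the gating idea concrete, the following is a minimal sketch, not taken from the paper itself: all names, categories, and the keyword-based classifier are hypothetical stand-ins. It illustrates the core mechanism the abstract describes, in which a request flagged as dual-use is served only to users whose independently verified credentials cover every flagged category, so the same prompt can be answered for one user and denied for another.

```python
from dataclasses import dataclass, field

# Hypothetical dual-use categories; a real deployment would use a richer
# taxonomy and a trained classifier rather than keyword matching.
DUAL_USE_CATEGORIES = {"biosecurity", "cybersecurity"}


@dataclass
class User:
    user_id: str
    # Permissions granted after out-of-band identity/credential verification,
    # e.g. {"biosecurity"} for a vetted virology researcher.
    verified_permissions: set = field(default_factory=set)


def classify_request(prompt: str) -> set:
    """Stand-in for a dual-use classifier; returns the categories it triggers."""
    triggers = {"pathogen": "biosecurity", "exploit": "cybersecurity"}
    return {cat for word, cat in triggers.items() if word in prompt.lower()}


def generate(prompt: str) -> str:
    """Placeholder for the underlying model call."""
    return f"<model output for: {prompt!r}>"


def gate_response(user: User, prompt: str) -> str:
    """Serve dual-use output only to users with matching verified permissions."""
    categories = classify_request(prompt)
    if not categories:
        return generate(prompt)  # benign request: answer normally
    if categories <= user.verified_permissions:
        return generate(prompt)  # user is verified for every flagged category
    return "Denied: request requires verified access to " + ", ".join(sorted(categories))


# Example: the identical prompt yields different outcomes per user.
researcher = User("u1", verified_permissions={"biosecurity"})
anonymous = User("u2")
print(gate_response(researcher, "How does this pathogen spread?"))  # answered
print(gate_response(anonymous, "How does this pathogen spread?"))   # denied
```

The design point this sketch assumes, consistent with the abstract, is that verification happens outside the model, so the gate supplies the real-world context that the model itself lacks.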