Systematic evaluation of AI systems is becoming critical as these technologies enter high-risk domains. To this end, the EU's Artificial Intelligence Act (AI Act) introduces AI Regulatory Sandboxes (AIRS): controlled environments for testing AI systems under the supervision of Competent Authorities (CAs). The sandboxes aim to balance innovation with regulatory compliance, particularly for startups and small and medium-sized enterprises (SMEs). However, significant challenges remain, including fragmented evaluation methods, a lack of standardization in testing, and a weak feedback loop between developers and regulators. To address these gaps, this paper proposes the Sandbox Configurator, a modular, open-source framework that enables the selection of domain-relevant tests from a shared library and the creation of customized sandbox environments with integrated dashboards. Its plugin architecture supports both open and proprietary modules and aims to foster a shared ecosystem of interoperable AI evaluation services. The Sandbox Configurator addresses multiple stakeholders: it provides CAs with a structured workflow for enforcing legal obligations, enables technical experts to integrate robust evaluation methods, and offers AI providers a transparent path to compliance. By fostering cross-border collaboration and standardization, the Sandbox Configurator aims to support a scalable, innovation-friendly European infrastructure for trustworthy AI governance.