This paper presents a novel approach to addressing security vulnerabilities, particularly jailbreak and prompt injection, that arise when using large-scale language models (LLMs) in production environments. We highlight the limitations of existing fine-tuning and API approaches and introduce Archias, a domain-specific expert model. Archias categorizes user queries into several categories—domain-specific, malicious, price-injected, prompt-injected, and out-of-domain—and integrates these results into the LLM's prompts to generate more appropriate responses. We validate our approach by building a benchmark dataset focused on the automotive industry, and we contribute to the advancement of research by making it publicly available.