This paper focuses on security threats to large language models (LLMs), which play a crucial role in modern, AI-dominated IT environments, and addresses issues that may hinder the reliable adoption of LLMs in critical settings such as government agencies and healthcare institutions. The authors study the threat of LLM jailbreaking against the censorship mechanisms implemented in commercial LLMs: by comparing and analyzing the behavior of censored and uncensored models with an explainable AI (XAI) approach, they identify unique, exploitable alignment patterns. Building on these findings, they propose a novel jailbreaking attack, XBreaking, that exploits these patterns to break the security constraints of LLMs. Experimental results provide important insights into the censorship mechanisms and demonstrate the effectiveness and performance of the proposed attack.
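
To illustrate the comparative-analysis idea behind the attack, the sketch below contrasts per-layer hidden states of a censored and an uncensored variant of the same base model on a probing prompt and ranks the layers whose activations diverge most. This is only a simplified stand-in for intuition, not the paper's actual XAI pipeline; the checkpoint names and the prompt are placeholders, and the two variants are assumed to share a tokenizer and layer count.

```python
# Hedged sketch: locate layers where a censored and an uncensored variant of
# the same base model diverge. Checkpoint names are hypothetical placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

CENSORED = "org/base-model-aligned"        # hypothetical aligned checkpoint
UNCENSORED = "org/base-model-uncensored"   # hypothetical uncensored fine-tune

tok = AutoTokenizer.from_pretrained(CENSORED)
m_cens = AutoModelForCausalLM.from_pretrained(CENSORED, output_hidden_states=True)
m_free = AutoModelForCausalLM.from_pretrained(UNCENSORED, output_hidden_states=True)

prompt = "Describe how safety filters respond to restricted requests."  # example probe
inputs = tok(prompt, return_tensors="pt")

with torch.no_grad():
    h_cens = m_cens(**inputs).hidden_states   # tuple of (num_layers + 1) tensors [1, seq, dim]
    h_free = m_free(**inputs).hidden_states

# Rank layers by mean cosine distance between the two models' activations;
# high-distance layers are candidates for alignment-related behavior.
scores = []
for layer, (a, b) in enumerate(zip(h_cens, h_free)):
    cos = torch.nn.functional.cosine_similarity(a, b, dim=-1)  # [1, seq]
    scores.append((layer, float(1.0 - cos.mean())))

for layer, dist in sorted(scores, key=lambda s: s[1], reverse=True)[:5]:
    print(f"layer {layer:2d}  mean cosine distance {dist:.4f}")
```

In this simplified view, the layers reported at the top would be the natural targets for a subsequent manipulation step; the paper derives such targets through its XAI analysis rather than this raw activation comparison.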