This paper addresses the reliability and security concerns of large language models (LLMs), which are increasingly used in cybersecurity threat analysis. With more than 21,000 vulnerabilities disclosed in 2025 alone, manual analysis cannot keep pace, making scalable and verifiable AI support essential. LLMs, however, struggle with newly emerging vulnerabilities because of the limitations of their training data. Retrieval-Augmented Generation (RAG) can mitigate this limitation by supplying up-to-date information, but it remains unclear how much the LLM actually relies on the retrieved content and whether that content is meaningful and accurate. This uncertainty can mislead security analysts, resulting in incorrect patch prioritization and increased security risk. We therefore propose LLM Embedding-based Attribution (LEA), which analyzes generated responses for vulnerability exploitation by quantifying the relative contributions of the model's internal knowledge and the retrieved content. Using three state-of-the-art LLMs, we evaluated LEA across three RAG settings (valid, generic, and incorrect) on 500 critical vulnerabilities disclosed between 2016 and 2025. The results demonstrate that LEA detects clear distinctions among the non-discovery, generic, and valid-discovery scenarios with over 95% accuracy on a large-scale model. Finally, we demonstrate the pitfalls of retrieving incorrect vulnerability information and caution the cybersecurity community against blindly relying on LLMs and RAG for vulnerability analysis. LEA gives security analysts a metric to audit RAG-enhanced workflows, supporting the transparent and trustworthy deployment of AI in cybersecurity threat analysis.
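To make the notion of "quantifying reliance on retrieved content" concrete, the toy sketch below computes a simple attribution-style score by comparing a generated response against the retrieved advisory and against a no-retrieval baseline answer. This is an illustrative assumption, not the paper's LEA method: the embedding function, the inputs, and the reliance ratio are hypothetical stand-ins for the general idea.

import numpy as np

def toy_embed(text: str, dim: int = 256) -> np.ndarray:
    """Hash tokens into a fixed-size bag-of-words vector (toy embedding, not LEA)."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def retrieval_reliance(response: str, retrieved: str, baseline: str) -> float:
    """Score in [0, 1]: higher means the response tracks the retrieved text
    more closely than a context-free baseline response does."""
    r, d, b = toy_embed(response), toy_embed(retrieved), toy_embed(baseline)
    sim_retrieved = float(r @ d)
    sim_baseline = float(r @ b)
    total = sim_retrieved + sim_baseline
    return sim_retrieved / total if total > 0 else 0.5

# Hypothetical example: a RAG answer that echoes exploitation details from the
# retrieved advisory should score higher than one that ignores it.
rag_answer = "CVE-2024-0001 is exploited via a crafted HTTP header overflow"
advisory = "advisory: crafted HTTP header overflow enables remote code execution"
no_rag_answer = "the vulnerability may allow attackers to run arbitrary code"
print(round(retrieval_reliance(rag_answer, advisory, no_rag_answer), 3))

A score near 1 would suggest the answer leans heavily on the retrieved text, while a score near 0.5 or below suggests the model is drawing mostly on its internal knowledge; the paper's actual attribution operates on LLM embeddings rather than this toy representation.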