This paper presents the results of a study on jailbreak attacks, a security threat to large language models (LLMs), focusing on attacks that exploit the user-controlled response prefill feature rather than the prompt-level attacks primarily addressed in prior work. Prefill allows an attacker to control the beginning of the model's output, shifting the attack paradigm from persuasion-based prompting to direct manipulation of the model's generation state. We performed a black-box security analysis of 14 LLMs, classifying prefill-level jailbreak attacks and evaluating their effectiveness. Experimental results show that adaptive attack variants achieved success rates exceeding 99% on multiple models, and token-level probability analysis confirmed that manipulating the initial generation state shifts the first-token probability mass from refusal to compliance. Furthermore, we demonstrate that prefill-level attacks raise the success rates of existing prompt-level attacks by 10-15 percentage points. An evaluation of several defense strategies reveals that existing content filters offer limited protection, whereas detection methods that focus on the relationship between the prompt and the prefill content are more effective. In conclusion, we expose a vulnerability in current LLM safety alignment and emphasize the need to cover the prefill attack surface in future safety training.
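
To make the attack surface concrete, the following is a minimal sketch of the user-controlled prefill mechanism the abstract describes, using the Anthropic Messages API, which continues generation from a trailing assistant turn. The model name and message contents are illustrative placeholders, and the prefill here is benign; the point is only to show the output-side control surface, not an attack payload.

```python
# Sketch: response prefill via a trailing "assistant" message.
# Requires the `anthropic` package and an ANTHROPIC_API_KEY in the environment.
import anthropic

client = anthropic.Anthropic()

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",  # illustrative model choice
    max_tokens=256,
    messages=[
        {"role": "user", "content": "Summarize the plot of Hamlet."},
        # The final assistant turn is the prefill: the model treats this text
        # as the already-written start of its own reply and continues from it.
        # This is the initial-state manipulation studied in the paper, since
        # it directly sets the context from which the first token is sampled.
        {"role": "assistant", "content": "Sure, here is a brief summary:"},
    ],
)
print(response.content[0].text)
```

Because the prefill bypasses the prompt entirely and seeds the response itself, prompt-side content filters never see it, which is consistent with the finding that defenses examining the prompt-prefill relationship outperform content filtering alone.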