Despite the rapid adoption of large language models (LLMs) for code generation, research on how security vulnerabilities evolve through iterative LLM feedback remains limited. This paper analyzes the security degradation of AI-generated code through a controlled experiment in which 400 code samples were subjected to 40 rounds of LLM-driven "improvement" under four different prompting strategies. The study found a 37.6% increase in critical vulnerabilities after only five iterations, with distinct vulnerability patterns emerging depending on the prompting approach.
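
The experimental protocol described above amounts to an iterative feedback loop: each code sample is repeatedly handed back to the model with an improvement prompt, and every revision is scanned for vulnerabilities. The sketch below is not the paper's code; it is a minimal illustration of that loop, assuming a hypothetical `improve` callable standing in for an LLM request and a hypothetical `scan` callable standing in for a static-analysis pass.

```python
from typing import Callable, Dict, List

def iterative_refinement(
    initial_code: str,
    improve: Callable[[str, str], str],   # (prompt, code) -> revised code, e.g. an LLM call
    scan: Callable[[str], List[Dict]],    # code -> list of vulnerability findings
    prompt: str = "Improve this code.",
    rounds: int = 40,
) -> List[List[Dict]]:
    """Run `rounds` feedback iterations and record scan findings after each round."""
    code = initial_code
    history = [scan(code)]            # round 0: findings in the original sample
    for _ in range(rounds):
        code = improve(prompt, code)  # one "improvement" round
        history.append(scan(code))    # track how findings evolve per iteration
    return history

if __name__ == "__main__":
    # Stubbed components so the sketch runs without a real LLM or scanner.
    dummy_improve = lambda prompt, code: code + "\nresult = eval(user_input)"
    dummy_scan = lambda code: [{"severity": "critical"}] if "eval(" in code else []
    findings = iterative_refinement("print('hello')", dummy_improve, dummy_scan, rounds=5)
    print([len(f) for f in findings])  # critical-finding count per round
```

In the study itself, the per-round findings would be aggregated across the 400 samples and the four prompting strategies to produce figures such as the reported 37.6% rise in critical vulnerabilities after five iterations.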