This paper evaluates the effectiveness of soft token attacks (STAs) for auditing machine unlearning in large language models (LLMs). While prior work has shown that STAs can successfully extract unlearned information, we show that, in a strong auditing setting, STAs can elicit any information from an LLM, regardless of whether that information was targeted by the unlearning algorithm or even present in the original training data. Using the Who's Harry Potter? and TOFU benchmarks, we find that as few as 1-10 soft tokens suffice to make a model emit arbitrary strings of more than 400 characters. Because STA success therefore provides no evidence that the elicited information actually remains in the model, we emphasize that STAs must be deployed with caution in unlearning audits.
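
For intuition, the sketch below illustrates the general form of a soft token attack under the white-box access assumed in such audits: a handful of continuous embeddings prepended to the input are optimized by gradient descent until the model emits a chosen target string. This is a minimal sketch against a HuggingFace-style causal LM; the model name, token count, step count, and learning rate are illustrative placeholders, not values taken from the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # hypothetical stand-in for the audited (unlearned) model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()
for p in model.parameters():          # freeze the model; only soft tokens train
    p.requires_grad_(False)

num_soft_tokens = 5                   # the paper finds 1-10 suffice
target = "an arbitrary string the auditor asks the model to emit"
target_ids = tokenizer(target, return_tensors="pt").input_ids    # (1, T)
target_embeds = model.get_input_embeddings()(target_ids)         # (1, T, d)

# Soft tokens are free continuous vectors, not rows of the embedding matrix.
soft = torch.randn(1, num_soft_tokens, target_embeds.size(-1), requires_grad=True)
optimizer = torch.optim.Adam([soft], lr=1e-3)

for step in range(500):
    optimizer.zero_grad()
    inputs_embeds = torch.cat([soft, target_embeds], dim=1)
    logits = model(inputs_embeds=inputs_embeds).logits
    # Each target token is predicted from the position just before it.
    pred = logits[:, num_soft_tokens - 1 : -1, :]
    loss = torch.nn.functional.cross_entropy(
        pred.reshape(-1, pred.size(-1)), target_ids.reshape(-1)
    )
    loss.backward()
    optimizer.step()
```

After optimization, success is typically judged by whether greedy decoding conditioned on the soft tokens alone reproduces the target; since the target can be any string, including one never seen in training, such "extraction" by itself says nothing about whether unlearning failed.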