This paper demonstrates that certain machine unlearning methods are vulnerable to simple prompt attacks. We systematically evaluate eight unlearning techniques across three model families, assessing whether presumably unlearned knowledge can be recovered through output-based, logit-based, and probe analyses. While methods such as RMU and TAR exhibit robust unlearning, ELM remains vulnerable to specific prompt attacks (e.g., appending Hindi filler text to the original prompt recovers 57.3% accuracy). Logit analysis reveals that unlearned models are unlikely to hide knowledge merely through changes in answer formatting, given the strong correlation between output-based and logit-based accuracy. These results challenge prevailing assumptions about the effectiveness of unlearning and highlight the need for a reliable evaluation framework that can distinguish genuine knowledge removal from superficial output suppression. To facilitate further research, we present a framework for systematically evaluating prompting techniques that attempt to retrieve unlearned knowledge.
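For illustration, the minimal sketch below shows how a filler-text prompt attack of the kind studied here can be constructed: benign filler in another language is appended to the original multiple-choice question before querying the unlearned model. The filler string, the example question, and the `query_model` stub are hypothetical placeholders, not the exact prompts or model interface used in our experiments.

```python
# Minimal sketch of a filler-text prompt attack (hypothetical prompts and model interface).

HINDI_FILLER = "यह केवल भराव पाठ है।"  # placeholder Hindi filler ("this is only filler text")


def build_attack_prompt(question: str, choices: list[str], filler: str = HINDI_FILLER) -> str:
    """Append filler text to the original multiple-choice prompt."""
    options = "\n".join(f"{letter}. {choice}" for letter, choice in zip("ABCD", choices))
    return f"{question}\n{options}\n{filler}\nAnswer:"


def query_model(prompt: str) -> str:
    """Placeholder for a call to the unlearned model; should return its answer letter."""
    raise NotImplementedError("Replace with an actual model call.")


if __name__ == "__main__":
    prompt = build_attack_prompt(
        "Which element has the atomic number 6?",
        ["Oxygen", "Carbon", "Nitrogen", "Helium"],
    )
    print(prompt)  # inspect the attacked prompt; pass it to query_model(...) to test recovery
```

In practice, one would compare the model's accuracy on such attacked prompts against its accuracy on the original prompts to measure how much supposedly unlearned knowledge is recovered.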