This paper presents Mismatched Eraser (MEraser), a method for removing backdoor-based fingerprints from large language models (LLMs), addressing concerns about ownership verification and intellectual property protection. MEraser effectively removes backdoor-based fingerprints while maintaining model performance through a two-stage fine-tuning strategy that uses mismatched and normal datasets. Through extensive evaluations across various LLM architectures and fingerprinting methods, we demonstrate that MEraser achieves complete fingerprint removal while maintaining model performance, even with a small training dataset of fewer than 1,000 samples. Furthermore, we introduce a transferable eraser mechanism that enables effective fingerprint removal across models without repeated training. In conclusion, this paper provides a practical solution for fingerprint removal in LLMs, exposes vulnerabilities in current fingerprinting techniques, and presents comprehensive evaluation criteria for the development of more robust model protection methods.