This paper examines the vulnerabilities of fingerprinting techniques used to protect the intellectual property (IP) of large language models (LLMs) and proposes a novel fingerprinting method, Implicit Fingerprints (ImF). Existing fingerprinting techniques embed identifiable patterns with weak semantic consistency; the resulting triggers deviate from natural question-answering (QA) behavior, making them easy to detect and remove. We demonstrate these weaknesses using a novel adversarial attack, Generation Revision Intervention (GRI). ImF addresses these limitations by leveraging steganography and Chain-of-Thought (CoT) prompting to generate semantically consistent, natural QA pairs, improving both stealth and robustness. We evaluate ImF on 15 different LLMs.
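To illustrate the steganographic intuition behind semantically consistent fingerprints (not the paper's actual ImF algorithm), the sketch below hides an owner-specific bit string in natural answer text by steering between synonym choices; the `SYNONYM_PAIRS` table, `embed`, and `extract` are all hypothetical names introduced here for illustration.

```python
# Hypothetical sketch: hide a bit string inside natural-looking answer text
# by choosing between semantically equivalent words, so the fingerprint is
# recoverable while the QA pair still reads naturally.
SYNONYM_PAIRS = [("big", "large"), ("fast", "quick"), ("start", "begin")]

def embed(bits, template):
    # Fill each template slot with the synonym whose index matches bit i.
    words = [pair[b] for pair, b in zip(SYNONYM_PAIRS, bits)]
    return template.format(*words)

def extract(text):
    # Recover each bit by checking which synonym of the pair appears.
    tokens = set(text.split())
    bits = []
    for first, second in SYNONYM_PAIRS:
        if second in tokens:
            bits.append(1)
        elif first in tokens:
            bits.append(0)
    return bits

template = "A {0} model can give a {1} answer when you {2} a query."
answer = embed([1, 0, 1], template)
assert extract(answer) == [1, 0, 1]  # fingerprint survives in fluent text
```

Because the carrier sentence stays fluent, a verifier holding the synonym table can recover the bits while the text itself gives no obvious trigger pattern to detect or strip.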