This paper addresses the serious privacy and security risks posed by fingerprinting attacks on large language models (LLMs), which are increasingly deployed in sensitive environments. We study LLM fingerprinting from both offensive and defensive perspectives. On the offensive side, we present a methodology that automatically optimizes query selection using reinforcement learning; with just three queries, it achieves higher fingerprinting accuracy than randomly selecting three queries from the same pool. On the defensive side, we present a semantic-preserving output filter based on auxiliary LLMs that conceals model identity while preserving the meaning of responses; this defense reduces fingerprinting accuracy on the tested models while maintaining output quality. Together, these contributions show how fingerprinting tools can be made more effective while also providing practical mitigation strategies against fingerprinting attacks.