This paper explores model diffing, the study of how fine-tuning alters a model's representations and internal algorithms. Specifically, we use a model-diffing method called the crosscoder to track how concepts change between a base model and its fine-tuned counterpart. We analyze the shortcomings of existing crosscoders and propose two improvements: Latent Scaling and a BatchTopK training loss. Experiments demonstrate that the BatchTopK crosscoder identifies more accurate and more interpretable concepts, and is particularly effective at surfacing chat-specific concepts such as $\textit{false information}$ and $\textit{personal question}$, as well as refusal-related concepts.
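To make the setup concrete, the sketch below shows a minimal crosscoder with a BatchTopK activation in PyTorch. It follows the standard crosscoder design (per-model encoders summed into one shared dictionary of latents, with a separate decoder per model), and BatchTopK keeps only the $k$ largest activations per batch element on average, enforcing sparsity directly instead of through an L1 penalty. All identifiers here (`BatchTopKCrosscoder`, `d_model`, `d_latent`, `k`) are illustrative assumptions, not the paper's actual code.

```python
import torch
import torch.nn as nn

class BatchTopKCrosscoder(nn.Module):
    """Illustrative crosscoder with a BatchTopK activation (names hypothetical)."""

    def __init__(self, d_model: int, d_latent: int, k: int):
        super().__init__()
        self.k = k  # average number of active latents kept per input
        # One encoder per model; their outputs are summed into a shared latent space.
        self.enc_base = nn.Linear(d_model, d_latent, bias=False)
        self.enc_chat = nn.Linear(d_model, d_latent, bias=False)
        self.enc_bias = nn.Parameter(torch.zeros(d_latent))
        # One decoder per model reconstructs that model's activations from the
        # shared latents; comparing a latent's two decoder norms is what lets us
        # ask whether a concept is base-only, fine-tuned-only, or shared.
        self.dec_base = nn.Linear(d_latent, d_model)
        self.dec_chat = nn.Linear(d_latent, d_model)

    def forward(self, x_base: torch.Tensor, x_chat: torch.Tensor):
        # Shared latent pre-activations from both models' paired activations.
        acts = torch.relu(self.enc_base(x_base) + self.enc_chat(x_chat) + self.enc_bias)
        # BatchTopK: keep the (k * batch_size) largest activations across the
        # whole batch and zero out the rest, instead of penalizing with L1.
        n_keep = min(self.k * acts.shape[0], acts.numel())
        threshold = acts.flatten().topk(n_keep).values.min()
        sparse = acts * (acts >= threshold)
        return self.dec_base(sparse), self.dec_chat(sparse), sparse

# Training would minimize the summed reconstruction errors of both models, e.g.
# loss = mse(recon_base, x_base) + mse(recon_chat, x_chat)
```

Under these assumptions, latents whose decoder weights are large only in `dec_chat` are candidates for concepts introduced by fine-tuning, which is the comparison the experiments above rely on.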