This study applies a neuroscientific contrastive localization technique to identify units causally relevant to theory of mind (ToM) and mathematical reasoning in large language models (LLMs) and vision-language models (VLMs). Using contrastive stimulus sets, we localized top-activated units across 11 LLMs and 5 VLMs, ranging from 3 billion to 90 billion parameters, and assessed their causal roles through targeted ablation. We compared the effect of ablating these functionally selected units on downstream accuracy in established ToM and mathematics benchmarks against the effects of ablating low-activation and randomly selected units. Contrary to expectations, ablating low-activation units sometimes impaired performance more than ablating high-activation units, and ablating units derived from the mathematics localizer often impaired ToM performance more than ablating units derived from the ToM localizer. These results call into question the causal relevance of contrast-based localizers and highlight the need for broader stimulus sets and localization methods that more accurately capture task-specific units.
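
To make the contrast-then-ablate procedure concrete, the following is a minimal sketch of the general approach under illustrative assumptions: a toy PyTorch MLP and synthetic inputs stand in for an actual LLM/VLM and for the ToM and control stimulus sets, and all variable names and thresholds (e.g., keeping the top 5% of units) are hypothetical rather than taken from the study itself.

```python
# Minimal sketch of contrast-based localization and targeted ablation.
# A toy MLP and random tensors stand in for an LLM/VLM and its stimuli;
# names, shapes, and the 5% threshold are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

HIDDEN = 256
model = nn.Sequential(nn.Linear(64, HIDDEN), nn.ReLU(), nn.Linear(HIDDEN, 2))
model.eval()

def hidden_activations(x):
    """Return post-ReLU activations of the hidden layer for a batch of inputs."""
    acts = {}
    def hook(_, __, out):
        acts["h"] = out.detach()
    handle = model[1].register_forward_hook(hook)
    with torch.no_grad():
        model(x)
    handle.remove()
    return acts["h"]

# Contrastive stimulus sets (synthetic stand-ins for task vs. control prompts).
task_stimuli = torch.randn(128, 64) + 0.5      # "task" condition
control_stimuli = torch.randn(128, 64)         # "control" condition

# Localization: rank units by mean activation difference (task - control)
# and keep the top 5% as the putative task-selective population.
contrast = (hidden_activations(task_stimuli).mean(0)
            - hidden_activations(control_stimuli).mean(0))
k = max(1, int(0.05 * HIDDEN))
top_units = contrast.topk(k).indices           # high-activation (localized) units
low_units = (-contrast).topk(k).indices        # low-activation comparison set
rand_units = torch.randperm(HIDDEN)[:k]        # random comparison set

def ablate(units):
    """Register a forward hook that zeroes the selected hidden units."""
    def hook(_, __, out):
        out[:, units] = 0.0
        return out
    return model[1].register_forward_hook(hook)

# Compare each ablation condition's effect on the model's outputs; a real
# study would measure benchmark accuracy here rather than raw logit shifts.
eval_inputs = torch.randn(32, 64)
with torch.no_grad():
    baseline = model(eval_inputs)
    for name, units in [("top", top_units), ("low", low_units), ("random", rand_units)]:
        handle = ablate(units)
        drop = (model(eval_inputs) - baseline).abs().mean().item()
        handle.remove()
        print(f"{name:>6} ablation: mean |Δ logit| = {drop:.4f}")
```

In this sketch, the "localizer" is simply the mean activation contrast between the two stimulus sets; the study's finding corresponds to cases where the `low` or cross-task ablation condition degrades downstream performance as much as, or more than, the `top` condition.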