This paper argues that understanding the behavior of large language models (LLMs) is crucial for their safe and reliable use. However, existing explainable AI (XAI) methods rely primarily on word-level explanations, which are computationally inefficient and poorly aligned with how humans reason. Furthermore, existing approaches treat explanations as one-off outputs, overlooking the interactive and iterative nature of the explanation process. In response, we present LLM Analyzer, an interactive visualization system that enables intuitive and efficient exploration of LLM behavior through counterfactual analysis. LLM Analyzer features a novel algorithm that generates fluent, semantically meaningful counterfactuals via goal-directed elimination and substitution operations at a user-defined level of granularity. These counterfactuals are used to compute feature attribution scores and are integrated with concrete examples in table-based visualizations to support dynamic analysis of model behavior. User studies and interviews with LLM experts demonstrate the system's usability and effectiveness, underscoring the importance of involving humans in the explanation process as active participants rather than passive recipients.
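To make the counterfactual-based attribution idea concrete, the sketch below illustrates one possible reading of the approach, not the authors' actual algorithm: segments of an input prompt, at a user-chosen granularity, are removed or substituted, the resulting counterfactuals are scored by the model, and each segment's attribution is the average change in the model's output score. All names here (segment, generate_counterfactuals, attribution_scores, the toy score_fn) are hypothetical illustrations.

```python
from typing import Callable, Sequence


def segment(text: str, granularity: str = "word") -> list[str]:
    """Split the input into segments at the chosen granularity.
    Hypothetical helper; only 'word' and 'sentence' are handled, for illustration."""
    if granularity == "sentence":
        return [s.strip() + "." for s in text.split(".") if s.strip()]
    return text.split()


def generate_counterfactuals(segments: Sequence[str],
                             substitutes: dict[int, list[str]] | None = None):
    """Yield (index, counterfactual_segments) pairs produced by removing a
    segment or replacing it with a user-supplied substitute."""
    substitutes = substitutes or {}
    for i in range(len(segments)):
        # Elimination: drop segment i entirely.
        yield i, [s for j, s in enumerate(segments) if j != i]
        # Substitution: swap segment i for each candidate replacement.
        for alt in substitutes.get(i, []):
            yield i, [alt if j == i else s for j, s in enumerate(segments)]


def attribution_scores(text: str,
                       score_fn: Callable[[str], float],
                       granularity: str = "word",
                       substitutes: dict[int, list[str]] | None = None) -> list[float]:
    """Attribute the model's output to each segment as the average drop in
    score when that segment is eliminated or substituted."""
    segments = segment(text, granularity)
    base = score_fn(text)
    totals = [0.0] * len(segments)
    counts = [0] * len(segments)
    for i, cf_segments in generate_counterfactuals(segments, substitutes):
        cf_text = " ".join(cf_segments)
        totals[i] += base - score_fn(cf_text)
        counts[i] += 1
    return [t / c if c else 0.0 for t, c in zip(totals, counts)]


if __name__ == "__main__":
    # Toy stand-in for an LLM: the "model" scores 1.0 if "refund" appears.
    score_fn = lambda prompt: 1.0 if "refund" in prompt else 0.0
    text = "Please issue a refund for my last order"
    for word, s in zip(text.split(), attribution_scores(text, score_fn)):
        print(f"{word:>8s}: {s:+.2f}")
```

In this toy run, only the segment "refund" receives a nonzero attribution, since its removal is the only counterfactual that changes the model's score; a real system would query an actual LLM and, per the abstract, surface such counterfactuals interactively in table-based visualizations rather than as a static printout.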