This paper presents a comparative analysis of long-context history management strategies for large language model (LLM)-based software engineering (SWE) agents. Using various model configurations on the SWE-bench Verified dataset, we compared existing LLM-based summarization methods, such as those used by OpenHands and Cursor, with observation-masking, a strategy that simply omits earlier observations from the context. We found that observation-masking matched or slightly exceeded the problem-solving rates of LLM-based summarization methods while roughly halving cost. For example, with the Qwen3-Coder 480B model, observation-masking raised the problem-solving rate from 53.8% (with LLM summarization) to 54.8%, at a lower cost. These results suggest that, at least for SWE-agent on SWE-bench Verified, the most effective and efficient context management may also be the simplest. For reproducibility, we make our code and data available.
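To make the compared strategy concrete, the following is a minimal sketch of observation-masking, not the paper's exact implementation: all but the most recent environment observations in the agent's message history are replaced by a short placeholder, while actions and user/system messages are kept verbatim. The message schema (`role`/`content` dicts), the `keep_last` parameter, and the placeholder text are illustrative assumptions.

```python
# Hypothetical sketch of observation-masking for an LLM agent's history.
# Schema and placeholder text are assumptions, not the paper's implementation.

PLACEHOLDER = "[old observation omitted]"

def mask_observations(history, keep_last=2):
    """Return a copy of `history` in which all but the most recent
    `keep_last` observation messages are replaced by a placeholder.
    Non-observation messages (user, assistant, system) are untouched."""
    obs_indices = [i for i, m in enumerate(history) if m["role"] == "observation"]
    to_keep = set(obs_indices[-keep_last:]) if keep_last > 0 else set()
    masked = []
    for i, m in enumerate(history):
        if m["role"] == "observation" and i not in to_keep:
            masked.append({"role": "observation", "content": PLACEHOLDER})
        else:
            masked.append(dict(m))
    return masked
```

Because masking requires no extra model call, its cost advantage over LLM-based summarization follows directly: stale observations are dropped from the prompt instead of being compressed by another inference.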