To address the high cost of large language model (LLM)-based software engineering (SWE) agents caused by their long context histories, this paper compares a simple observation masking strategy against existing LLM-based summarization methods. Experiments across five model configurations show that observation masking halves the cost while maintaining a success rate similar to, or slightly higher than, that of LLM summarization. For example, with the Qwen3-Coder 480B model, observation masking improved the success rate from 53.8% to 54.8%. This suggests that the simplest approach may be the most effective and efficient way to manage context in SWE agents. The code and data are released for reproducibility.
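The abstract does not spell out the masking mechanism, but the core idea of observation masking is to replace the content of older tool observations in the agent's message history with a short placeholder, rather than summarizing them with an extra LLM call. A minimal sketch follows; the message format, the `keep_last` window, and the placeholder text are illustrative assumptions, not the authors' implementation.

```python
from typing import Dict, List

# Hypothetical placeholder; the actual masking text is not specified here.
MASK_PLACEHOLDER = "[observation omitted to save context]"

def mask_old_observations(messages: List[Dict[str, str]],
                          keep_last: int = 2) -> List[Dict[str, str]]:
    """Replace the content of all but the most recent `keep_last`
    tool observations with a short placeholder string.

    Assumes a chat-style history where tool outputs carry role == "tool";
    all other messages (system, user, assistant) are left untouched.
    """
    # Indices of observation messages, ordered oldest to newest.
    obs_indices = [i for i, m in enumerate(messages) if m["role"] == "tool"]
    to_mask = set(obs_indices[:-keep_last]) if keep_last > 0 else set(obs_indices)

    masked = []
    for i, m in enumerate(messages):
        if i in to_mask:
            masked.append({**m, "content": MASK_PLACEHOLDER})
        else:
            masked.append(m)
    return masked
```

Unlike LLM summarization, such a pass requires no additional model call when compacting the history, which is one reason it can be both simpler and cheaper.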