This paper presents a study that leverages large language models (LLMs) and retrieval-augmented generation (RAG) to address the challenges posed by long, noisy, and redundant text in electronic health records (EHRs). To work within the limited context windows of current LLMs, we use RAG to retrieve task-relevant passages from the entire EHR and apply this approach to three clinical tasks: imaging procedure extraction, antibiotic schedule generation, and major diagnosis identification. Using real-world inpatient EHR data, we evaluate three state-of-the-art LLMs under varying amounts of context. We show that RAG performs on par with or better than methods that use only the most recent records, and that it achieves performance comparable to using the full context while requiring significantly fewer input tokens. These results suggest that RAG remains a competitive and efficient approach even as models capable of handling longer contexts emerge.
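To make the retrieve-then-read idea above concrete, the following is a minimal sketch of retrieving task-relevant EHR passages before prompting an LLM. It is illustrative only: the chunking granularity, the TF-IDF retriever, the function names (retrieve_relevant_chunks, build_prompt), and the top-k setting are assumptions for this sketch, not the paper's actual pipeline, which may use a different retriever and prompt format.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity


def retrieve_relevant_chunks(ehr_chunks, task_query, top_k=5):
    """Rank EHR note chunks by lexical similarity to the task query (illustrative retriever)."""
    vectorizer = TfidfVectorizer(stop_words="english")
    chunk_matrix = vectorizer.fit_transform(ehr_chunks)      # (n_chunks, vocab)
    query_vec = vectorizer.transform([task_query])           # (1, vocab)
    scores = cosine_similarity(query_vec, chunk_matrix)[0]   # similarity per chunk
    ranked = scores.argsort()[::-1][:top_k]                  # highest-scoring chunks first
    return [ehr_chunks[i] for i in ranked]


def build_prompt(ehr_chunks, task_query, top_k=5):
    """Assemble an LLM prompt from only the retrieved passages, not the full record."""
    passages = retrieve_relevant_chunks(ehr_chunks, task_query, top_k)
    context = "\n\n".join(passages)
    return f"Relevant EHR excerpts:\n{context}\n\nTask: {task_query}"


if __name__ == "__main__":
    # Toy example: each string stands in for one chunk of an inpatient record.
    notes = [
        "Chest X-ray performed on hospital day 2; no acute findings.",
        "Dietary consult: patient tolerating regular diet.",
        "Started ceftriaxone 1 g IV daily for community-acquired pneumonia.",
    ]
    print(build_prompt(notes, "List all imaging procedures performed.", top_k=2))
```

The design point this sketch captures is the token-budget trade-off discussed above: only the retrieved excerpts are passed to the model, so the prompt stays far shorter than the full EHR while still containing the passages most relevant to the task.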