This paper presents the first comprehensive study of three key decisions in deploying Retrieval-Augmented Generation (RAG) effectively: whether to deploy RAG at all, how much information to retrieve, and how to integrate the retrieved knowledge. Through systematic experiments with three LLMs on six datasets, we find that RAG deployment should be selective; that the optimal retrieval amount is task-dependent (5-10 documents for question answering, whereas code generation requires scenario-specific tuning); and that the effectiveness of knowledge integration depends on both task and model characteristics (code generation benefits greatly from prompting, while question answering sees only small gains). We therefore argue that a one-size-fits-all RAG strategy is inadequate and that context-aware design decisions, informed by task characteristics and model capabilities, are needed.