This paper addresses the problem of data retrieval in few-shot imitation learning. Existing methods retrieve data using a single-feature distance heuristic, assuming that the best demonstrations are those most similar to the target demonstration in visual, semantic, or action space. However, any single feature captures only part of the relevant information and can introduce harmful demonstrations, for example by retrieving data from an unrelated task that happens to share a scene layout, or by selecting similar actions from tasks with different goals. In this paper, we present Collective Data Aggregation (COLLAGE), a method for few-shot imitation learning that uses an adaptive late fusion mechanism to guide the selection of relevant demonstrations through a task-specific combination of multiple cues. COLLAGE weights subsets of the dataset, each preselected using a single feature (e.g., appearance, shape, or language similarity), according to how well a policy trained on that subset predicts the actions in the target demonstration. These weights then drive importance sampling during policy training: data is sampled more or less densely according to its estimated relevance. COLLAGE is general and feature-agnostic, allowing it to combine any number of subsets selected by any retrieval heuristic and to identify which subsets benefit the target task most. In extensive experiments, COLLAGE outperforms state-of-the-art retrieval and multi-task learning methods by 5.1% across ten simulation tasks and by 16.6% on six real-world tasks, with retrieval performed from the large-scale DROID dataset.
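To make the weighting-and-sampling idea concrete, the following is a minimal sketch, not the paper's actual implementation: it assumes each retrieved subset has already been scored by the action-prediction loss of a policy trained on it, converts those losses into weights with a softmax over negative losses (a plausible choice; the paper's exact weighting scheme may differ), and then draws training samples from subsets in proportion to their weights. The function names `subset_weights` and `importance_sample` are hypothetical.

```python
import numpy as np

def subset_weights(target_losses, temperature=1.0):
    """Map per-subset action-prediction losses on the target demonstration
    to sampling weights: lower loss -> higher weight.
    (Illustrative softmax weighting, not necessarily COLLAGE's exact rule.)"""
    losses = np.asarray(target_losses, dtype=float)
    logits = -losses / temperature
    logits -= logits.max()          # numerical stability
    w = np.exp(logits)
    return w / w.sum()

def importance_sample(subsets, weights, n_samples, rng=None):
    """Draw a training batch by first picking a subset according to its
    weight, then picking a demonstration uniformly within that subset."""
    rng = np.random.default_rng(rng)
    subset_ids = rng.choice(len(subsets), size=n_samples, p=weights)
    return [subsets[i][rng.integers(len(subsets[i]))] for i in subset_ids]

# Toy usage: three preselected subsets with hypothetical target losses.
subsets = [["demo_a1", "demo_a2"], ["demo_b1"], ["demo_c1", "demo_c2"]]
w = subset_weights([0.2, 1.0, 3.0])   # subset 0 fits the target best
batch = importance_sample(subsets, w, n_samples=10, rng=0)
```

Here a lower target loss yields a higher sampling weight, so demonstrations from better-matching subsets appear more often in the training batch, while poorly matching subsets are sampled sparsely rather than discarded outright.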