With the growing popularity of LLM agents and retrieval-augmented generation (RAG), it is increasingly important to retrieve documents that are essential to solving a task, even when their connection to the task is indirect or implicit. Addressing this problem requires fine-grained reasoning to accurately assess the relevance between the task and each candidate document, a capability that remains challenging for existing IR techniques. In this paper, we propose Retro*, a novel approach for reasoning-intensive document retrieval. Retro* introduces a rubric-based relevance scoring mechanism, enabling the model to reason about the relationship between a task and a document against explicitly defined criteria and to produce fine-grained, interpretable relevance scores. Retro* further supports test-time scaling by combining the scores of multiple reasoning trajectories, yielding more reliable relevance estimates. To strengthen Retro*'s reasoning capabilities, we introduce a novel reinforcement learning algorithm tailored to its scoring mechanism, which uses two composite rewards to make full use of the trajectories of each training sample. Experimental results show that Retro* outperforms existing document retrieval methods on the BRIGHT benchmark, achieving state-of-the-art performance.
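To make the score-aggregation idea concrete, the sketch below averages rubric-based relevance scores from several independently sampled reasoning trajectories for one (task, document) pair. This is an illustrative assumption, not the paper's exact aggregation rule; the function name and the use of a simple mean are placeholders.

```python
from statistics import mean

def aggregate_relevance(trajectory_scores: list[float]) -> float:
    """Combine rubric-based relevance scores from several sampled
    reasoning trajectories into one estimate for a (task, document)
    pair. A simple mean is used here for illustration; the actual
    aggregation used by Retro* may differ."""
    if not trajectory_scores:
        raise ValueError("need at least one trajectory score")
    return mean(trajectory_scores)

# Three trajectories score the same pair; averaging smooths out the
# variance of any single rollout. Sampling more trajectories at test
# time (test-time scaling) tightens the estimate further.
scores = [0.8, 0.6, 0.7]
estimate = aggregate_relevance(scores)
```

Under this view, spending more inference compute simply means sampling more trajectories before aggregating, which is what makes the relevance estimate scale with test-time budget.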