While LLM-based passage expansion is effective in improving initial retrieval performance, its benefits often degrade in dense retrieval systems due to semantic bias and mismatch with the pre-trained semantic space. Furthermore, only a portion of a passage is typically relevant to the query, while the remainder introduces noise, and chunking techniques break co-reference continuity. In this paper, we propose Coreference-Linked Augmentation for Passage Retrieval (CLAP), a lightweight LLM-based augmentation framework that partitions passages into coherent chunks, resolves co-reference chains, and generates local pseudo-queries aligned with the dense retriever's representation space. Through a simple fusion of global topic signals with fine-grained subtopic signals, CLAP achieves robust performance across a variety of domains. CLAP's gains grow as retriever strength increases, enabling a dense retriever to match or exceed two-stage re-rankers such as BM25 + MonoT5-3B (up to a 20.68% absolute improvement in nDCG@10). These improvements are particularly pronounced in out-of-domain settings, where existing LLM-based expansion methods that rely on domain knowledge often fail. CLAP instead adopts a logic-driven pipeline, enabling robust, domain-independent generalization.
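The fusion of global and local signals described above can be sketched as follows. This is a minimal illustrative example, not the paper's implementation: the function name `clap_style_score`, the cosine-similarity scoring, the max-pooling over pseudo-query vectors, and the interpolation weight `alpha` are all assumptions made for illustration.

```python
import math

def cosine(a, b):
    # Cosine similarity between two dense embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def clap_style_score(query_vec, passage_vec, pseudo_query_vecs, alpha=0.5):
    """Illustrative fusion of a global topic signal (query vs. whole passage)
    with fine-grained subtopic signals (query vs. per-chunk pseudo-queries).
    Max-pooling over pseudo-queries and the linear interpolation with `alpha`
    are assumptions, not the method as specified in the paper."""
    global_score = cosine(query_vec, passage_vec)
    local_score = max(cosine(query_vec, pq) for pq in pseudo_query_vecs)
    return alpha * global_score + (1 - alpha) * local_score
```

A retriever could use such a fused score to rank passages, letting a chunk-level pseudo-query rescue a passage whose overall embedding is only weakly aligned with the query.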