This paper studies retrieval-augmented machine translation (RAG-MT) over unstructured documents. Whereas prior work has mainly improved LLM translation by retrieving from paired translation corpora or knowledge graphs, this paper leverages the broad world knowledge contained in unstructured documents across multiple languages. To this end, the authors build RAGtrans, a new benchmark of 169,000 machine translation samples with accompanying multilingual documents, constructed using GPT-4 and human translators. They further propose a multi-task training method that teaches LLMs to exploit information from existing multilingual corpora without requiring additional labeling. Experiments show that the proposed method yields significant BLEU and COMET gains on English-Chinese and English-German translation. Finally, the authors analyze the challenges that current LLMs face on this task.
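To make the RAG-MT setting concrete, here is a minimal sketch of the pipeline the abstract describes: retrieve the most relevant unstructured document for a source sentence, then build a translation prompt that grounds the LLM in that document. The word-overlap retriever, function names, and prompt wording are illustrative assumptions, not the paper's actual implementation.

```python
def overlap_score(query: str, doc: str) -> int:
    """Count lowercase word tokens shared by the query and a document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document with the highest word-overlap score (toy retriever)."""
    return max(docs, key=lambda d: overlap_score(query, d))

def build_prompt(source: str, docs: list[str], tgt_lang: str = "Chinese") -> str:
    """Assemble an LLM prompt pairing the source sentence with retrieved context."""
    context = retrieve(source, docs)
    return (
        f"Background document:\n{context}\n\n"
        f"Translate the following sentence into {tgt_lang}, "
        f"using the background document for terminology and facts:\n{source}"
    )

# Toy multilingual-style document store (English only here, for brevity).
docs = [
    "The James Webb Space Telescope observes in the infrared spectrum.",
    "Photosynthesis converts sunlight into chemical energy in plants.",
]
prompt = build_prompt("The telescope captured infrared images of a nebula.", docs)
print(prompt)
```

In practice the toy retriever would be replaced by a dense or BM25 retriever over cross-lingual documents, and the prompt would be sent to the translation LLM.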