This paper introduces OpenWHO, a document-level parallel corpus that addresses the lack of machine translation (MT) evaluation datasets for low-resource languages, particularly in the healthcare domain. The corpus consists of expert-authored, professionally translated materials from the World Health Organization (WHO) e-learning platform. It contains 2,978 documents and 26,824 sentences across more than 20 languages, nine of which are low-resource. Using this new resource, we evaluate state-of-the-art large language models (LLMs) against traditional MT models. Our results show that LLMs consistently outperform traditional MT models, with Gemini 2.5 Flash achieving a 4.79 ChrF point improvement over NLLB-54B on the low-resource test set. We further investigate how contextualization affects LLM translation accuracy, demonstrating the significant benefits of document-level translation in specialized domains such as healthcare. We release the OpenWHO corpus to encourage research on low-resource MT in the healthcare domain.