This paper introduces ACCeLLiuM, two open-weight large language models (LLMs) fine-tuned to generate OpenACC directives for GPU programming. ACCeLLiuM is designed to produce expert-quality OpenACC directives for data-parallel loops and is released together with the supervised fine-tuning (SFT) dataset used to train it. The ACCeLLiuM SFT dataset consists of 4,033 OpenACC pragma-loop pairs extracted from public C/C++ repositories on GitHub, on which the models were trained and tested. Experimental results show a significant performance difference in OpenACC pragma generation between the baseline LLMs and their fine-tuned ACCeLLiuM counterparts.
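For illustration, a pragma-loop pair of the kind collected in the SFT dataset might look like the following minimal sketch; the loop body here is a hypothetical example, not drawn from the dataset:

    /* Target output: the OpenACC directive annotating the loop below.
       Model input: the data-parallel loop itself. */
    #pragma acc parallel loop copyin(a[0:n], b[0:n]) copyout(c[0:n])
    for (int i = 0; i < n; ++i) {
        c[i] = a[i] + b[i];  /* element-wise vector add */
    }

In such a pair, the directive (including its data clauses) is the expert-written annotation the model learns to generate when given the bare loop.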