This paper proposes CTourLLM, a large language model (LLM) specialized in Chinese cultural tourism. To address the lack of tourism knowledge in existing LLMs, we construct a new dataset, Cultour, which consists of a tourism knowledge base, travelogue data, and tourism QA data. Using this dataset, we perform supervised fine-tuning of a Qwen-based model. To evaluate the performance of CTourLLM, we propose an evaluation criterion comprising Relevance, Readability, and Availability (RRA), and conduct both automatic and human evaluations. Experimental results show that CTourLLM outperforms ChatGPT by 1.21 points in BLEU-1 and 1.54 points in ROUGE-L. The Cultour dataset is publicly available.