This paper presents an evaluation of 11 open-source large language models (LLMs) aimed at supporting the automation of Germany's currently manual tumor registration process. Models ranging from 1 to 70 billion parameters were assessed on three basic tasks: identifying the tumor diagnosis, assigning the corresponding ICD-10 code, and extracting the date of first diagnosis. Using an annotated dataset created from anonymized urologists' notes, model performance was analyzed under several prompting strategies. Llama 3.1 8B, Mistral 7B, and Mistral NeMo 12B performed best, whereas models with fewer than 7 billion parameters performed markedly worse. Prompting with example data from non-urological medical fields improved performance considerably, suggesting that open-source LLMs hold substantial potential for automating tumor registration. We conclude that models with 7 to 12 billion parameters offer the best balance between performance and resource efficiency. The evaluation code and dataset are publicly available.