This paper presents the TinyFabulist Translation Framework (TF2), an integrated framework for dataset generation, fine-tuning, and evaluation in literary translation for the low-resource language Romanian. TF2 centers on generating and releasing a compact fine-tuned language model (TF2-12B) together with large-scale synthetic parallel datasets (DS-TF2-EN-RO-3M and DS-TF2-EN-RO-15K). Starting from an existing large-scale synthetic English fable dataset (DS-TF1-EN-3M), we generate 15,000 high-quality Romanian reference translations and fine-tune a 12-billion-parameter open-weight model using instruction fine-tuning and adapter-based compression. Evaluation combines corpus-level BLEU with a five-dimensional LLM-based rubric covering accuracy, fluency, coherence, style, and cultural adaptation. Experimental results show that the fine-tuned model achieves fluency and relevance comparable to the best-performing large-scale proprietary models, while remaining open-source, accessible, and cost-effective. The model, datasets, scripts, and evaluation prompts are all publicly available.