This project report summarizes the process of fine-tuning Stable Diffusion on a Calvin and Hobbes comics dataset. The goal is style transfer: transforming an arbitrary input image into the Calvin and Hobbes comic style. For efficient fine-tuning, we trained stable-diffusion-v1.5 with Low-Rank Adaptation (LoRA); the model performs diffusion in the latent space of a Variational Autoencoder (VAE), with a U-Net acting as the denoising network. Given the limited training time and the quality of the input data, the results are visually appealing.
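To make the setup concrete, the following is a minimal sketch (not the report's exact training script) of how LoRA adapters can be attached to the stable-diffusion-v1.5 U-Net using the Hugging Face `diffusers` and `peft` libraries; the model ID, LoRA rank, and target modules are illustrative assumptions, and the exact API may differ across library versions.

```python
from diffusers import UNet2DConditionModel
from peft import LoraConfig

# Load only the U-Net, since LoRA is applied to the denoiser;
# the VAE and text encoder remain frozen.
unet = UNet2DConditionModel.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="unet"
)

# Inject low-rank adapters into the attention projection layers.
# Rank and alpha here are assumed values, not the report's settings.
lora_config = LoraConfig(
    r=8,
    lora_alpha=8,
    init_lora_weights="gaussian",
    target_modules=["to_q", "to_k", "to_v", "to_out.0"],
)
unet.add_adapter(lora_config)

# Only the LoRA parameters are trainable, which is what keeps the
# fine-tuning lightweight compared to updating the full U-Net.
trainable = sum(p.numel() for p in unet.parameters() if p.requires_grad)
total = sum(p.numel() for p in unet.parameters())
print(f"trainable params: {trainable:,} / {total:,}")
```

The adapter-injected U-Net can then be trained with the usual latent-diffusion objective (noise prediction on VAE latents of the comic panels), while the base weights stay untouched.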