In this paper, we propose VesselSAM, an enhanced variant of the Segment Anything Model (SAM) for aortic vessel segmentation. VesselSAM improves performance by introducing the AtrousLoRA module, which integrates Atrous Attention with Low-Rank Adaptation (LoRA). Atrous Attention captures multi-scale contextual information, preserving fine-grained local detail alongside broader global context, while LoRA enables efficient fine-tuning of the frozen SAM image encoder, reducing the number of trainable parameters and the associated computational cost. Evaluated on the Aortic Vessel Tree (AVT) and Type-B Aortic Dissection (TBAD) datasets, VesselSAM achieves state-of-the-art performance (DSC above 93%) while significantly reducing computational overhead compared to existing large-scale models.
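To make the two ingredients of AtrousLoRA concrete, the sketch below shows (i) a LoRA-adapted linear layer that keeps the pretrained weights frozen and learns only a low-rank update, and (ii) a multi-rate atrous (dilated) block that mixes local and wider context. This is a minimal illustrative sketch, not the authors' implementation; the class names, ranks, dilation rates, and channel sizes are assumptions.

```python
# Illustrative sketch only (not the VesselSAM implementation):
# a LoRA-wrapped linear layer plus a multi-rate atrous context block.
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update: W x + (alpha/r) * B(A x)."""

    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 4.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)          # freeze pretrained weights
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        self.lora_A = nn.Linear(base.in_features, rank, bias=False)   # down-projection
        self.lora_B = nn.Linear(rank, base.out_features, bias=False)  # up-projection
        nn.init.zeros_(self.lora_B.weight)              # update starts at zero
        self.scale = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * self.lora_B(self.lora_A(x))


class AtrousContextBlock(nn.Module):
    """Parallel dilated 3x3 convolutions capturing multi-scale context, fused by a 1x1 conv."""

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        )
        self.fuse = nn.Conv2d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.fuse(torch.cat([branch(x) for branch in self.branches], dim=1))


if __name__ == "__main__":
    # Only the LoRA projections and the atrous block are trainable; the base layer stays frozen.
    proj = LoRALinear(nn.Linear(256, 256), rank=4)
    ctx = AtrousContextBlock(channels=256)
    tokens = torch.randn(1, 64, 256)      # (batch, tokens, dim) as in a transformer encoder
    feats = torch.randn(1, 256, 32, 32)   # (batch, channels, H, W) feature map
    print(proj(tokens).shape, ctx(feats).shape)
```

In this style of adaptation, only the small low-rank and context modules receive gradients, which is what keeps the trainable parameter count low relative to fine-tuning the full SAM encoder.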