This paper presents CiteBART, a novel generative approach to local citation recommendation (LCR) that performs citation-specific pretraining within an encoder-decoder architecture. Two variants are proposed, both of which learn to reconstruct masked author-date citation tokens. The first, CiteBART-Base, uses only the local citation context; the second, CiteBART-Global, enriches the training signal with the titles and abstracts of the citing articles. CiteBART-Global achieves state-of-the-art performance on most LCR benchmarks, with its best results on the RefSeer benchmark. The paper also examines the generalization ability and hallucination tendency of CiteBART-Global through detailed experiments and analyses.
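To make the pretraining objective concrete, the following is a minimal sketch of how a masked-citation training pair might be constructed, assuming the citation span in the local context is replaced with BART's standard `<mask>` token and the citation string itself becomes the generation target. The function and variable names are hypothetical illustrations, not taken from the CiteBART codebase.

```python
from typing import Optional, Tuple

BART_MASK = "<mask>"  # BART's standard single-span mask token


def make_pretraining_example(local_context: str,
                             citation: str,
                             title: Optional[str] = None,
                             abstract: Optional[str] = None) -> Tuple[str, str]:
    """Build one (input, target) pair for citation-mask pretraining.

    The author-date citation token inside the local context is replaced
    with the mask token, and the model is trained to generate the original
    citation string. For a Global-style variant, the citing paper's title
    and abstract are appended to enrich the input signal.
    """
    # Mask only the first occurrence of the citation span in the context.
    masked_input = local_context.replace(citation, BART_MASK, 1)
    if title and abstract:
        # Hypothetical Global-style concatenation using BART's </s> separator.
        masked_input = f"{masked_input} </s> {title} </s> {abstract}"
    return masked_input, citation


# Example usage with a toy local context.
ctx = "Pretrained encoders such as (Devlin et al., 2019) dominate NLP."
inp, tgt = make_pretraining_example(
    ctx,
    "(Devlin et al., 2019)",
    title="An example citing-paper title",
    abstract="An example citing-paper abstract.",
)
print(inp)  # "... such as <mask> dominate NLP. </s> An example ... </s> ..."
print(tgt)  # "(Devlin et al., 2019)"
```

Under this reading, the Base variant corresponds to calling the function without a title and abstract, while the Global variant appends both, so the encoder sees richer context when the decoder generates the citation.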