This paper presents a concise derivation of several influential score-based diffusion models, drawing on only a few textbook-level results. Diffusion models have recently emerged as a powerful tool for generating realistic synthetic signals (especially natural images), and they play a prominent role in state-of-the-art algorithms for inverse problems in image processing. While these algorithms are often surprisingly simple, the theory behind them is not, and the literature contains several complex theoretical justifications. In this paper, we provide a simple and largely self-contained theoretical justification of score-based diffusion models for signal processing. This approach leads to a general algorithmic template for training diffusion models and generating samples from them. We show that several influential diffusion models correspond to specific choices within this template, and that simpler alternative choices can yield similar results. The approach has the additional advantage of enabling conditional sampling without any probability approximations.