This paper presents Direct Preference Optimization (DPO) as a bridge between two major theories relevant to learning from preferences in machine learning (ML): loss functions (Savage) and stochastic choice (Doignon-Falmagne and Machina). The bridge is established for all Savage loss functions and, at this level of generality, it provides (i) support for abstention on the choice-theory side, (ii) support for nonconvex objectives on the ML side, and (iii) a way to frame notable extensions of the DPO setting for free, including margin and length modifications. Given the diverse application areas of DPO and the current interest it attracts, and given that many state-of-the-art DPO variants occupy only a small portion of the scope of this paper, it is important to understand how DPO operates from general principles. Such an understanding also helps in recognizing pitfalls and in identifying solutions that fall outside this scope.
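As a point of reference only, and not as part of the contributions above, the standard DPO objective can be read as one such loss/choice pairing, obtained from the logistic loss: for a prompt $x$ with preferred and dispreferred completions $y_w$ and $y_l$, a reference policy $\pi_{\mathrm{ref}}$, and a temperature $\beta > 0$, it is usually written as
\[
\mathcal{L}_{\mathrm{DPO}}(\theta) \;=\; -\,\mathbb{E}_{(x,\,y_w,\,y_l)}\!\left[\log \sigma\!\left(\beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\mathrm{ref}}(y_w \mid x)} \;-\; \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\mathrm{ref}}(y_l \mid x)}\right)\right],
\]
where $\sigma$ is the sigmoid. Margin and length variants typically act on the argument of $\sigma$, which is where the extensions mentioned above enter.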