This paper identifies a limitation of existing work on robust algorithms: methods are developed under structural assumptions about distribution shift without empirically verifying which shifts actually occur in practice, and it proposes an empirically grounded, data-driven alternative. The authors build an empirical testbed comprising eight tabular datasets, 172 distribution pairs, 45 methods, and 90,000 method configurations to compare Empirical Risk Minimization (ERM) against Distributionally Robust Optimization (DRO). The analysis reveals that, unlike the X (covariate) shifts typically emphasized in the ML literature, Y|X shifts are the most common in practice, and that robust algorithms do not outperform conventional methods. A closer look at DRO shows that implementation details, such as the model class and hyperparameter selection, affect performance more than the choice of uncertainty set or its radius. Finally, a case study demonstrates that a data-driven, inductive understanding of distribution shifts can inform a new approach to algorithm development.
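For orientation, the sketch below writes out the two families of objectives being compared and the factorization behind the X-shift versus Y|X-shift distinction, in generic notation; the loss $\ell$, uncertainty set $\mathcal{U}_\rho$, and radius $\rho$ are standard placeholders and not the paper's exact formulation.

$$
\text{ERM:}\quad \min_{\theta}\; \mathbb{E}_{(x,y)\sim P}\big[\ell(\theta; x, y)\big]
\qquad\qquad
\text{DRO:}\quad \min_{\theta}\; \sup_{Q \in \mathcal{U}_\rho(P)} \mathbb{E}_{(x,y)\sim Q}\big[\ell(\theta; x, y)\big]
$$

Since $P(x, y) = P(x)\,P(y \mid x)$, an X (covariate) shift changes only the marginal $P(x)$ between source and target, whereas a Y|X shift changes the conditional $P(y \mid x)$; the paper's finding is that the latter dominates in its tabular testbed.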