This paper introduces FairSHAP, a novel preprocessing framework that leverages Shapley values to improve fairness in machine learning models. FairSHAP uses an interpretable, Shapley-value-based measure of feature importance to identify the training instances that contribute to unfairness and systematically corrects them through instance-level matching across sensitive groups. This process improves individual-fairness measures such as discriminatory risk while preserving data integrity and model accuracy. Across diverse tabular datasets, FairSHAP significantly improves demographic parity and equality of opportunity, achieves these gains with minimal modification of the data, and in some cases also improves predictive performance. Because it is model-agnostic and transparent, FairSHAP integrates easily into existing machine learning pipelines and provides actionable insight into the sources of bias.
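The pipeline described above (attribute predictions to features, flag instances whose unfairness is driven by a bias-carrying feature, then repair them by matching across sensitive groups) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: it assumes a toy linear model (for which exact Shapley values reduce to `w_j * (x_ij - mean_j)`), a single hypothetical proxy feature, and a simple nearest-neighbour matching rule; all names and thresholds here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 3 features, binary sensitive attribute s (not fed to the model).
# Feature 2 acts as a proxy for s, so predictions differ across groups.
n = 200
s = rng.integers(0, 2, n)
X = rng.normal(size=(n, 3))
X[:, 2] += 2.0 * s                     # hypothetical proxy feature
w = np.array([0.5, -0.3, 1.0])         # stand-in for a trained linear model

def shap_linear(X, w):
    """Exact Shapley values for a linear model: phi_ij = w_j * (x_ij - mean_j)."""
    return (X - X.mean(axis=0)) * w

phi = shap_linear(X, w)

# Flag instances whose prediction is dominated by the proxy feature's
# attribution (largest absolute Shapley value among all features).
flagged = np.abs(phi[:, 2]) > np.abs(phi[:, :2]).max(axis=1)

# Instance-level matching: for each flagged instance, find the nearest
# neighbour in the opposite sensitive group (on the non-proxy features)
# and copy that neighbour's proxy-feature value into the training data.
X_fair = X.copy()
for i in np.where(flagged)[0]:
    other = np.where(s != s[i])[0]
    d = np.linalg.norm(X[other, :2] - X[i, :2], axis=1)
    X_fair[i, 2] = X[other[np.argmin(d)], 2]

# Mean prediction gap between groups before and after the repair.
gap_before = abs((X[s == 0] @ w).mean() - (X[s == 1] @ w).mean())
gap_after = abs((X_fair[s == 0] @ w).mean() - (X_fair[s == 1] @ w).mean())
print(f"group prediction gap: {gap_before:.3f} -> {gap_after:.3f}")
```

On this synthetic example the between-group prediction gap shrinks while only the flagged entries of one feature column change, mirroring the paper's claim that fairness improves with minimal data variation; a real application would use a proper Shapley estimator and the paper's own matching criterion.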