This paper presents the Clean-Label Physical Backdoor Attack (CLPBA), a physical backdoor attack that uses real-world objects as triggers rather than the digital patterns employed by conventional backdoor attacks. Whereas existing physical backdoor attacks require manipulating training labels, which makes them easy to detect, CLPBA implants the backdoor by adding subtle, label-preserving perturbations to a small set of training images. Framing poison generation as a dataset distillation problem, we propose three variants—Parameter Matching, Gradient Matching, and Feature Matching—to craft effective poisoned samples in both linear-probing and full fine-tuning settings. Experiments on two physical backdoor datasets we collected, for face recognition and animal classification, show that CLPBA outperforms conventional dirty-label attacks, particularly in the challenging setting where the backdoor must generalize to physical-world trigger variations.
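
To make the poison-crafting idea concrete, the following is a minimal PyTorch sketch of a gradient-matching style objective, not the paper's exact algorithm: bounded perturbations on clean, correctly labeled target-class images are optimized so that the model's gradient on them aligns with its gradient on trigger-stamped images. The names `model`, `poison_base`, `trigger_imgs`, the ε-bound, and the cosine-alignment loss are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def craft_poisons(model, poison_base, poison_labels, trigger_imgs, trigger_labels,
                  eps=16 / 255, steps=250, lr=0.01):
    """Optimize label-preserving perturbations so that gradients on the
    clean-label poisons align with gradients on trigger-stamped images."""
    delta = torch.zeros_like(poison_base, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    params = [p for p in model.parameters() if p.requires_grad]

    # Target gradient: the update the attacker wants the victim to apply
    # (trigger-stamped images classified as the target class).
    trigger_loss = F.cross_entropy(model(trigger_imgs), trigger_labels)
    target_grad = [g.detach() for g in torch.autograd.grad(trigger_loss, params)]

    for _ in range(steps):
        poison_loss = F.cross_entropy(model(poison_base + delta), poison_labels)
        poison_grad = torch.autograd.grad(poison_loss, params, create_graph=True)

        # Maximize cosine similarity between poison and target gradients.
        num = sum((pg * tg).sum() for pg, tg in zip(poison_grad, target_grad))
        denom = (sum(pg.pow(2).sum() for pg in poison_grad).sqrt()
                 * sum(tg.pow(2).sum() for tg in target_grad).sqrt())
        loss = 1 - num / denom

        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            # Keep the perturbation subtle and the images valid.
            delta.clamp_(-eps, eps)
            delta.data = (poison_base + delta).clamp(0, 1) - poison_base

    return (poison_base + delta).detach()
```

Under this framing, the poisoned images keep their original labels and remain visually close to the clean images, while steering training toward the attacker's trigger behavior.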