To address the vulnerability of object detectors to physically feasible attacks (e.g., adversarial patches and textures), this paper proposes a unified adversarial training method, Patch-Based Composite Adversarial Training (PBCAT). PBCAT optimizes the model with a combination of small-area gradient-based adversarial patches and fine-grained global adversarial perturbations covering the entire image. Unlike previous studies that focused only on defending against adversarial patch attacks, PBCAT aims to defend against a broad range of physically feasible attacks. Experiments show that PBCAT significantly improves robustness against various physical attacks over existing state-of-the-art defenses; in particular, it improves detection accuracy by 29.7% against a recently proposed adversarial texture attack.
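The core idea of combining a large-budget patch perturbation on a small masked region with a small-budget global perturbation can be sketched on a toy model. This is a minimal illustration only, not the paper's actual method: the logistic-regression surrogate, the single-step sign-gradient attack, and all budgets (`eps_patch`, `eps_global`, `lr`) are hypothetical stand-ins for the detector, attack, and hyperparameters used in PBCAT.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def composite_adv_step(w, b, x, y, mask,
                       eps_patch=0.3, eps_global=0.03, lr=0.1):
    """One composite adversarial-training step on a toy logistic-regression
    surrogate (hypothetical setup, not the paper's detector).

    x    : flattened input image in [0, 1], shape (d,)
    y    : binary label (0 or 1)
    mask : binary patch mask, shape (d,), 1 inside the patch region
    """
    # Gradient of the binary cross-entropy loss w.r.t. the input.
    p = sigmoid(w @ x + b)
    grad_x = (p - y) * w

    # Small-area patch perturbation (large budget, inside the mask)
    # combined with a fine-grained global perturbation (small budget).
    delta = mask * eps_patch * np.sign(grad_x) \
          + (1 - mask) * eps_global * np.sign(grad_x)
    x_adv = np.clip(x + delta, 0.0, 1.0)

    # Standard gradient step on the model using the adversarial example.
    p_adv = sigmoid(w @ x_adv + b)
    w = w - lr * (p_adv - y) * x_adv
    b = b - lr * (p_adv - y)
    return w, b, x_adv
```

In a full pipeline this inner step would be repeated over mini-batches, with the patch mask placed at random or gradient-selected locations; here a single step suffices to show how the two perturbation types are composed before the model update.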