This paper focuses on physically based adversarial examples caused by physical defects in the cameras of autonomous vehicles. Two real-world experiments demonstrate that glass breakage induces errors in neural network-based object detection models. A simulation-based study that models the physical process of glass breakage is then used to generate realistic, physically based adversarial examples. A finite element model (FEM)-based approach generates surface cracks in camera images by applying a stress field defined by particles within a triangular mesh, and Physically Based Rendering (PBR) techniques are used to visualize these physically plausible defects realistically. The simulated broken glass effect is applied as an image filter to open-source datasets such as KITTI and BDD100K, and the safety implications for object detection neural networks such as YOLOv8, Faster R-CNN, and Pyramid Vision Transformers are analyzed. Furthermore, the Kullback-Leibler (KL) divergence is computed between several datasets (our own footage, KITTI, and the Kaggle cat and dog datasets) to investigate the distributional impact of the visual distortion. The KL divergence analysis shows that the broken glass filter does not cause significant changes in the data distribution.
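
To make the distributional comparison concrete, the following is a minimal sketch (not the paper's actual pipeline) of how a KL divergence between an unmodified image and one with a simulated broken-glass artifact might be estimated from intensity histograms; the images, bin count, and crack overlay here are illustrative placeholders.

```python
# Minimal sketch, assuming KL divergence is estimated over normalized
# grayscale-intensity histograms of clean vs. "broken glass" frames.
import numpy as np


def intensity_histogram(image: np.ndarray, bins: int = 256) -> np.ndarray:
    """Return a normalized intensity histogram (a discrete probability distribution)."""
    hist, _ = np.histogram(image.ravel(), bins=bins, range=(0, 256))
    hist = hist.astype(np.float64) + 1e-9  # small epsilon avoids log(0) / division by zero
    return hist / hist.sum()


def kl_divergence(p: np.ndarray, q: np.ndarray) -> float:
    """Kullback-Leibler divergence D_KL(P || Q) between two discrete distributions."""
    return float(np.sum(p * np.log(p / q)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholder arrays standing in for a clean camera frame and a cracked one.
    clean = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)
    cracked = clean.copy()
    cracked[100:110, :] = 255  # crude stand-in for a bright crack artifact

    p = intensity_histogram(clean)
    q = intensity_histogram(cracked)
    print(f"D_KL(clean || cracked) = {kl_divergence(p, q):.6f}")
```

In practice, a small divergence under such a comparison would be consistent with the paper's finding that the broken glass filter leaves the overall data distribution largely unchanged, even though it degrades detector performance.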