This paper proposes the Single-Shot Decomposition Network (SSDnet) to address image outlier detection in the zero-shot setting. Unlike existing methods, SSDnet detects outliers using only the test image itself, with no training data or reference samples. Inspired by the Deep Image Prior (DIP), it assumes that natural images exhibit self-similar textures and patterns, and that outliers appear as local deviations from these repetitive or probabilistic patterns. Within a patch-based learning framework, the input image is fed directly to the network for self-reconstruction; masking, patch shuffling, and small Gaussian noise are applied to prevent the network from collapsing to a trivial identity mapping. In addition, an internal-similarity-based perceptual loss is employed to capture structure beyond pixel-level accuracy. SSDnet achieves state-of-the-art performance on the MVTec-AD and Fabric datasets.
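The input-corruption step described above (masking, patch shuffling, and small Gaussian noise) can be sketched as follows. This is a minimal illustration under assumed settings, not the paper's implementation; the helper name `corrupt_image` and all parameter values (patch size, masking/shuffling fractions, noise level) are hypothetical.

```python
import numpy as np

def corrupt_image(img, patch=8, mask_frac=0.25, shuffle_frac=0.25,
                  noise_std=0.02, seed=0):
    """Illustrative corruption of a grayscale image so a reconstruction
    network cannot learn the identity map (hypothetical helper)."""
    rng = np.random.default_rng(seed)
    h, w = img.shape
    out = img.copy()
    # Top-left corners of non-overlapping patches.
    coords = [(y, x) for y in range(0, h - patch + 1, patch)
                     for x in range(0, w - patch + 1, patch)]
    rng.shuffle(coords)
    n_mask = int(len(coords) * mask_frac)
    n_shuf = int(len(coords) * shuffle_frac)
    # 1) Masking: zero out a fraction of patches.
    for y, x in coords[:n_mask]:
        out[y:y + patch, x:x + patch] = 0.0
    # 2) Patch shuffling: permute another fraction of patches.
    shuf = coords[n_mask:n_mask + n_shuf]
    patches = [img[y:y + patch, x:x + patch].copy() for y, x in shuf]
    for (y, x), k in zip(shuf, rng.permutation(len(shuf))):
        out[y:y + patch, x:x + patch] = patches[k]
    # 3) Small Gaussian noise over the whole image.
    out += rng.normal(0.0, noise_std, out.shape)
    return out

# Toy "texture": a horizontal gradient repeated over rows.
img = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
corrupted = corrupt_image(img)
print(corrupted.shape)
```

The corrupted image would then serve as the network input, with the original image as the reconstruction target, so that learning must rely on the image's repetitive internal structure rather than per-pixel copying.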