This paper presents a novel method for detecting adversarial attacks on convolutional neural networks (CNNs) by monitoring the entropy of their activations. Unlike existing adversarial detection methods, which require model retraining, architectural modification, or accept degraded performance on benign inputs, our method detects adversarial inputs from shifts in activation entropy without modifying the model. Experiments with VGG-16 show that adversarial inputs consistently shift the activation entropy by approximately 7% in the early convolutional layers, yielding 90% detection accuracy while keeping both false positive and false negative rates below 20%. These results demonstrate that CNNs inherently encode distributional changes in their activation patterns, suggesting that activation entropy alone can serve as a reliability indicator for CNNs. This study therefore enables the practical deployment of self-diagnostic vision systems that detect adversarial inputs in real time without degrading model performance.
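To make the detection idea concrete, the following is a minimal sketch of entropy monitoring on an early VGG-16 convolutional layer, not the authors' implementation. It assumes PyTorch/torchvision, estimates Shannon entropy from a histogram of the layer's activations, and uses the monitored layer (`features[2]`) and a 7% relative-change threshold as illustrative choices; the paper reports a ~7% entropy shift but does not prescribe these specific details.

```python
# Hedged sketch of activation-entropy monitoring on VGG-16 (assumptions noted below).
import torch
import torchvision.models as models

# Pretrained VGG-16 in evaluation mode; the paper uses VGG-16, the weights source is assumed.
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1).eval()

captured = {}

def hook(_module, _inputs, output):
    # Store the activation tensor of the monitored early convolutional layer.
    captured["act"] = output.detach()

# features[2] is the second convolution in VGG-16's first block (assumed layer choice).
model.features[2].register_forward_hook(hook)

def activation_entropy(act: torch.Tensor, bins: int = 64) -> float:
    """Shannon entropy (in nats) of a histogram over all activation values.

    The histogram-based estimator and bin count are assumptions for illustration.
    """
    flat = act.flatten()
    hist = torch.histc(flat, bins=bins, min=float(flat.min()), max=float(flat.max()))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    return float(-(p * p.log()).sum())

def is_adversarial(x: torch.Tensor, baseline_entropy: float,
                   rel_threshold: float = 0.07) -> bool:
    """Flag an input whose activation entropy deviates from the clean baseline
    by more than the relative threshold (7% here, motivated by the reported shift)."""
    with torch.no_grad():
        model(x)
    h = activation_entropy(captured["act"])
    return abs(h - baseline_entropy) / baseline_entropy > rel_threshold
```

In practice, `baseline_entropy` would be estimated once from a held-out set of clean inputs (an assumed calibration step), after which each incoming image requires only a single forward pass and a histogram computation, consistent with the goal of real-time detection without modifying the model.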