With the rapid development of generative AI and synthetic media, it has become increasingly important to distinguish AI-generated images from real ones. In this paper, we propose IConMark, a new watermarking method designed to address the fragility of existing watermarking techniques. Unlike existing methods that rely on adding noise or imperceptible perturbations, IConMark embeds interpretable concepts into AI-generated images, making the watermark both human-interpretable and robust against adversarial manipulation. It withstands various image augmentations and allows humans to verify the watermark manually. IConMark maintains high detection accuracy and image quality, and it can be combined with existing watermarking techniques such as StegaStamp and TrustMark, yielding IConMark+SS and IConMark+TM, to further enhance robustness. Experimental results show that IConMark, IConMark+SS, and IConMark+TM achieve AUROC scores 10.8%, 14.5%, and 15.9% higher, respectively, than the best-performing existing techniques.
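To make the composition concrete, the following is a minimal sketch, not the authors' implementation, of how a concept-based in-generation watermark could be layered with a post-hoc watermark such as StegaStamp (the IConMark+SS variant). All function names (embed_concepts, stegastamp_encode, detect_concepts), the callable interfaces, and the detection threshold are illustrative assumptions.

```python
# Hypothetical sketch of composing a semantic (concept-based) watermark with a
# post-hoc watermark such as StegaStamp. Function names and interfaces below are
# placeholders for illustration, not the authors' actual API.

from PIL import Image


def embed_concepts(prompt: str, concepts: list[str]) -> str:
    """Append the selected watermark concepts to the generation prompt so the
    generator renders them into the image content (in-generation watermark)."""
    return prompt + ", " + ", ".join(concepts)


def watermark_image(generate, prompt: str, concepts: list[str],
                    stegastamp_encode=None) -> Image.Image:
    """Generate an image whose content carries the concept watermark, then
    optionally add a StegaStamp-style bit-string watermark on top (IConMark+SS)."""
    image = generate(embed_concepts(prompt, concepts))  # in-generation step
    if stegastamp_encode is not None:
        image = stegastamp_encode(image)                # post-hoc step
    return image


def verify(image: Image.Image, concepts: list[str], detect_concepts) -> bool:
    """Detection: check that the expected concepts are present in the image,
    e.g., via a vision-language model or manual human inspection."""
    scores = detect_concepts(image, concepts)           # per-concept scores in [0, 1]
    return all(scores[c] > 0.5 for c in concepts)       # threshold is illustrative
```

Because the concept watermark lives in the image content itself rather than in low-level pixel noise, a human can verify it directly, while the optional post-hoc layer adds a second, independent detection signal.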