This paper studies dataset auditing techniques, which address the privacy and copyright issues stemming from the lack of transparency about which datasets are used to train deep learning models. We analyze the vulnerabilities of existing dataset auditing techniques to adversarial attacks and propose a new taxonomy that categorizes them into internal-feature (IF) and external-feature (EF) based methods. Furthermore, we define two major attack types: evasion attacks, which conceal the use of a dataset, and forgery attacks, which falsely claim that an unused dataset was used. We propose systematic attack strategies for each type (separation, removal, and detection for evasion attacks; adversarial example-based methods for forgery attacks). Finally, we present a new benchmark, DATABench, comprising 17 evasion attacks, 5 forgery attacks, and 9 representative auditing techniques. Our evaluation results demonstrate that existing auditing techniques are neither sufficiently robust nor sufficiently discriminative in adversarial environments.