This paper argues that existing concepts of data protection have become inadequate given the fundamental shift in the meaning and value of data in the era of generative AI. The critical role data plays throughout the AI lifecycle highlights the need to protect multiple forms of data, including training data, prompts, and outputs. To address this, the paper proposes a taxonomy comprising four levels (unusability, privacy, traceability, and erasure) to capture the diverse data protection needs of modern generative AI models and systems. This taxonomy offers a structured view of the tradeoffs between data usability and control across the entire AI pipeline, spanning training datasets, model weights, system prompts, and AI-generated content. The paper also analyzes representative technical approaches at each level and identifies regulatory blind spots that leave critical assets exposed. Ultimately, it provides a structural framework for aligning future AI technologies and governance with trustworthy data practices, offering timely guidance to developers, researchers, and regulators alike.