This paper highlights the significant changes in the meaning and value of data in the era of generative artificial intelligence (AI) and points out the inadequacy of existing data protection concepts. Because data plays a critical role throughout the AI lifecycle, protection is required at multiple stages, spanning training data, prompts, and outputs. Accordingly, we present a four-level taxonomy of non-usability, privacy, traceability, and erasability to capture the diverse protection requirements that arise in modern generative AI models and systems. This framework provides a structural understanding of the tradeoffs between data usability and control across the entire AI pipeline, including training datasets, model weights, system prompts, and AI-generated content; within this framework, we analyze representative technical approaches at each level and identify regulatory blind spots. Ultimately, the taxonomy offers a structural basis for aligning future AI technologies and governance with trustworthy data practices, giving timely guidance to developers, researchers, and regulators.