This paper raises a central epistemological question: does AI, in particular large language models (LLMs), generate new scientific knowledge, or does it merely reassemble fragments of what it has memorized? To answer this question, the authors propose a testable protocol called "unlearning-as-ablation": remove a target result, together with all information supporting it, from the model, and then assess whether the model can re-derive the result from accepted axioms and tools. Success in re-deriving the result would demonstrate generative capability beyond memorization, while failure would expose current limitations. The paper illustrates the feasibility of the method through minimal pilot studies in mathematics and algorithms, and suggests extensions to other fields such as physics and chemistry. This is a position paper, focusing on conceptual and methodological contributions rather than empirical results.
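As a reviewer's aid, the following minimal sketch shows one way the ablate-then-re-derive protocol could be organized; it is an assumption-laden illustration, not the authors' implementation, and every name in it (`AblationTarget`, `unlearn`, `rederive`, `verify`) is a hypothetical placeholder.

```python
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass
class AblationTarget:
    """A result to erase, plus the supporting material to erase with it (hypothetical structure)."""
    result_id: str
    statement: str                # the theorem or algorithmic fact itself
    supporting_facts: list[str]   # lemmas, proofs, and citations tied to it

def unlearning_as_ablation(
    model: object,
    target: AblationTarget,
    unlearn: Callable[[object, Iterable[str]], object],  # removes the given texts' influence from the model
    rederive: Callable[[object, str], str],               # prompts the ablated model to derive the statement
    verify: Callable[[str, str], bool],                   # checks a candidate derivation against the result
) -> bool:
    """Sketch of the protocol described in the summary above.

    1. Ablate the target result and its supporting material from the model.
    2. Ask the ablated model to re-derive the result from accepted axioms and tools.
    3. Verify the derivation; success suggests generation beyond memorization.
    """
    ablated = unlearn(model, [target.statement, *target.supporting_facts])
    attempt = rederive(ablated, f"Derive the following from first principles: {target.statement}")
    return verify(attempt, target.statement)
```

In practice the `unlearn`, `rederive`, and `verify` callables would be supplied by a concrete unlearning method, a prompting harness, and a proof or test checker, respectively; the sketch only fixes the order of operations implied by the paper's description.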