This paper argues that the discussion of artificial general intelligence (AGI) resembles a Rorschach test, rife with exaggeration and speculation, and that only sustained scientific investigation can settle the debate. It defines intelligence as adaptive ability, characterizes AGI as an artificial scientist, and, drawing on Sutton's Bitter Lesson, identifies two foundational tools for building adaptive systems: search and approximation. It compares the strengths and weaknesses of systems such as o3, AlphaGo, AERA, NARS, and Hyperon, along with hybrid architectures, and sorts meta-approaches to building AGI into three categories: scale-maxing, simp-maxing, and weakness maximization (w-maxing), giving AIXI, the free energy principle, and the "embiggening" of language models as examples. It concludes that scale-maxed approximation currently dominates, but that AGI will be achieved by a fusion of these tools and meta-approaches, and notes that as hardware improvements slow, sample and energy efficiency become the bottlenecks of AGI development.
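To make the two tools concrete, here is a minimal sketch of my own (not code from the paper): search exhaustively evaluates candidate solutions against a black-box objective, while approximation fits a cheap parametric surrogate from a few samples and optimizes that instead. The toy reward function, the candidate grid, and the polynomial surrogate are all illustrative assumptions.

```python
import numpy as np

# Toy problem: find x in [0, 1] maximizing an unknown reward function,
# treated as a black box the adaptive system can only query.
def reward(x: float) -> float:
    return np.sin(3 * x) * np.exp(-x)  # arbitrary illustrative objective

# Tool 1: search -- enumerate candidates and keep the best one found.
# Cost grows with the number of candidates, but no model is assumed.
def solve_by_search(n_candidates: int = 1000) -> float:
    candidates = np.linspace(0.0, 1.0, n_candidates)
    return candidates[np.argmax([reward(c) for c in candidates])]

# Tool 2: approximation -- fit a cheap surrogate from a few samples,
# then optimize the surrogate instead of the true objective.
def solve_by_approximation(n_samples: int = 8) -> float:
    xs = np.linspace(0.0, 1.0, n_samples)
    ys = np.array([reward(x) for x in xs])
    coeffs = np.polyfit(xs, ys, deg=3)  # parametric approximation
    grid = np.linspace(0.0, 1.0, 1000)
    return grid[np.argmax(np.polyval(coeffs, grid))]

print(f"search:        x* ~ {solve_by_search():.3f}")
print(f"approximation: x* ~ {solve_by_approximation():.3f}")
```

The trade-off the Bitter Lesson highlights is visible even at this scale: search pays an evaluation cost per candidate but makes no modeling assumptions, while approximation generalizes from a handful of samples at the risk of model error.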
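The formal examples can also be made concrete. As a reference point (these are the standard formulations from the literature, not reproduced from this summary), AIXI selects actions to maximize expected future reward under a simplicity-weighted Solomonoff prior over computable environments, and the free energy principle casts adaptation as minimization of variational free energy:

```latex
% AIXI (Hutter): act to maximize expected future reward, weighting each
% candidate environment program q by the Solomonoff prior 2^{-\ell(q)},
% which favors shorter (simpler) programs; U is a universal Turing
% machine and m the planning horizon.
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m}
       \left( r_k + \cdots + r_m \right)
       \sum_{q \,:\, U(q,\, a_{1:m}) = o_{1:m} r_{1:m}} 2^{-\ell(q)}

% Free energy principle (Friston): adaptive systems minimize variational
% free energy F, an upper bound on surprise -\ln p(o), since the KL
% divergence term is non-negative.
F = \mathbb{E}_{q(s)}\!\left[ \ln q(s) - \ln p(o, s) \right]
  = D_{\mathrm{KL}}\!\left[ q(s) \,\|\, p(s \mid o) \right] - \ln p(o)
```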