GeoSAM2 is a prompt-controlled framework for part segmentation of textureless 3D objects. It renders normal and point maps from predefined viewpoints and accepts simple 2D prompts (clicks or boxes) to guide part selection. A shared SAM2 backbone, augmented with LoRA and residual geometry fusion, processes the prompts, enabling view-specific inference while preserving the pretrained priors. Predicted masks are backprojected onto the object and aggregated across views. This design enables fine-grained, part-specific control without text prompts, per-shape optimization, or full 3D labels. Unlike global clustering or scale-based methods, the prompts are explicit, spatially grounded, and interpretable. GeoSAM2 achieves state-of-the-art class-agnostic performance on PartObjaverse-Tiny and PartNetE, outperforming both slow optimization-based pipelines and fast but coarse feed-forward approaches. These results point to a new paradigm for 3D segmentation in which interactive 2D inputs increase controllability and precision in object-level part understanding, in line with the interactive paradigm of SAM2.
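The multi-view aggregation step can be illustrated with a minimal sketch. This is not the paper's implementation: the orthographic `project` function and the `mask_2d` predictor (a stand-in for the prompted SAM2 branch) are hypothetical placeholders; only the backproject-and-vote logic reflects the idea of fusing per-view masks into a 3D part label.

```python
import numpy as np

def project(points, view):
    """Orthographic projection of Nx3 points along a named axis.
    A stand-in for the paper's normal/point-map renderer (assumed)."""
    axes = {"front": (0, 1), "side": (1, 2), "top": (0, 2)}
    i, j = axes[view]
    return points[:, [i, j]]

def mask_2d(uv):
    """Hypothetical per-view 2D mask predictor (placeholder for the
    prompted SAM2 branch): selects pixels in a fixed half-plane."""
    return uv[:, 0] > 0.0

def segment(points, views):
    """Backproject each view's 2D mask onto the points and take a
    majority vote across views to get a per-point part label."""
    votes = np.zeros(len(points))
    for v in views:
        votes += mask_2d(project(points, v))
    return votes > len(views) / 2
```

Majority voting makes the final 3D label robust to a single view's prediction error, which is one motivation for aggregating masks across views rather than trusting any one rendering.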