ObjectGS is an object-aware framework that integrates semantic understanding for object-level recognition while preserving the high-quality reconstruction and real-time novel view synthesis of 3D Gaussian Splatting. Unlike conventional 3D Gaussian Splatting, which treats the scene as a whole, ObjectGS models individual objects as local anchors that generate neural Gaussians and share object IDs, enabling precise object-level reconstruction. During training, these anchors are dynamically grown or pruned, their features are optimized, and explicit semantic constraints are enforced via one-hot ID encoding and a classification loss. Experimental results show that ObjectGS outperforms state-of-the-art methods on open-vocabulary and panoptic segmentation tasks, and integrates seamlessly with applications such as mesh extraction and scene editing.
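To make the semantic constraint concrete, the sketch below illustrates the general idea of one-hot object-ID encoding combined with a cross-entropy classification loss on rendered per-pixel ID logits. This is a minimal, hypothetical illustration in numpy, not the paper's implementation: the helper names (`one_hot`, `id_classification_loss`) and the shapes are assumptions, and the actual method rasterizes ID features through the Gaussian splatting pipeline before applying the loss.

```python
import numpy as np

def one_hot(ids, num_objects):
    """One-hot encode per-Gaussian object IDs (hypothetical helper).

    ids: integer array of shape (N,) with values in [0, num_objects).
    Returns an array of shape (N, num_objects).
    """
    enc = np.zeros((len(ids), num_objects))
    enc[np.arange(len(ids)), ids] = 1.0
    return enc

def id_classification_loss(pixel_logits, gt_ids):
    """Mean cross-entropy between per-pixel ID logits and ground-truth IDs.

    pixel_logits: (P, num_objects) logits, e.g. alpha-blended ID features
                  rendered at P pixels (assumed aggregation).
    gt_ids: (P,) ground-truth object-ID labels for those pixels.
    """
    # Numerically stable softmax over the object-ID dimension.
    shifted = pixel_logits - pixel_logits.max(axis=-1, keepdims=True)
    probs = np.exp(shifted) / np.exp(shifted).sum(axis=-1, keepdims=True)
    n = len(gt_ids)
    # Negative log-likelihood of the correct object ID at each pixel.
    return -np.log(probs[np.arange(n), gt_ids] + 1e-12).mean()
```

In this framing, pixels whose rendered ID distribution disagrees with the ground-truth segmentation mask incur a large penalty, which pushes each anchor's Gaussians to stay consistent with a single object identity.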