Global geolocation aims to determine the precise geographic location of images captured anywhere in the world using cues such as climate, landmarks, and architectural style. Despite advances in geolocation models such as GeoCLIP, the interpretability of these models remains largely unexplored. Existing concept-based interpretability methods are not designed around the objective of geo-aligned image-to-location embeddings, so applying them directly yields suboptimal interpretability and performance. To address this gap, this paper proposes a novel framework that integrates global geolocation with concept bottlenecks. The proposed method jointly projects image and location embeddings onto a shared bank of geographic concepts (e.g., tropical climate, mountains, cathedrals) and introduces a concept-aware alignment module that minimizes a concept-level loss, strengthening alignment within concept-specific subspaces and enabling robust interpretability. To our knowledge, this is the first study to bring concept-level interpretability to geolocation. Extensive experiments show that the proposed approach outperforms GeoCLIP in geolocation accuracy and improves performance across a variety of geospatial prediction tasks, while providing richer semantic insight into geographic decision-making.
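To make the concept-bottleneck idea concrete, the sketch below shows one plausible way to project image and location embeddings onto a shared concept bank and add a concept-level alignment term on top of a GeoCLIP-style contrastive loss. All names, dimensions, and the specific loss choices (InfoNCE on the raw embeddings plus an MSE term on concept activations) are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of concept-bottleneck alignment for geolocation.
# Assumption: image/location encoders (as in GeoCLIP) already produce embeddings;
# only the shared concept bank and the concept-level loss are shown here.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptAlignedGeoModel(nn.Module):
    def __init__(self, embed_dim: int = 512, num_concepts: int = 128):
        super().__init__()
        # Shared bank of geographic concept vectors (e.g., "tropical climate",
        # "mountains", "cathedrals"), one row per concept.
        self.concept_bank = nn.Parameter(torch.randn(num_concepts, embed_dim) * 0.02)
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # ~ log(1 / 0.07)

    def concept_scores(self, embeddings: torch.Tensor) -> torch.Tensor:
        # Project embeddings onto the concept bank via cosine similarity,
        # giving a per-concept activation vector for each sample.
        emb = F.normalize(embeddings, dim=-1)
        bank = F.normalize(self.concept_bank, dim=-1)
        return emb @ bank.t()  # (batch, num_concepts)

    def forward(self, img_emb: torch.Tensor, loc_emb: torch.Tensor):
        img_emb = F.normalize(img_emb, dim=-1)
        loc_emb = F.normalize(loc_emb, dim=-1)

        # Standard image-to-location contrastive (InfoNCE) loss, GeoCLIP-style.
        logits = self.logit_scale.exp() * img_emb @ loc_emb.t()
        targets = torch.arange(img_emb.size(0), device=img_emb.device)
        contrastive = 0.5 * (F.cross_entropy(logits, targets) +
                             F.cross_entropy(logits.t(), targets))

        # Concept-level alignment: a matching image/location pair should
        # activate the same geographic concepts.
        img_concepts = self.concept_scores(img_emb)
        loc_concepts = self.concept_scores(loc_emb)
        concept_loss = F.mse_loss(img_concepts, loc_concepts)

        # img_concepts doubles as a per-concept attribution for interpretability.
        return contrastive + concept_loss, img_concepts
```

Under this reading, interpretability comes for free at inference time: the per-image concept activations indicate which geographic concepts (climate, terrain, architecture) drove a given location prediction.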