This paper addresses unsupervised domain adaptation, which transfers model knowledge across domains without target labels. Existing methods struggle to balance learning domain-invariant representations with preserving domain-specific features. To address this, we present a novel approach that aligns the relative positions of equivalent concepts in latent space, rather than relying on absolute coordinate alignment. Concretely, we define a domain-agnostic reference structure of semantic/geometric relationships between class labels in language space and induce the organization of samples in visual space to reflect these reference inter-class relationships, thereby preserving domain-specific features. Our method achieves strong performance on domain adaptation tasks across four image and video datasets, with average class-accuracy improvements of +3.32% on DomainNet, +5.75% on GeoPlaces, +4.77% on GeoImnet, and +1.94% on EgoExo4D.
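The core idea of relative-position alignment can be sketched as follows. This is a minimal illustration, not the paper's actual objective: the function names, the use of class prototypes, and the mean-squared penalty are assumptions chosen for clarity; the reference structure is the pairwise cosine-similarity matrix of class-label embeddings in language space, and the visual features are encouraged to reproduce that structure rather than any absolute coordinates.

```python
import numpy as np

def cosine_sim_matrix(X):
    # Row-normalize, then compute all pairwise cosine similarities.
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    return Xn @ Xn.T

def relational_alignment_loss(text_emb, visual_protos):
    # Domain-agnostic reference structure from language space:
    # inter-class similarities of class-label embeddings.
    ref = cosine_sim_matrix(text_emb)
    # Structure currently induced by visual class prototypes.
    vis = cosine_sim_matrix(visual_protos)
    # Penalize mismatch of relative positions (the similarity
    # pattern), not of absolute coordinates.
    return float(np.mean((ref - vis) ** 2))
```

Because only relative structure is compared, the loss is invariant to rotations of the visual space: visual prototypes that are a rotated copy of the label embeddings incur zero loss, which is exactly the freedom that lets each domain keep its own coordinate frame.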