Drawing on a usage-based constructionist (UCx) approach, this study investigates whether the internal representations of large language models (LLMs) reflect feature-rich, hierarchical structure. Using the Pythia-1.4B model, we analyze representations of the English double-object (DO) and prepositional-object (PO) constructions, leveraging a dataset of 5,000 sentence pairs whose human-rated preference for DO or PO varies systematically. Geometric analysis reveals that the separability of the two constructions' representations, as measured by energy distance or Jensen-Shannon divergence, is systematically modulated by gradient preference strength: more typical exemplars of each construction occupy more distinct regions of the activation space, whereas sentences that are roughly equally acceptable in either construction do not. These results provide evidence that LLMs learn rich, hierarchical representations of constructions and support geometric approaches to measuring LLM representations.
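
As a minimal sketch of the kind of separability measure referenced above, the following computes the energy distance between two sets of hidden-state vectors (e.g., DO-preferring vs. PO-preferring sentence representations). This is an illustrative assumption about the computation, not the paper's actual pipeline; the function name, the use of SciPy, and the random stand-in activations are all hypothetical.

```python
# Illustrative sketch: energy distance between two samples of
# hidden-state vectors, e.g. DO- vs. PO-preferring sentences
# extracted from a model such as Pythia-1.4B. Not the paper's code.
import numpy as np
from scipy.spatial.distance import cdist


def energy_distance(X: np.ndarray, Y: np.ndarray) -> float:
    """Energy distance between samples X (n, d) and Y (m, d).

    E(X, Y) = 2*E||x - y|| - E||x - x'|| - E||y - y'||, with each
    expectation estimated by the mean of pairwise Euclidean distances.
    """
    xy = cdist(X, Y).mean()  # mean cross-sample distance
    xx = cdist(X, X).mean()  # mean within-X distance (V-statistic form)
    yy = cdist(Y, Y).mean()  # mean within-Y distance
    return 2.0 * xy - xx - yy


# Hypothetical usage with random stand-in activations of Pythia-1.4B's
# hidden size (2048); real inputs would be layer activations for
# strongly DO-biased vs. strongly PO-biased sentences.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    do_reps = rng.normal(loc=0.0, size=(100, 2048))
    po_reps = rng.normal(loc=0.5, size=(100, 2048))
    print(f"energy distance: {energy_distance(do_reps, po_reps):.4f}")
```

Under this reading, a larger energy distance for strongly biased sentence pairs than for equi-biased pairs would correspond to the reported modulation of separability by gradient preference strength.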