This paper presents a study of how and where personas, defined as sets of unique human traits, values, and beliefs, are encoded in the representation space of large language models (LLMs). Using various dimensionality reduction and pattern recognition methods, we first identify the model layers that exhibit the greatest variation in how these personas are encoded. We then analyze the activations within these selected layers to examine how specific personas are encoded relative to one another, identifying both shared and distinct regions of the embedding space. We find that, across multiple pre-trained decoder-only LLMs, personas exhibit significant differences in representation space only within the last third of the decoder layers. Overlapping activations are observed for certain ethical perspectives, such as moral nihilism and utilitarianism, suggesting that these perspectives are encoded ambiguously. In contrast, political ideologies, such as conservatism and liberalism, appear to be represented in more distinct regions. These findings deepen our understanding of how LLMs internally represent information and can inform future efforts to modulate specific human traits in LLM output. Caution: This paper contains potentially offensive sample sentences.