This paper addresses concerns about the tendency of large language models (LLMs) to encode and reproduce political and economic ideological biases. We present a framework for investigating and mitigating these biases in decoder-based LLMs, using contrastive prompt pairs derived from the Political Compass Test (PCT) to extract and compare hidden-layer activations from models such as Mistral and DeepSeek. We introduce a comprehensive activation-extraction pipeline that supports layer-by-layer analysis across multiple ideological axes, revealing meaningful differences in political framing. We demonstrate that decoder LLMs systematically encode representational biases across layers, and that these representations can be leveraged for effective steering-vector-based mitigation. Moving beyond superficial output interventions, this principled approach to debiasing offers new insights into how political biases are encoded in LLMs.
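The contrastive-pair steering idea summarized above can be sketched in a few lines; the array shapes, function names, and toy data below are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def steering_vector(acts_a, acts_b):
    """Mean-difference steering vector between two sets of hidden-layer
    activations (e.g. from contrastive PCT prompt pairs), one row per prompt."""
    return np.mean(acts_a, axis=0) - np.mean(acts_b, axis=0)

def apply_steering(hidden, vec, alpha=1.0):
    """Shift a hidden state along the steering direction; a negative
    alpha steers the representation away from the encoded bias."""
    return hidden + alpha * vec

# Toy illustration with random "activations" in a 4-dimensional hidden space.
rng = np.random.default_rng(0)
acts_left = rng.normal(0.5, 0.1, size=(8, 4))    # hypothetical left-framed prompts
acts_right = rng.normal(-0.5, 0.1, size=(8, 4))  # hypothetical right-framed prompts

vec = steering_vector(acts_left, acts_right)
steered = apply_steering(acts_left[0], vec, alpha=-1.0)
```

In practice the activations would come from a chosen hidden layer of the decoder at inference time, and the scaling factor would be tuned per layer; this sketch only shows the mean-difference construction common to steering-vector methods.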