Large language models have simplified the generation of personalized translations that reflect predefined stylistic constraints, but they still struggle when style requirements are expressed only implicitly through a set of examples, such as texts produced by specific human translators. This study focuses on the challenging domain of literary translation, exploring various strategies for personalizing automatically generated translations when only a small number of examples is available. We first examine how style information is encoded in the model's representations and assess the feasibility of the task. We then evaluate various prompting strategies and inference-time interventions to guide model generation toward a personalized style, focusing in particular on contrastive steering with sparse autoencoder (SAE) latents to identify attributes that carry personal style. We demonstrate that contrastive SAE steering provides robust style conditioning while preserving translation quality, at a lower inference-time computational cost than prompting methods. Furthermore, we investigate the impact of steering on model activations, showing that the layers encoding personalized attributes are affected similarly by prompting and by SAE steering, which suggests that similar mechanisms are at play.
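The contrastive SAE steering idea mentioned above can be sketched in a few lines. This is a minimal toy illustration, not the paper's implementation: the SAE weights, dimensions, and activations below are random stand-ins, and in practice the activations would come from a model forward pass over styled and neutral translations. The sketch selects the SAE latents whose mean activation differs most between the two sets and builds a steering vector from the corresponding decoder directions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (assumptions; real models use d_model in the thousands
# and SAE dictionaries with tens of thousands of latents).
d_model, d_sae = 16, 64

# Hypothetical pretrained SAE weights: encoder W_enc, decoder W_dec.
W_enc = rng.normal(size=(d_model, d_sae))
W_dec = rng.normal(size=(d_sae, d_model))

def sae_encode(acts):
    """ReLU SAE latents for a batch of residual-stream activations."""
    return np.maximum(acts @ W_enc, 0.0)

# Stand-ins for activations from translator-styled vs. neutral examples.
styled_acts = rng.normal(loc=0.5, size=(8, d_model))
neutral_acts = rng.normal(loc=0.0, size=(8, d_model))

# Contrastive selection: latents whose mean activation differs most
# between the styled and neutral sets are taken to encode the style.
diff = sae_encode(styled_acts).mean(axis=0) - sae_encode(neutral_acts).mean(axis=0)
top_latents = np.argsort(-np.abs(diff))[:4]

# Steering vector: the selected decoder directions, scaled by the
# contrastive activation gap on each latent.
steering = diff[top_latents] @ W_dec[top_latents]

def steer(acts, alpha=1.0):
    """Add the style direction to activations at inference time."""
    return acts + alpha * steering

steered = steer(neutral_acts)
print(steered.shape)  # (8, 16)
```

At inference time, only the single cached steering vector is added to the residual stream, which is why this kind of intervention avoids the extra context tokens that few-shot prompting requires.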