To enhance the interpretability of large language models (LLMs), we applied sparse autoencoders (SAEs) to a GPT-based style-transfer model trained on Jane Austen's novels. We analyzed the structure, themes, and biases in the model's learned representations and in its training data, and discovered interpretable features reflecting core narrative elements and concepts such as gender, class, and social obligation.
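To make the SAE method concrete, the following is a minimal sketch of a sparse autoencoder applied to a batch of model activations. It assumes the standard formulation (a ReLU encoder producing sparse feature activations, a linear decoder reconstructing the input, and an L1 sparsity penalty); all dimensions, weights, and names here are illustrative, not the configuration used in this work.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: activation width and (overcomplete) feature count
d_model, d_feat = 8, 32

# Randomly initialized SAE parameters (for illustration only)
W_enc = rng.normal(0.0, 0.1, size=(d_model, d_feat))
b_enc = np.zeros(d_feat)
W_dec = rng.normal(0.0, 0.1, size=(d_feat, d_model))
b_dec = np.zeros(d_model)

def sae_forward(x):
    """Encode activations into sparse features, then reconstruct."""
    # ReLU yields non-negative, sparse feature activations
    f = np.maximum(0.0, x @ W_enc + b_enc)
    # Linear decoder reconstructs the original activation vector
    x_hat = f @ W_dec + b_dec
    return f, x_hat

def sae_loss(x, f, x_hat, l1_coeff=1e-3):
    """Reconstruction error plus an L1 penalty that encourages sparsity."""
    recon = np.mean((x - x_hat) ** 2)
    sparsity = l1_coeff * np.abs(f).sum(axis=-1).mean()
    return recon + sparsity

# A batch of 4 stand-in activation vectors from the language model
x = rng.normal(size=(4, d_model))
f, x_hat = sae_forward(x)
loss = sae_loss(x, f, x_hat)
print(f.shape, x_hat.shape, float(loss))
```

In practice the SAE is trained by minimizing this loss over many stored activations; the rows of the decoder matrix are then interpreted as feature directions, which is what allows individual features (e.g., ones tracking gender or social obligation) to be inspected.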