This paper presents Babel, a novel framework for improving style preservation in neural machine translation (NMT). Unlike existing style-preserving approaches, which require parallel corpora, Babel relies only on a monolingual corpus. Babel consists of two main components: a style detector that identifies style inconsistencies based on contextual embeddings, and a diffusion-based style applier that corrects those inconsistencies while maintaining semantic integrity. Babel can be integrated as a post-processing module into existing NMT systems, enabling style-aware translation without architectural changes or parallel style data. Extensive experiments across five domains (law, literature, scientific papers, medicine, and educational content) show that Babel identifies style inconsistencies with 88.21% precision and improves style preservation by 150% while maintaining a high semantic similarity score of 0.92. Human evaluation further confirms that Babel-enhanced translations better preserve the style of the source text while remaining fluent and appropriate.
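To make the detect-then-correct flow concrete, the sketch below shows one plausible way such a post-processing wrapper could sit on top of an unmodified NMT system: a detector flags sentences whose contextual embeddings deviate from a style reference estimated on a monolingual in-domain corpus, and a style applier rewrites only the flagged sentences. All names here (`StyleDetector`, `StyleApplier`, `embed`, `rewrite`, the threshold value) are hypothetical illustrations and not Babel's actual API; the contextual encoder and the diffusion-based applier are stubbed out as user-supplied callables.

```python
"""Minimal sketch of a detect-then-correct post-processing pipeline,
assuming user-supplied embedding and rewriting functions."""

from dataclasses import dataclass
from typing import Callable, List

import numpy as np


def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))


@dataclass
class StyleDetector:
    """Flags translated sentences whose contextual embedding lies far from a
    style centroid estimated from a monolingual, in-domain corpus (assumption)."""
    embed: Callable[[str], np.ndarray]   # contextual sentence encoder (assumed)
    style_centroid: np.ndarray           # mean embedding of in-domain sentences
    threshold: float = 0.7               # illustrative cutoff, not from the paper

    def is_inconsistent(self, sentence: str) -> bool:
        return cosine(self.embed(sentence), self.style_centroid) < self.threshold


@dataclass
class StyleApplier:
    """Stand-in for the diffusion-based style applier: rewrites a sentence
    toward the target style while aiming to preserve its meaning."""
    rewrite: Callable[[str], str]        # e.g. a style-conditioned generator (assumed)

    def apply(self, sentence: str) -> str:
        return self.rewrite(sentence)


def babel_postprocess(translations: List[str],
                      detector: StyleDetector,
                      applier: StyleApplier) -> List[str]:
    """Post-processing pass: leave style-consistent sentences untouched and
    rewrite only the flagged ones, so the NMT system itself is unchanged."""
    return [applier.apply(s) if detector.is_inconsistent(s) else s
            for s in translations]
```

Framing the method as a wrapper of this kind is what allows style-aware translation without touching the NMT model's architecture or requiring parallel style data, as stated above.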