While Transformer-based time series models excel at predicting regular temporal patterns, it remains unclear whether they internalize semantic concepts, such as market conditions, or merely fit curves. A further question is whether their internal representations can be leveraged to simulate rare, high-risk events such as market crashes. To address these questions, this paper introduces activation transplantation, a causal intervention technique that manipulates hidden states during a forward pass by imposing the statistical moments of one event (e.g., a historical crash) onto another (e.g., a calm period). This procedure deterministically steers the prediction: injecting crash semantics induces a downward forecast, while injecting calm semantics suppresses the crash and restores stability. Beyond such binary control, we find that the model encodes a graded notion of event severity, with the norm of the latent vector correlating directly with the magnitude of the simulated shock. Validated on two architectures (Toto and Chronos), our technique demonstrates that semantically grounded, directly manipulable representations are a powerful property of large-scale time series transformers.
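
The core intervention can be illustrated with a minimal sketch. The snippet below is a hypothetical illustration only, assuming that "statistical moments" refers to the per-feature mean and standard deviation of a layer's hidden states; the function name `transplant_activations`, the layer choice, and the toy data are not from the paper.

```python
import numpy as np

def transplant_activations(target_h, source_h, eps=1e-6):
    """Impose the per-feature mean/std of `source_h` (e.g., a crash event)
    onto `target_h` (e.g., a calm period) via moment matching.
    Shapes: (time_steps, hidden_dim). Hypothetical sketch, not the
    paper's exact procedure."""
    t_mu, t_sigma = target_h.mean(axis=0), target_h.std(axis=0)
    s_mu, s_sigma = source_h.mean(axis=0), source_h.std(axis=0)
    # Whiten the target activations, then rescale with source statistics.
    normalized = (target_h - t_mu) / (t_sigma + eps)
    return normalized * s_sigma + s_mu

# Toy data: "calm" hidden states receive "crash" statistics.
rng = np.random.default_rng(0)
calm = rng.normal(0.0, 0.5, size=(16, 8))    # (time steps, hidden dim)
crash = rng.normal(-2.0, 1.5, size=(16, 8))
out = transplant_activations(calm, crash)
```

In a real forward pass, such a function would be attached as a hook on an intermediate layer of the forecaster, so that downstream layers decode the transplanted statistics into a shifted prediction.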