We investigate whether Huginn-3.5B, a depth-recurrent Transformer, exhibits an interpretable latent Chain-of-Thought (CoT) structure during inference. We probe the model's internal computations on arithmetic tasks using several techniques, including the Logit Lens and Coda Lens. By tracing the rank trajectories of final and intermediate result tokens, we find limited evidence of an interpretable latent CoT. Furthermore, we show that probing results are markedly inconsistent across recurrent blocks, and that the interpretability of hidden states depends strongly on both the layer index and the decoding method. Finally, we demonstrate empirically that increasing recurrence depth yields only marginal gains, falling well short of models that explicitly externalize their reasoning steps.
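
For reference, the sketch below illustrates the kind of Logit-Lens-style rank tracking described above on a generic decoder-only language model. It is a minimal sketch, not the paper's actual probing code: GPT-2 is used as a stand-in checkpoint, and the prompt, target token, and attribute paths (e.g. `model.transformer.ln_f`, `model.lm_head`) are assumptions specific to that stand-in rather than the Huginn-specific setup.

```python
# Minimal Logit-Lens sketch on a standard decoder-only LM (GPT-2 as a stand-in);
# the Huginn recurrence-block hooks and the Coda Lens are not shown here.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "12 + 7 * 3 ="   # illustrative arithmetic prompt (assumption)
target = " 33"            # token whose rank we trace (hypothetical answer token)
target_id = tok(target, add_special_tokens=False).input_ids[0]

inputs = tok(prompt, return_tensors="pt")
with torch.no_grad():
    out = model(**inputs, output_hidden_states=True)

# Project each intermediate hidden state through the final norm and the
# unembedding matrix ("Logit Lens"), then record the rank of the target
# token at the last sequence position for every layer.
ranks = []
for h in out.hidden_states:                       # one tensor per layer (plus embeddings)
    h_last = model.transformer.ln_f(h[:, -1, :])  # final layer norm (GPT-2 attribute path)
    logits = model.lm_head(h_last)                # unembedding projection to vocabulary
    rank = (logits > logits[0, target_id]).sum().item() + 1  # 1 = top-ranked token
    ranks.append(rank)

print(ranks)  # rank trajectory of the target token across layers
```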