LrcSSM is a nonlinear recurrent model that processes long sequences as fast as conventional linear state-space layers. By restricting the state-transition matrix to be diagonal and learned at every step, the full sequence can be solved in parallel with a single prefix scan, giving $\mathcal{O}(TD)$ time and memory and only $\mathcal{O}(\log T)$ sequential depth, for input sequence length $T$ and state dimension $D$. Unlike other input-varying systems such as Liquid-S4 or Mamba, LrcSSM also provides a formal gradient-stability guarantee. With forward and backward passes costing $\Theta(TDL)$ FLOPs for network depth $L$, together with its low sequential depth and a parameter count of $\Theta(DL)$, the model follows the compute-optimal scaling-law regime recently observed for Mamba ($\beta \approx 0.42$): it outperforms the quadratic-attention Transformer at equal compute while avoiding the memory overhead of FFT-based long convolutions. On a series of long-term prediction tasks, LrcSSM outperforms LRU, S5, and Mamba.
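
The claimed $\mathcal{O}(TD)$ work and $\mathcal{O}(\log T)$ sequential depth come from solving a diagonal, input-dependent linear recurrence $h_t = a_t \odot h_{t-1} + b_t$ with a parallel prefix scan. The snippet below is a minimal JAX sketch of that idea only, not the authors' implementation: the function name `diagonal_recurrence_scan` and the way `a` and `b` are generated are illustrative assumptions; only the scan structure reflects the abstract.

```python
# Minimal sketch (assumed, not the authors' code): a diagonal, per-step
# recurrence h_t = a_t * h_{t-1} + b_t solved for all t at once with a
# single associative (prefix) scan -> O(T D) work, O(log T) sequential depth.
import jax
import jax.numpy as jnp


def diagonal_recurrence_scan(a, b):
    """Solve h_t = a_t * h_{t-1} + b_t for t = 1..T in parallel.

    a, b: arrays of shape (T, D), the per-step diagonal transition and input
    terms (assumed here to be produced elsewhere from the input sequence).
    Returns h of shape (T, D), taking h_0 = 0.
    """
    def combine(left, right):
        # Composing two affine maps h -> a*h + b stays affine:
        # (a2, b2) o (a1, b1) = (a2*a1, a2*b1 + b2).
        # This operator is associative, which is what lets
        # lax.associative_scan parallelize the recurrence.
        a1, b1 = left
        a2, b2 = right
        return a2 * a1, a2 * b1 + b2

    # With h_0 = 0, the accumulated "b" component at step t equals h_t.
    _, h = jax.lax.associative_scan(combine, (a, b), axis=0)
    return h


if __name__ == "__main__":
    T, D = 1024, 16
    key_a, key_b = jax.random.split(jax.random.PRNGKey(0))
    # Keep |a_t| < 1 so the recurrence (and its gradients) stay stable.
    a = jax.nn.sigmoid(jax.random.normal(key_a, (T, D)))
    b = jax.random.normal(key_b, (T, D))

    h_parallel = diagonal_recurrence_scan(a, b)

    # Reference: the same recurrence computed sequentially with lax.scan.
    def step(h, ab):
        a_t, b_t = ab
        h_new = a_t * h + b_t
        return h_new, h_new

    _, h_sequential = jax.lax.scan(step, jnp.zeros(D), (a, b))
    assert jnp.allclose(h_parallel, h_sequential, atol=1e-5)
```

Because each step's transition is diagonal, the combine operator is elementwise, so the scan costs $\mathcal{O}(TD)$ work while the tree-structured reduction keeps the sequential depth at $\mathcal{O}(\log T)$.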