This paper proposes Adaptive Filter Attention (AFA), a novel attention mechanism that integrates a learnable dynamics model directly into the computation of attention weights. Rather than comparing queries and keys directly, the input sequence is modeled as discrete observations of a linear stochastic differential equation (SDE). By imposing a linear dynamics model with simultaneously diagonalizable state matrix and noise covariances, pairwise uncertainties can be propagated efficiently through the dynamics using a closed-form solution of the differential Lyapunov equation. Attention then emerges naturally as the maximum likelihood solution for this linear SDE, with the attention weights corresponding to a robust reweighting of residuals by the propagated pairwise precisions. Imposing additional constraints on the eigenvalues of the state matrix yields a simplified variant with the same computational and memory complexity as standard attention. In the limit of vanishing dynamics and process noise, and applying a small-angle approximation, standard dot-product attention is recovered.
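To make the construction concrete, the following is a minimal sketch in our own notation, not the paper's: the symbols $A$, $Q$, $\lambda_i$, $\mathbf{r}_{ij}$, real eigenvalues, and the Gaussian form of the weights are illustrative assumptions. A simultaneously diagonalizable state matrix and noise covariance reduce the differential Lyapunov equation to decoupled scalar ODEs with an elementwise closed-form solution, and the resulting pairwise covariances reweight the propagated residuals.
\begin{align}
  % Latent linear SDE with diagonalizable state matrix (assumed notation):
  d\mathbf{x}(t) &= A\,\mathbf{x}(t)\,dt + d\mathbf{w}(t),
      \qquad A = V \Lambda V^{-1},\quad \mathrm{Cov}[d\mathbf{w}] = Q\,dt, \\
  % Closed-form solution of the differential Lyapunov equation
  % \dot{P} = A P + P A^\top + Q in the shared eigenbasis, starting from P(0)=0:
  \tilde{P}_{k\ell}(\Delta t) &= \frac{\tilde{Q}_{k\ell}}{\lambda_k + \lambda_\ell}
      \left( e^{(\lambda_k + \lambda_\ell)\,\Delta t} - 1 \right), \\
  % Precision-weighted residual between token i and token j propagated over
  % the time gap \Delta t_{ij}, giving an (unnormalized) attention weight:
  \alpha_{ij} &\propto \exp\!\left( -\tfrac{1}{2}\,
      \mathbf{r}_{ij}^{\top} P_{ij}^{-1}\,\mathbf{r}_{ij} \right),
  \qquad \mathbf{r}_{ij} = \mathbf{x}_i - e^{A\,\Delta t_{ij}} \mathbf{x}_j.
\end{align}
Under these assumptions, the weights act as an adaptive filter: observations whose propagated residuals are large relative to their accumulated uncertainty are down-weighted, which is one way to read the robust reweighting described above; the paper's exact weighting may differ.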