This paper presents MambAttention, a single-channel speech enhancement architecture that combines Mamba with a shared time- and frequency-multi-head attention module. We train MambAttention on the VB-DemandEx dataset and demonstrate that it outperforms existing LSTM-, xLSTM-, Mamba-, and Conformer-based systems on two out-of-domain datasets: DNS-2020 and EARS-WHAM_v2.
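
To make the architectural idea concrete, below is a minimal PyTorch sketch of a MambAttention-style block: a single multi-head attention module whose weights are shared between the time and frequency axes, followed by Mamba layers along each axis. This is an illustrative sketch under stated assumptions, not the authors' implementation; the class name `SharedTFAttentionBlock`, the layer sizes, and the residual layout are hypothetical, and the Mamba layer is assumed to come from the `mamba_ssm` package (which requires a CUDA device).

```python
# A minimal sketch of a MambAttention-style block, assuming the `mamba_ssm`
# package's Mamba layer. Shapes and hyperparameters are illustrative only.
import torch
import torch.nn as nn
from mamba_ssm import Mamba  # pip install mamba-ssm (CUDA required)

class SharedTFAttentionBlock(nn.Module):
    """One attention module, reused (weight-shared) across the time axis
    and the frequency axis, followed by per-axis Mamba layers."""
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        # The "shared" part: the same attention weights serve both axes.
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm_t = nn.LayerNorm(d_model)
        self.norm_f = nn.LayerNorm(d_model)
        self.mamba_t = Mamba(d_model=d_model)  # scan along time
        self.mamba_f = Mamba(d_model=d_model)  # scan along frequency

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, freq, channels)
        b, t, f, c = x.shape

        # Time attention: fold frequency bins into the batch dimension.
        xt = x.permute(0, 2, 1, 3).reshape(b * f, t, c)
        nt = self.norm_t(xt)
        xt = xt + self.attn(nt, nt, nt)[0]
        xt = xt + self.mamba_t(xt)
        x = xt.reshape(b, f, t, c).permute(0, 2, 1, 3)

        # Frequency attention: same attention module, frequency axis.
        xf = x.reshape(b * t, f, c)
        nf = self.norm_f(xf)
        xf = xf + self.attn(nf, nf, nf)[0]
        xf = xf + self.mamba_f(xf)
        return xf.reshape(b, t, f, c)

# Usage on a (batch, time, freq, channel) spectrogram-like tensor.
block = SharedTFAttentionBlock(d_model=64).cuda()
out = block(torch.randn(2, 100, 257, 64, device="cuda"))  # -> (2, 100, 257, 64)
```

The intuition behind sharing one attention module across both axes is that it forces a common representation for temporal and spectral context, which plausibly aids generalization to unseen noise conditions such as DNS-2020 and EARS-WHAM_v2.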