This paper presents a novel approach that overcomes limitations of existing Transformer architectures for graph- and mesh-based morphological analysis tasks. Existing approaches rely heavily on spectral features: to encode the mesh structure, they derive positional embeddings from expensive eigenvalue decompositions of the Laplacian matrix or from heat kernel signatures, and associate these embeddings with the input features of standard attention layers. Inspired by discrete exterior calculus, which explicitly constructs the Hodge Laplacian as a composition of discrete Hodge star operators and exterior derivatives, $L := \star_0^{-1} d_0^T \star_1 d_0$, we adapt the Transformer architecture into a new deep learning layer that approximates the Hodge matrices $\star_0$, $\star_1$, and $\star_2$ with a multi-head attention mechanism and learns a family of discrete operators $L$ acting on mesh vertices, edges, and faces. The result is a computationally efficient architecture that achieves comparable performance on mesh segmentation and classification tasks through a direct learning framework, without expensive eigenvalue decompositions or complex preprocessing.
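For reference, the composition $L := \star_0^{-1} d_0^T \star_1 d_0$ can be sketched in a few lines. The snippet below is a minimal illustration of the discrete exterior calculus construction, not the proposed layer itself: it assumes $d_0$ is the signed vertex-to-edge incidence matrix and that the Hodge star matrices are diagonal; in the proposed architecture, the diagonal entries would instead be predicted by multi-head attention rather than computed from mesh geometry.

```python
import torch

def hodge_laplacian_0(d0, star0_diag, star1_diag):
    """Assemble the 0-form Hodge Laplacian L = star0^{-1} d0^T star1 d0.

    d0         : (E, V) signed vertex-to-edge incidence matrix
    star0_diag : (V,) positive diagonal of the vertex Hodge star
    star1_diag : (E,) positive diagonal of the edge Hodge star
    """
    star0_inv = torch.diag(1.0 / star0_diag)
    star1 = torch.diag(star1_diag)
    return star0_inv @ d0.T @ star1 @ d0

# Toy example: a single triangle with vertices {0, 1, 2} and edges (0,1), (1,2), (2,0).
d0 = torch.tensor([[-1.0,  1.0,  0.0],
                   [ 0.0, -1.0,  1.0],
                   [ 1.0,  0.0, -1.0]])
star0 = torch.ones(3)  # placeholder dual vertex areas
star1 = torch.ones(3)  # placeholder dual/primal edge length ratios
L = hodge_laplacian_0(d0, star0, star1)  # with unit stars this reduces to the graph Laplacian
```

With unit Hodge stars the composition reduces to the combinatorial graph Laplacian; learning the diagonal entries (or attention-based generalizations of them) is what allows the operator family to adapt to the task.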