This paper proposes a novel approach to improving the efficiency of Transformer architectures applied to graphs and meshes for morphological analysis tasks. Existing methods rely on attention layers driven by spectral features: to encode mesh structure, they derive positional embeddings from eigenvalue decompositions of the Laplacian matrix or from heat kernel signatures and concatenate them to the input features, which requires expensive eigendecomposition and preprocessing. We present a new approach inspired by the explicit construction of the Hodge Laplacian operator in discrete exterior calculus, where the operator is expressed as a product of discrete Hodge stars and exterior derivatives ($L := \star_0^{-1} d_0^T \star_1 d_0$). We adapt the Transformer architecture into a new deep learning layer that approximates the Hodge matrices $\star_0$, $\star_1$, and $\star_2$ with a multi-headed attention mechanism and thereby learns a family of discrete operators $L$ acting on mesh vertices, edges, and faces. The resulting architecture is computationally efficient and achieves comparable performance on mesh segmentation and classification tasks through a direct learning framework, without expensive eigenvalue decompositions or complex preprocessing.
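To make the formula concrete, the sketch below assembles the classical DEC Hodge Laplacian $L_0 = \star_0^{-1} d_0^T \star_1 d_0$ on a single triangle. This is an illustrative reading of the abstract's formula, not the paper's implementation: the diagonal Hodge star values used here are placeholders (in practice they come from primal/dual mesh measures, and the paper's layer instead learns them via attention).

```python
import numpy as np
import scipy.sparse as sp

# Illustrative sketch (not the paper's code): build the DEC Hodge Laplacian
# L_0 = star_0^{-1} @ d_0^T @ star_1 @ d_0 on the vertices of one triangle.

# d_0: signed vertex-edge incidence matrix (the exterior derivative on 0-forms).
# Rows index edges, columns index vertices; edge e = (i, j) gets -1 at i, +1 at j.
edges = [(0, 1), (1, 2), (0, 2)]
n_vertices, n_edges = 3, len(edges)
d0 = sp.lil_matrix((n_edges, n_vertices))
for e, (i, j) in enumerate(edges):
    d0[e, i], d0[e, j] = -1.0, 1.0
d0 = d0.tocsr()

# star_0 (per vertex) and star_1 (per edge): diagonal discrete Hodge stars.
# Placeholder values; a geometric construction would use dual cell areas and
# cotangent edge weights, while the proposed layer approximates them instead.
star0 = sp.diags([1.0, 1.0, 1.0])
star1 = sp.diags([0.5, 0.5, 0.5])

# Hodge Laplacian acting on vertex (0-form) signals, matching the abstract's formula.
star0_inv = sp.diags(1.0 / star0.diagonal())
L0 = star0_inv @ d0.T @ star1 @ d0

print(L0.toarray())
```

Analogous operators on edges and faces would combine $d_1$ with $\star_1$ and $\star_2$, which is where the learned approximations of all three Hodge matrices come into play.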