This paper proposes EcoTransformer, a novel Transformer architecture, to address the high computational complexity and energy consumption of the scaled dot-product attention mechanism in existing Transformers. EcoTransformer constructs output context vectors by convolving the values with a Laplacian kernel, where the distances between queries and keys are measured using the L1 metric. Unlike dot-product-based attention, this score computation is free of matrix multiplication, significantly reducing computational cost. EcoTransformer performs on par with, or better than, scaled dot-product attention on NLP, bioinformatics, and vision tasks, while consuming significantly less energy.
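The described mechanism can be illustrated with a minimal NumPy sketch. This is an assumption-laden reconstruction from the abstract alone: the function name `laplacian_attention`, the bandwidth parameter `alpha`, and the row-wise normalization of the kernel scores are illustrative choices, not details confirmed by the paper.

```python
import numpy as np

def laplacian_attention(Q, K, V, alpha=1.0):
    """Attention via a Laplacian kernel over L1 query-key distances.

    Illustrative sketch of the idea in the abstract; the paper's exact
    kernel bandwidth and normalization may differ.
    """
    # Pairwise L1 distances: D[i, j] = sum_d |Q[i, d] - K[j, d]|.
    # Computed with elementwise ops only -- no Q @ K.T matmul for scores.
    D = np.abs(Q[:, None, :] - K[None, :, :]).sum(axis=-1)
    # Laplacian kernel scores, normalized row-wise (assumed here,
    # analogous to softmax normalization in standard attention).
    W = np.exp(-D / alpha)
    W /= W.sum(axis=-1, keepdims=True)
    # Output context vectors as kernel-weighted sums of the values.
    return W @ V
```

Note that only the attention *scores* avoid dot products here; aggregating the values is still a weighted sum, consistent with the abstract's claim that the score computation via the Laplacian kernel replaces the query-key matrix multiplication.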