The high computational cost of encoders hinders the efficient deployment of state-of-the-art automatic speech recognition (ASR) models such as OpenAI's Whisper. This paper proposes LiteASR, a low-rank compression technique for ASR encoders that exploits the strong low-rank structure observed in intermediate activations. Using a small calibration dataset, we approximate the encoder's linear transformations with chains of low-rank matrix multiplications obtained via principal component analysis (PCA), and further optimize the self-attention mechanism to operate in the reduced dimensionality. Experimental results demonstrate that LiteASR compresses the encoder of Whisper large-v3 by more than 50%, yielding a model comparable in size to Whisper medium but with higher accuracy, thereby establishing a new Pareto frontier between accuracy and efficiency. The source code is available on GitHub.
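To make the core idea concrete, the following is a minimal sketch of PCA-based low-rank approximation of a single linear layer using calibration activations. It assumes NumPy; the helper name `pca_low_rank` and its signature are hypothetical illustrations of the general technique, not the actual LiteASR implementation.

```python
import numpy as np

def pca_low_rank(weight, calib_x, rank):
    """Replace y = x @ weight.T with the low-rank chain (x @ a) @ b + bias.

    The factors a and b are fit by PCA on the layer's outputs over a
    calibration set. Hypothetical helper, not the LiteASR API.
    weight: (d_out, d_in), calib_x: (n, d_in).
    """
    y = calib_x @ weight.T                 # calibration outputs, (n, d_out)
    mu = y.mean(axis=0)                    # center the outputs for PCA
    _, _, vt = np.linalg.svd(y - mu, full_matrices=False)
    v = vt[:rank].T                        # top-k principal directions, (d_out, k)
    a = weight.T @ v                       # first factor, (d_in, k)
    b = v.T                                # second factor, (k, d_out)
    bias = mu - (mu @ v) @ v.T             # correction for the PCA mean
    return a, b, bias

# Toy run on a 512-dim layer: rank 64 cuts per-token multiply-adds from
# 512*512 to 2*512*64. Random data has no low-rank structure, so the
# approximation error here is large; real ASR activations do, per the paper.
rng = np.random.default_rng(0)
w = rng.standard_normal((512, 512)).astype(np.float32)
calib = rng.standard_normal((1024, 512)).astype(np.float32)
a, b, bias = pca_low_rank(w, calib, rank=64)
x = rng.standard_normal((4, 512)).astype(np.float32)
y_ref = x @ w.T
y_lr = (x @ a) @ b + bias
print(np.abs(y_ref - y_lr).mean())
```

The chain trades one large matrix multiplication for two smaller ones, which reduces both parameter count and compute whenever the chosen rank is well below half the layer dimension.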