This paper proposes a novel static word embedding model optimized for representing sentence meaning. Word embeddings are extracted from a pre-trained Sentence Transformer, refined through sentence-level principal component analysis, and then further improved with knowledge distillation or contrastive learning. During inference, a sentence is represented simply by averaging its word embeddings, which requires minimal computational overhead. In evaluations on monolingual and cross-lingual tasks, the proposed model significantly outperforms existing static models on sentence semantic tasks and even surpasses a basic Sentence Transformer model (SimCSE). Furthermore, our analysis shows that the proposed methodology successfully removes irrelevant components from the word embeddings and adjusts each vector's norm according to the word's influence on sentence meaning.
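To make the inference step concrete, the following minimal sketch shows sentence representation by mean pooling of static word vectors. The `embedding_table`, vocabulary, and random stand-in vectors are hypothetical illustrations; the paper's actual trained embeddings and tokenizer are not reproduced here.

```python
import numpy as np

# Hypothetical lookup table: token -> static embedding vector.
# In the paper's setting these vectors would come from a Sentence
# Transformer, refined with PCA and distillation or contrastive
# training; random vectors serve as stand-ins in this sketch.
rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat"]
dim = 8
embedding_table = {w: rng.standard_normal(dim) for w in vocab}

def sentence_embedding(tokens, table, dim=dim):
    """Represent a sentence by averaging its static word embeddings.

    Out-of-vocabulary tokens are skipped; an empty sentence maps to
    the zero vector. No neural network is run at inference time.
    """
    vecs = [table[t] for t in tokens if t in table]
    if not vecs:
        return np.zeros(dim)
    return np.mean(vecs, axis=0)

emb = sentence_embedding(["the", "cat", "sat", "on", "the", "mat"], embedding_table)
print(emb.shape)  # (8,)
```

Because pooling is a single table lookup per token followed by an average, the cost of encoding a sentence is negligible compared to a Transformer forward pass, which is the source of the minimal overhead claimed above.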