The specialized terminology and nuanced concepts of the telecommunications industry continue to pose challenges for existing natural language processing (NLP) models. This paper presents the Telecom Vectorization Model (T-VEC), a domain-adaptive embedding model built on the gte-Qwen2-1.5B-instruct backbone to represent telecommunications-specific semantics effectively. T-VEC is fine-tuned with a triplet loss on T-Embed, a large-scale telecommunications dataset. On a custom benchmark of 1,500 query-fingerprint pairs drawn from IETF RFCs and vendor manuals, T-VEC outperforms MPNet, BGE, Jina, and E5, demonstrating stronger domain awareness and semantic precision in telecommunications-specific retrieval. By releasing T-VEC and its tokenizer, we enable semantically faithful NLP applications in the telecommunications domain.
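For reference, the triplet-loss fine-tuning mentioned above presumably follows the standard margin-based formulation sketched below; the particular distance function $d$ and margin $m$ are assumptions, as the abstract does not specify them:

\[
\mathcal{L}_{\text{triplet}} = \max\bigl(0,\; d(f(a), f(p)) - d(f(a), f(n)) + m\bigr),
\]

where $f$ denotes the embedding model, $a$ an anchor query, $p$ a semantically matching (positive) passage, $n$ a non-matching (negative) passage, and $m > 0$ the margin. Minimizing this objective pushes each positive at least $m$ closer to its anchor than the corresponding negative in embedding space.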