This paper advances Persian text embedding research by presenting Hakim, a novel Persian text embedding model that achieves an 8.5% performance improvement over existing approaches on the FaMTEB benchmark. Alongside the model, we introduce three new datasets (Corpesia, Pairsia-sup, and Pairsia-unsup) for supervised and unsupervised learning. Hakim is also designed for retrieval tasks that incorporate chatbot message history and for retrieval-augmented generation (RAG) systems. In addition, we propose a new baseline model based on the BERT architecture, which achieves higher accuracy on several Persian NLP tasks, and we find that a RetroMAE-based model is particularly effective for text retrieval applications.