Text embeddings are a core component of many NLP tasks, including classification, regression, clustering, and semantic retrieval. This paper systematically surveys methods for interpreting text embeddings and for describing the similarity between them, characterizing the key ideas, approaches, and tradeoffs in this research field. It also compares evaluation methods, distills overall lessons learned, and identifies opportunities and challenges for future research.