This paper presents a comprehensive technical survey of how Large Language Models (LLMs) can be leveraged to address key challenges in modern recommender systems. To overcome the limitations of existing approaches, we examine LLM-based architectures, including prompt-based candidate generation, language-based ranking, Retrieval-Augmented Generation (RAG), and conversational recommendation. These architectures enhance personalization, semantic alignment, and interpretability, and can operate effectively in cold-start and long-tail scenarios without extensive task-specific supervision.