This paper focuses on the role of generative AI (GenAI) in the autonomous optimization of next-generation wireless networks, and in particular on the generation of xApps and rApps using Large Language Models (LLMs) within the Open RAN (O-RAN) architecture. To address the high cost and resource consumption of conventional LLM fine-tuning, we propose a Retrieval-Augmented Generation (RAG) technique. Specifically, we compare vector-based RAG, GraphRAG, and Hybrid GraphRAG, evaluating their fidelity, answer relevance, contextual relevance, and factual accuracy on questions of varying complexity drawn from the O-RAN specifications. The experimental results show that GraphRAG and Hybrid GraphRAG outperform conventional vector-based RAG; in particular, Hybrid GraphRAG improves factual accuracy by 8% and GraphRAG improves contextual relevance by 7%.
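To make the comparison concrete, the sketch below evaluates three toy retrieval back-ends (vector, graph-based, and hybrid) on the same question set and scores the retrieved context with a simple overlap-based contextual-relevance metric. This is a minimal illustration only: the retrievers, the corpus, the questions, and the metric definition are all simplified assumptions and do not reflect the paper's actual pipeline or the O-RAN specification corpus.

```python
"""Illustrative sketch (not the paper's implementation): compare three
retrieval strategies on the same question set and score the retrieved
context with a toy contextual-relevance metric."""

from typing import Callable, Dict, List

# Toy corpus standing in for chunks of the O-RAN specifications (assumption).
CORPUS = [
    "The Near-RT RIC hosts xApps that control the RAN on a 10 ms to 1 s loop.",
    "The Non-RT RIC hosts rApps and provides policy guidance over the A1 interface.",
    "The E2 interface connects the Near-RT RIC to E2 nodes such as the O-DU and O-CU.",
]

# Questions of different complexity (illustrative, not from the paper's test set).
QUESTIONS = {
    "simple": "Which RIC hosts xApps?",
    "complex": "How do rApps and xApps interact across the A1 and E2 interfaces?",
}


def vector_rag(query: str, k: int = 2) -> List[str]:
    """Placeholder vector retriever: rank chunks by word overlap with the query."""
    scores = [(len(set(query.lower().split()) & set(c.lower().split())), c) for c in CORPUS]
    return [c for _, c in sorted(scores, reverse=True)[:k]]


def graph_rag(query: str, k: int = 2) -> List[str]:
    """Placeholder GraphRAG retriever: take the top vector hit as a seed and
    add chunks that share key entities with it, mimicking graph expansion."""
    seed = vector_rag(query, k=1)
    entities = {w for w in seed[0].split() if w.isupper() or "-" in w}
    neighbours = [c for c in CORPUS if c not in seed and entities & set(c.split())]
    return (seed + neighbours)[:k]


def hybrid_rag(query: str, k: int = 2) -> List[str]:
    """Placeholder hybrid retriever: deduplicated union of vector and graph results."""
    merged = list(dict.fromkeys(vector_rag(query, k) + graph_rag(query, k)))
    return merged[:k]


def contextual_relevance(query: str, contexts: List[str]) -> float:
    """Toy metric: fraction of query terms covered by the retrieved context."""
    terms = set(query.lower().replace("?", "").split())
    covered = {t for t in terms if any(t in c.lower() for c in contexts)}
    return len(covered) / len(terms)


def evaluate(retrievers: Dict[str, Callable[[str], List[str]]]) -> None:
    """Score every retriever on every question and print a small comparison table."""
    for name, retrieve in retrievers.items():
        for difficulty, question in QUESTIONS.items():
            score = contextual_relevance(question, retrieve(question))
            print(f"{name:>12} | {difficulty:>7} | contextual relevance = {score:.2f}")


if __name__ == "__main__":
    evaluate({"vector RAG": vector_rag, "GraphRAG": graph_rag, "hybrid RAG": hybrid_rag})
```

In the paper's actual evaluation, the retrievers operate over the O-RAN specification documents and the four metrics (fidelity, answer relevance, contextual relevance, factual accuracy) are computed on the generated answers rather than with the toy overlap score used here.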