This paper addresses the limitations of existing general-purpose Large Language Models (LLMs) when applied to Open Radio Access Networks (O-RAN) and introduces the ORANSight-2.0 initiative to develop O-RAN-specific foundation LLMs. ORANSight-2.0 improves performance on O-RAN-specific tasks by fine-tuning 18 models, ranging from 1B to 70B parameters, drawn from five open-source LLM families: Mistral, Qwen, Llama, Phi, and Gemma. In particular, a new Retrieval-Augmented Generation (RAG)-based instruction-tuning framework called RANSTRUCT is used to generate a high-quality instruction-tuning dataset, on which the models are then fine-tuned with QLoRA. For performance evaluation, a new srsRAN-based benchmark, srsRANBench, is proposed.
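To make the QLoRA step concrete, the sketch below shows a minimal 4-bit quantized LoRA fine-tuning setup using Hugging Face transformers, peft, and bitsandbytes. This is an illustration of the general QLoRA technique, not the paper's actual pipeline; the model name, target modules, and LoRA hyperparameters (r, alpha, dropout) are assumptions chosen for the example.

```python
# Minimal QLoRA sketch (hypothetical configuration, not the paper's code).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Illustrative base model from one of the five families used in the paper.
model_name = "mistralai/Mistral-7B-v0.1"

# 4-bit NF4 quantization is the "Q" in QLoRA: the frozen base weights are
# stored in 4 bits while the small LoRA adapters are trained in bfloat16.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config
)
model = prepare_model_for_kbit_training(model)

# Inject low-rank adapters into the attention projections; r and alpha
# here are example values, not taken from the paper.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```

The quantized model can then be trained on the RANSTRUCT-generated instruction pairs with any standard causal-LM training loop; only the adapter parameters receive gradients, which is what makes fine-tuning models up to 70B parameters tractable on modest hardware.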