This paper highlights that graph neural networks (GNNs), while effective at learning from graph-structured data, lack the ability to understand the semantic properties of rich textual node attributes. We further observe that existing GNN models fail to perform consistently well across diverse datasets. To address this, we propose LensGNN, a model that leverages a large language model (LLM) to ensemble multiple GNNs. LensGNN first maps the representations of multiple GNNs into a shared space, then aligns the GNN and LLM spaces through LoRA fine-tuning, and injects graph tokens together with textual information into the LLM. This ensemble of multiple GNNs draws on the strengths of the LLM to deepen the understanding of both textual semantics and graph structure. Experimental results demonstrate that LensGNN outperforms existing models. This research advances text-attributed graph ensemble learning by providing a robust and superior solution for integrating semantic and structural information. The code and data are available on GitHub.
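To make the ensemble idea concrete, the following is a minimal PyTorch sketch (not the authors' implementation) of how node embeddings from several GNNs could be projected into an LLM's hidden space and prepended to text embeddings as graph tokens. The class name `GraphTokenAligner`, the dimensions, and the dummy tensors are illustrative assumptions; in the paper's setting, the combined sequence would then be processed by an LLM fine-tuned with LoRA.

```python
# Sketch only: hypothetical alignment of multiple GNN embeddings into an LLM
# token space; names and dimensions are assumptions, not the paper's code.
import torch
import torch.nn as nn

class GraphTokenAligner(nn.Module):
    """Project embeddings from multiple GNNs into a shared LLM token space."""
    def __init__(self, gnn_dims, llm_hidden):
        super().__init__()
        # One linear projector per GNN, so encoders with heterogeneous
        # output sizes can be mapped into the same space.
        self.projectors = nn.ModuleList(
            [nn.Linear(d, llm_hidden) for d in gnn_dims]
        )

    def forward(self, gnn_embeddings):
        # gnn_embeddings: list of tensors, each of shape [num_nodes, gnn_dims[i]]
        # Returns graph tokens of shape [num_nodes, num_gnns, llm_hidden].
        aligned = [proj(e) for proj, e in zip(self.projectors, gnn_embeddings)]
        return torch.stack(aligned, dim=1)

if __name__ == "__main__":
    num_nodes, llm_hidden = 4, 768
    gnn_dims = [128, 256]                     # e.g., a GCN and a GAT encoder
    gnn_outputs = [torch.randn(num_nodes, d) for d in gnn_dims]

    aligner = GraphTokenAligner(gnn_dims, llm_hidden)
    graph_tokens = aligner(gnn_outputs)       # [4, 2, 768]

    # During LoRA fine-tuning, graph tokens would be concatenated with the
    # embedded node text before being fed to the LLM (dummy text shown here).
    text_embeds = torch.randn(num_nodes, 16, llm_hidden)
    llm_input = torch.cat([graph_tokens, text_embeds], dim=1)
    print(llm_input.shape)                    # torch.Size([4, 18, 768])
```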