This paper proposes LensGNN, a novel model that leverages large language models (LLMs) to address the limitations of graph neural networks (GNNs), which have emerged as powerful models for learning on graph-structured data. Existing GNNs are limited by their inability to understand the semantics of rich textual node attributes, and we observe that no single GNN performs consistently well across diverse datasets. LensGNN first aligns the representations of multiple GNNs by mapping them into a shared space, and then aligns this space with the LLM through LoRA fine-tuning. By injecting graph tokens and textual information into the LLM, LensGNN ensembles multiple GNNs and leverages the strengths of the LLM, enabling a deeper understanding of both text semantics and graph structure. Experimental results demonstrate that LensGNN outperforms existing models, providing a powerful solution for integrating semantic and structural information and advancing ensemble learning on text-attributed graphs.
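The two alignment steps described above can be sketched numerically: per-GNN linear maps bring differently-sized GNN embeddings into one shared space, and a second projector turns each aligned node vector into a "graph token" in the LLM's embedding space. This is a minimal illustration only; the dimensions, the use of plain random linear maps, and all variable names are assumptions, not the paper's actual architecture or learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not from the paper): three GNNs
# with different embedding sizes, a shared alignment space, and an LLM
# hidden size.
gnn_dims = [64, 128, 256]   # e.g. outputs of three different GNN backbones
shared_dim = 96
llm_dim = 512
num_nodes = 5

# One embedding matrix per GNN, all describing the same set of nodes.
gnn_embeddings = [rng.normal(size=(num_nodes, d)) for d in gnn_dims]

# Step 1: per-GNN linear maps align every embedding into one shared space.
align_weights = [rng.normal(size=(d, shared_dim)) / np.sqrt(d) for d in gnn_dims]
aligned = [e @ w for e, w in zip(gnn_embeddings, align_weights)]

# Step 2: a single projector maps the shared space into the LLM's
# token-embedding space, yielding one "graph token" per node per GNN.
proj = rng.normal(size=(shared_dim, llm_dim)) / np.sqrt(shared_dim)
graph_tokens = np.stack([a @ proj for a in aligned], axis=1)

print(graph_tokens.shape)  # (num_nodes, num_gnns, llm_dim)
```

In the full model these projections would be trained jointly with LoRA adapters on the LLM, and each node's graph tokens would be placed alongside its text tokens in the LLM input; here they are random to keep the sketch self-contained.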