In this paper, we propose a framework for the scalable and cost-effective deployment of Graph-based Retrieval-Augmented Generation (GraphRAG) in enterprise environments. Because adoption of existing GraphRAG approaches has been limited by their high computational cost and latency, we present two key innovations: (1) a dependency-based knowledge graph construction pipeline that extracts entities and relationships from unstructured text using industry-grade NLP libraries, without relying on large language models (LLMs), and (2) a lightweight graph search strategy that combines hybrid query node identification with efficient one-step traversal to extract subgraphs with high recall and low latency. Experimental results on the SAP dataset show improvements of up to 15% (LLM-as-Judge) and 4.35% (RAGAS) over existing RAG baselines, and our approach achieves 94% of the performance of LLM-based knowledge graphs (61.87% vs. 65.83%) while significantly reducing cost and improving scalability. These results demonstrate the feasibility of a practical, explainable, and domain-adaptive Retrieval-Augmented Reasoning system.
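
To make innovation (1) concrete, the sketch below illustrates what a dependency-based entity/relation extraction step might look like. The abstract does not name the NLP toolkit or the extraction rules, so this assumes spaCy with the `en_core_web_sm` model as the industry-grade NLP library and a simple subject-verb-object rule as a stand-in for the actual pipeline.

```python
# Minimal sketch of dependency-based triple extraction (assumed: spaCy, SVO rule).
# The paper's pipeline may use different libraries and richer dependency patterns.
import spacy

nlp = spacy.load("en_core_web_sm")  # assumed model; any spaCy pipeline with a parser works

def extract_triples(text: str) -> list[tuple[str, str, str]]:
    """Extract (subject, relation, object) triples from dependency parses."""
    triples = []
    doc = nlp(text)
    for sent in doc.sents:
        for token in sent:
            if token.pos_ != "VERB":
                continue
            # Collect nominal subjects and objects attached to the verb.
            subjects = [c for c in token.children if c.dep_ in ("nsubj", "nsubjpass")]
            objects = [c for c in token.children if c.dep_ in ("dobj", "attr")]
            for subj in subjects:
                for obj in objects:
                    triples.append((subj.text, token.lemma_, obj.text))
    return triples

print(extract_triples("SAP acquired Signavio to expand its process-mining portfolio."))
# expected (model-dependent): [('SAP', 'acquire', 'Signavio')]
```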
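
Innovation (2) can be illustrated in a similar hedged sketch: identify query nodes in the knowledge graph and return their one-step (one-hop) neighborhood as the retrieved subgraph. The graph library (networkx), the node names, and the token-overlap scorer standing in for a hybrid lexical/embedding matcher are all assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of hybrid query-node identification and one-step traversal
# (assumed: networkx graph; the scorer is a placeholder for a similarity model).
import networkx as nx

def identify_query_nodes(graph: nx.DiGraph, query_terms: list[str], top_k: int = 5) -> list[str]:
    """Hybrid matching: exact hits first, then fuzzy token-overlap scores."""
    lowered = {t.lower() for t in query_terms}
    exact = [n for n in graph.nodes if n.lower() in lowered]
    scored = sorted(
        (n for n in graph.nodes if n not in exact),
        key=lambda n: -max(len(set(n.lower().split()) & set(t.lower().split()))
                           for t in query_terms),
    )
    return (exact + scored)[:top_k]

def one_step_subgraph(graph: nx.DiGraph, seed_nodes: list[str]) -> nx.DiGraph:
    """Return the subgraph induced by the seeds and their one-hop neighbors."""
    nodes = set(seed_nodes)
    for n in seed_nodes:
        nodes.update(graph.successors(n))
        nodes.update(graph.predecessors(n))
    return graph.subgraph(nodes).copy()

# Toy usage with a hypothetical two-edge graph.
g = nx.DiGraph()
g.add_edge("SAP", "Signavio", relation="acquire")
g.add_edge("Signavio", "process mining", relation="provide")
seeds = identify_query_nodes(g, ["SAP"], top_k=1)
print(one_step_subgraph(g, seeds).edges(data=True))
# -> [('SAP', 'Signavio', {'relation': 'acquire'})]
```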