This paper proposes GraphTrafficGPT, a graph-based architecture that improves the efficiency of intelligent traffic management systems built on large language models (LLMs). Existing chain-based systems (e.g., TrafficGPT) are difficult to apply in complex real-world environments due to sequential task execution, high token usage, and limited scalability. GraphTrafficGPT represents tasks and their dependencies as the nodes and edges of a graph, enabling parallel execution and dynamic resource allocation. At its core, a Brain Agent decomposes user queries, constructs an optimized dependency graph, and coordinates a network of expert agents for data retrieval, analysis, visualization, and simulation. Context-aware token management and support for concurrent multi-query processing allow interdependent tasks to be handled efficiently. Experimental results show that, compared with TrafficGPT, GraphTrafficGPT reduces token consumption by 50.2% and average response latency by 19.0%, while improving concurrent multi-query execution efficiency by up to 23.0%.
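To make the graph-based execution model concrete, the following is a minimal sketch, assuming a simple asyncio-based scheduler in which a Brain-Agent-style planner has already decomposed a query into a task dependency graph; the `Task`, `run_task`, and `run_graph` names are illustrative and not the paper's actual API. Tasks whose dependencies are satisfied are dispatched concurrently, which is the mechanism by which a graph structure avoids the purely sequential execution of chain-based systems.

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    agent: str                                  # e.g. "data_retrieval", "visualization"
    deps: list = field(default_factory=list)    # names of prerequisite tasks

async def run_task(task: Task) -> str:
    # Placeholder for dispatching to an expert agent (LLM call, simulator, etc.).
    await asyncio.sleep(0.1)
    return f"{task.name} handled by {task.agent}"

async def run_graph(tasks: dict[str, Task]) -> dict[str, str]:
    """Execute tasks in dependency order, running independent tasks concurrently."""
    results: dict[str, str] = {}
    pending = dict(tasks)
    while pending:
        # A task is ready once all of its dependencies have produced results.
        ready = [t for t in pending.values() if all(d in results for d in t.deps)]
        if not ready:
            raise ValueError("Cyclic or unsatisfiable dependencies")
        outputs = await asyncio.gather(*(run_task(t) for t in ready))
        for t, out in zip(ready, outputs):
            results[t.name] = out
            del pending[t.name]
    return results

if __name__ == "__main__":
    # Hypothetical query decomposition: retrieval feeds both analysis and
    # visualization, which can then run in parallel; simulation follows analysis.
    graph = {
        "retrieve": Task("retrieve", "data_retrieval"),
        "analyze": Task("analyze", "analysis", deps=["retrieve"]),
        "visualize": Task("visualize", "visualization", deps=["retrieve"]),
        "simulate": Task("simulate", "simulation", deps=["analyze"]),
    }
    print(asyncio.run(run_graph(graph)))
```

In this sketch, `analyze` and `visualize` run in the same wave once `retrieve` completes, illustrating how independent subtasks can proceed in parallel rather than one after another.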