AGENTIQL is an agent-based multi-expert framework designed to overcome the limitations of existing text-to-SQL architectures, which struggle with complex reasoning and schema diversity. It combines an inference agent for question decomposition, a coding agent for subquery generation, and a refinement stage for column selection. An adaptive router balances efficiency and accuracy by choosing between the modular pipeline and a base parser, and multiple pipeline stages can execute in parallel, allowing the approach to scale to larger workloads. On the Spider benchmark, AGENTIQL improves both execution accuracy and interpretability, reaching up to 86.07% EX with 14B-parameter models under a Planner & Executor merge strategy. Overall performance depends on the effectiveness of the routing mechanism, yet even with smaller open-source LLMs the framework narrows the gap to the GPT-4-based state of the art (89.65% EX). By exposing its intermediate inference steps, AGENTIQL provides a robust, scalable, and interpretable approach to semantic parsing.
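The adaptive routing idea can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names, the complexity heuristic, and the threshold are all assumptions made for the sake of the example.

```python
# Illustrative sketch of an adaptive router that chooses between a
# modular multi-agent pipeline and a base parser. The routing heuristic
# and all names here are hypothetical, not from the AGENTIQL paper.

from dataclasses import dataclass
from typing import Callable


@dataclass
class RoutingDecision:
    use_pipeline: bool  # True -> modular pipeline, False -> base parser
    reason: str


def route(question: str, complexity_threshold: int = 8) -> RoutingDecision:
    """Toy heuristic: long or multi-clause questions are sent to the
    modular pipeline; simple ones go straight to the base parser."""
    tokens = question.split()
    complex_markers = {"and", "or", "each", "more", "than", "not"}
    score = len(tokens) + 2 * sum(t.lower() in complex_markers for t in tokens)
    if score >= complexity_threshold:
        return RoutingDecision(True, f"score {score} >= {complexity_threshold}")
    return RoutingDecision(False, f"score {score} < {complexity_threshold}")


def answer(question: str,
           base_parser: Callable[[str], str],
           pipeline: Callable[[str], str]) -> str:
    """Dispatch the question according to the routing decision."""
    decision = route(question)
    return pipeline(question) if decision.use_pipeline else base_parser(question)


if __name__ == "__main__":
    base = lambda q: "SELECT ...  -- base parser"
    modular = lambda q: "SELECT ...  -- decomposed by inference + coding agents"
    print(answer("List all singers", base, modular))
    print(answer("For each country, list singers older than the average "
                 "age and not in a band", base, modular))
```

In a real system the heuristic would likely be replaced by a learned or LLM-based decision, but the control flow (cheap path for easy questions, expensive modular path for hard ones) is the point of the router.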