This paper presents TAGAL, a novel method for generating synthetic tabular data using Large Language Models (LLMs). TAGAL employs an agent-based workflow that automates an iterative feedback process, refining the generated data without any additional LLM training. The use of LLMs also allows external knowledge to be incorporated into the generation process. We evaluate TAGAL across a variety of datasets and along several aspects of data quality: we assess the utility of downstream ML models, training classifiers either on synthetic data alone or on a combination of real and synthetic data, and we measure the similarity between the real and the generated data. Our results show that TAGAL performs on par with state-of-the-art techniques that require LLM training and outperforms other training-free techniques. These findings highlight the potential of agent-based workflows and open new directions for LLM-based data generation.