This paper studies the application of large language models (LLMs) to electronic design automation (EDA), specifically register-transfer level (RTL) code generation. Whereas existing RTL datasets check only syntactic validity and lack functional verification, we present VeriCoder, an RTL code generation model fine-tuned on a dataset verified for functional correctness. Using a novel methodology that combines unit test generation with feedback-driven refinement, we build a dataset of 125,777 functionally verified examples, each consisting of a natural language specification, an RTL implementation, and passing tests. A teacher model based on GPT-4o-mini generates the unit tests, and the RTL design is iteratively refined based on simulation results. VeriCoder achieves state-of-the-art functional correctness, with improvements of up to 71.7% on VerilogEval and 27.4% on RTLLM over existing models. We further present experiments showing that models trained on the functionally verified dataset outperform those trained on an unverified one. The code, data, and models are publicly available.
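The dataset-construction loop described above (teacher-generated unit tests, simulation, and feedback-driven refinement) can be sketched as follows. This is a minimal illustration, not the paper's implementation: every function here (`generate_unit_tests`, `generate_rtl`, `simulate`) is a hypothetical placeholder standing in for a teacher-model call or a Verilog simulator run.

```python
def generate_unit_tests(spec):
    # Placeholder for a teacher-model call (e.g. GPT-4o-mini) that
    # emits a testbench from the natural language specification.
    return f"testbench for: {spec}"

def generate_rtl(spec, feedback=None):
    # Placeholder for an LLM call; given simulator feedback it
    # returns a revised design (here, trivially, a second version).
    return "rtl_v2" if feedback else "rtl_v1"

def simulate(design, tests):
    # Placeholder for a Verilog simulation; in this toy sketch only
    # the refined design passes the tests.
    passed = design == "rtl_v2"
    return passed, "" if passed else "assertion failed at t=10"

def refine_rtl(spec, max_iters=3):
    """Generate tests, then iteratively refine the RTL design until
    simulation passes; failing examples are discarded."""
    tests = generate_unit_tests(spec)
    design = generate_rtl(spec)
    for _ in range(max_iters):
        ok, log = simulate(design, tests)
        if ok:
            return design  # a functionally verified example
        design = generate_rtl(spec, feedback=log)
    return None  # never passed within the iteration budget

print(refine_rtl("4-bit adder"))
```

Only designs that eventually pass their generated tests would enter the fine-tuning dataset, which is what distinguishes this pipeline from syntax-only filtering.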