This paper addresses the challenges of developing a cyberbullying (CB) detection system for online users, including children. Specifically, we propose a method that uses a large language model (LLM) to generate synthetic data and labels, addressing the lack of labeled data that reflects children's language and communication styles. Experimental results show that a BERT-based CB classifier trained on LLM-generated synthetic data achieves performance comparable to a classifier trained on real data (75.8% vs. 81.5% accuracy). The LLM is also effective for labeling real-world data: a BERT classifier trained on LLM-labeled data achieves 79.1% accuracy, against 81.5% for the classifier trained on real labeled data. These results suggest that LLMs can offer a scalable, ethical, and cost-effective way to build cyberbullying detection datasets.
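To make the LLM-labeling step concrete, the following is a minimal, hypothetical sketch: each message is wrapped in a classification prompt, the LLM's reply is parsed into a binary cyberbullying label, and the resulting (text, label) pairs could then be used to fine-tune a BERT classifier. The prompt wording, label vocabulary, and the `query_llm` callable are illustrative assumptions, not the authors' exact setup.

```python
from typing import Callable, List, Tuple

# Hypothetical annotation prompt; the actual wording used in the paper
# is not specified in the abstract.
PROMPT_TEMPLATE = (
    "You are annotating chat messages from children for cyberbullying.\n"
    "Answer with exactly one word: BULLYING or SAFE.\n\n"
    "Message: {message}\nAnswer:"
)

def parse_label(reply: str) -> int:
    """Map the LLM's one-word reply to a binary label (1 = cyberbullying)."""
    return 1 if "BULLYING" in reply.strip().upper() else 0

def label_messages(
    messages: List[str],
    query_llm: Callable[[str], str],  # assumed wrapper around an LLM API
) -> List[Tuple[str, int]]:
    """Produce (text, label) pairs suitable for training a classifier."""
    dataset = []
    for msg in messages:
        reply = query_llm(PROMPT_TEMPLATE.format(message=msg))
        dataset.append((msg, parse_label(reply)))
    return dataset

# Stubbed LLM for demonstration only: flags messages containing an insult.
def fake_llm(prompt: str) -> str:
    return "BULLYING" if "loser" in prompt.lower() else "SAFE"

data = label_messages(["you are such a loser", "see you at practice"], fake_llm)
```

In a real pipeline the stub would be replaced by a call to an actual LLM, and the same prompt-and-parse pattern extends to generating fully synthetic labeled examples rather than labeling existing ones.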