In this paper, we present Generative-AI-Enabled HMS (G-AI-HMS), aimed at improving the quality of human motion simulation (HMS) for cost-effective assessment of worker behavior, safety, and productivity in industrial work environments. G-AI-HMS improves the simulation quality of physical tasks by integrating text-to-text and text-to-motion models. The main challenges are (1) translating task descriptions into a motion description language using a large language model aligned with the vocabulary of MotionGPT, and (2) validating AI-enhanced motions against real human movements using computer vision. We apply a pose estimation algorithm to real-time videos to extract joint landmarks and compare them with AI-enhanced sequences using a motion similarity metric. In a case study of eight tasks, motions generated from AI-enhanced prompts outperform those generated from human-written descriptions in most scenarios: in six tasks on spatial accuracy, in four tasks on alignment after pose normalization, and in seven tasks on overall temporal similarity. Statistical analysis shows that AI-enhanced prompts significantly (p < 0.0001) reduce joint errors and temporal alignment errors while maintaining comparable pose accuracy.
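
The abstract does not specify the exact similarity metric used to compare extracted joint landmarks with AI-enhanced sequences; a minimal sketch of one common choice, mean per-joint position error (MPJPE) with a hypothetical root-centering normalization, might look like the following. The function names, 2D coordinates, and root-joint index are illustrative assumptions, not the paper's implementation.

```python
import math

def center_on_root(seq, root=0):
    """Normalize each frame by subtracting the root joint's position
    (illustrative pose normalization; the paper's scheme may differ)."""
    out = []
    for frame in seq:
        rx, ry = frame[root]
        out.append([(x - rx, y - ry) for (x, y) in frame])
    return out

def mpjpe(seq_a, seq_b):
    """Mean per-joint position error between two equal-length pose sequences.
    Each sequence is a list of frames; each frame is a list of (x, y) joints."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must have the same number of frames")
    total, count = 0.0, 0
    for frame_a, frame_b in zip(seq_a, seq_b):
        for (xa, ya), (xb, yb) in zip(frame_a, frame_b):
            total += math.hypot(xa - xb, ya - yb)  # Euclidean joint distance
            count += 1
    return total / count

# Example: one frame, two joints; the second joint is off by 1 unit,
# so the error averaged over both joints is 0.5.
real = [[(0.0, 0.0), (1.0, 0.0)]]
generated = [[(0.0, 0.0), (1.0, 1.0)]]
print(mpjpe(center_on_root(real), center_on_root(generated)))  # → 0.5
```

Temporal similarity between sequences of different lengths would additionally require an alignment step such as dynamic time warping before averaging joint errors.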