As interest in fitness grows, Action Quality Assessment (AQA) technology is gaining importance. Existing AQA methods, however, are largely limited to single-view RGB video and competitive sports scenarios. In this paper, we propose FLEX, the first large-scale multimodal, multi-action dataset that integrates surface electromyography (sEMG) signals into AQA. FLEX comprises high-precision MoCap, five-view RGB video, 3D pose, sEMG, and physiological information for 20 weight-loading actions, each performed 10 times by 38 subjects at three skill levels. Furthermore, FLEX incorporates a knowledge graph into AQA, encoding annotation rules as a penalty function. Experiments with a range of baseline methods demonstrate that multimodal, multi-view data and fine-grained annotations improve model performance.