Existing pedestrian attribute recognition (PAR) methods are built on RGB cameras, which makes them vulnerable to challenging lighting conditions and motion blur, and they largely overlook emotional attributes. This paper proposes a multimodal RGB-event pedestrian attribute recognition task that leverages event cameras, which offer strong low-light performance, high temporal resolution, and low power consumption. We release EventPAR, a large-scale multimodal pedestrian attribute recognition dataset containing 100K aligned RGB-event sample pairs and covering 50 appearance-related attributes as well as six emotions. We retrain and evaluate existing PAR models on this dataset to establish a benchmark, and further propose a multimodal pedestrian attribute recognition framework based on RWKV. Experiments on the proposed dataset and on the simulated MARS-Attribute and DukeMTMC-VID-Attribute datasets achieve state-of-the-art results. The source code and dataset will be released on GitHub.