To overcome the limitations of existing Quality-Diversity (QD) algorithms, this paper proposes Vector Quantized-Elites (VQ-Elites), a novel unsupervised-learning-based algorithm. VQ-Elites employs a Vector Quantized Variational Autoencoder (VQ-VAE) to generate action space grids automatically, without prior task knowledge. Unlike existing methods, the grids it produces are structured, which makes the approach more flexible and more broadly applicable. We further improve performance by introducing action space boundaries and a collaboration mechanism, and we propose two new metrics, the Effective Diversity Ratio and the Coverage Diversity Score, to quantify diversity in unsupervised learning settings. Experiments on robot arm posture control, mobile robot spatial exploration, and MiniGrid navigation demonstrate the efficiency, adaptability, and scalability of VQ-Elites, as well as its robustness to hyperparameter choices.
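The core idea of using a VQ-VAE codebook as a grid can be illustrated with a minimal sketch: each learned codebook vector defines one cell, and a sample is assigned to the cell of its nearest code. This is an illustrative example only, assuming a NumPy-based setup with hypothetical codebook sizes and dimensions, not the paper's implementation.

```python
import numpy as np

def assign_cells(latents, codebook):
    """Map each latent vector to the index of its nearest codebook entry.

    In a VQ-Elites-style algorithm, this index identifies the grid cell
    (niche) that the corresponding solution competes in.
    latents:  (N, D) array of encoder outputs.
    codebook: (K, D) array of learned code vectors (K cells).
    Returns:  (N,) array of cell indices in [0, K).
    """
    # Pairwise Euclidean distances between latents and codebook entries.
    dists = np.linalg.norm(latents[:, None, :] - codebook[None, :, :], axis=-1)
    return np.argmin(dists, axis=1)

# Hypothetical sizes for illustration: K=16 cells, D=4 latent dimensions.
rng = np.random.default_rng(0)
codebook = rng.normal(size=(16, 4))
latents = rng.normal(size=(8, 4))
cells = assign_cells(latents, codebook)
print(cells.shape)  # one cell index per latent vector
```

Because the codebook is learned from data rather than hand-designed, the grid adapts to the structure of the task, which is what removes the need for prior task knowledge.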