This paper highlights the need for a paradigm shift toward more efficient approaches that address the challenging deployment problems of computationally and memory-intensive deep neural network (DNN)-based Continual Learning (CL) methods. In particular, the emerging paradigm of Neuromorphic Continual Learning (NCL) leverages the principles of Spiking Neural Networks (SNNs) to enable efficient CL algorithms on resource-constrained computing systems operating in dynamically changing environments. This paper aims to provide a comprehensive study of NCL. First, we provide a detailed background on CL, covering its requirements, settings, metrics, and scenario classification, the Online Continual Learning (OCL) paradigm, and recent DNN-based methods for mitigating catastrophic forgetting. Then, we analyze these methods in terms of CL requirements, computational and memory costs, and network complexity, and emphasize the need for energy-efficient CL. After that, we provide a background on low-power neuromorphic systems, including encoding techniques, neuron dynamics, network architectures, learning rules, hardware processors, software and hardware frameworks, datasets, benchmarks, and evaluation metrics. We then comprehensively review and analyze the state of the art in NCL, presenting key ideas, implementation frameworks, and performance evaluations. We cover hybrid approaches that combine supervised and unsupervised learning paradigms, as well as optimization techniques such as SNN computation reduction, weight quantization, and knowledge distillation. We also discuss progress in practical NCL applications and conclude with an outlook on open research challenges in NCL, aiming to inspire future work on practical and biologically plausible OCL for the broader neuromorphic AI research community.