This paper presents a novel framework for improving the performance of quanvolutional neural networks (QuNNs) by introducing trainable quanvolutional layers. Conventional quanvolutional layers, while useful for feature extraction, have largely been static, offering limited adaptability. This study overcomes that limitation by enabling training within these layers, significantly increasing the flexibility and potential of QuNNs. However, stacking multiple trainable quanvolutional layers complicates gradient-based optimization, chiefly because gradients become difficult to access across the layers. To address this, the paper proposes a novel architecture, residual quanvolutional neural networks (ResQuNNs), which leverages residual learning: skip connections are added between layers to facilitate gradient flow. Inserting residual blocks between quanvolutional layers improves gradient access throughout the network and thereby improves training performance. In addition, the paper provides empirical evidence on the strategic placement of these residual blocks within QuNNs. Through extensive experiments, an efficient configuration of residual blocks is identified that enables gradients across all layers of the network and thus allows effective training. The findings indicate that the precise location of the residual blocks plays a crucial role in maximizing the performance gains of QuNNs. These results mark a substantial step forward in quantum deep learning, opening new avenues for both theoretical development and practical quantum computing applications.
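
To make the core idea concrete, the following is a minimal, illustrative sketch (not the paper's implementation) of a single residual quanvolutional block, assuming PennyLane's `TorchLayer` wrapper around a `default.qubit` simulator, 2x2 image patches, and an average-pooled skip path chosen here only to match output shapes; the names `ResQuanvBlock`, `quanv_circuit`, and `n_qubits`, as well as the specific circuit ansatz, are hypothetical choices and may differ from the architecture studied in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import pennylane as qml

n_qubits = 4                                     # one qubit per pixel of a 2x2 patch
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def quanv_circuit(inputs, weights):
    # Angle-encode the flattened 2x2 patch, then apply trainable entangling
    # layers so that the quanvolutional filter itself is learned.
    qml.AngleEmbedding(inputs, wires=range(n_qubits))
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

class ResQuanvBlock(nn.Module):
    """Residual block: out = quanv(x) + skip(x), so the loss gradient has a
    direct path around the quantum layer to earlier trainable layers."""
    def __init__(self, n_layers=1):
        super().__init__()
        weight_shapes = {"weights": qml.StronglyEntanglingLayers.shape(
            n_layers=n_layers, n_wires=n_qubits)}
        self.qfilter = qml.qnn.TorchLayer(quanv_circuit, weight_shapes)

    def forward(self, x):                        # x: (batch, H, W), single channel
        b, h, w = x.shape
        # Extract non-overlapping 2x2 patches -> (batch * num_patches, 4).
        patches = x.unfold(1, 2, 2).unfold(2, 2, 2).reshape(-1, 4)
        q_out = self.qfilter(patches).reshape(b, h // 2, w // 2, n_qubits)
        # Skip path: 2x2 average pooling so spatial shapes match the quantum output.
        skip = F.avg_pool2d(x.unsqueeze(1), 2).squeeze(1)
        return q_out + skip.unsqueeze(-1)        # broadcast skip over the channel axis

# Toy check: gradients reach the quantum weights through the residual block.
block = ResQuanvBlock()
image = torch.rand(1, 4, 4)
block(image).sum().backward()
for name, param in block.named_parameters():
    print(name, param.grad.shape)                # non-None grads confirm gradient flow
```

In this sketch, the skip path gives the loss a direct route to parameters of earlier layers, which is the property that the residual blocks in ResQuNNs are intended to provide; the exact circuit design, patch size, and placement of residual blocks in the paper may differ from the assumptions made here.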