This paper presents UAR-NVC (Unified AutoRegressive framework for memory-efficient Neural Video Compression), a novel framework that addresses the high memory consumption of video compression with Implicit Neural Representations (INRs) by adopting the frame-by-frame processing paradigm of conventional video compression frameworks. UAR-NVC unifies INR-based and conventional video compression frameworks under a temporal autoregressive modeling perspective: it partitions a video into multiple clips and trains a separate INR model instance for each clip. To reduce temporal redundancy between clips, we design two modules that optimize the initialization, training, and compression of the model parameters. Since the clip length can be varied, UAR-NVC supports adjustable latency, and experimental results show improved performance compared to various baseline models.
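The clip-wise autoregressive scheme summarized above can be sketched as follows. This is a minimal illustrative toy, not the paper's method: the helper names (`train_clip`, `quantize_residual`) and the choice of warm-starting each clip's INR from the previous clip's parameters and coding only the parameter residual are assumptions made for exposition.

```python
# Toy sketch of clip-wise autoregressive INR compression (illustrative only).

def split_into_clips(frames, clip_len):
    """Partition a frame sequence into fixed-length clips."""
    return [frames[i:i + clip_len] for i in range(0, len(frames), clip_len)]

def train_clip(clip, init_params):
    """Stand-in for fitting an INR to one clip (a real system would run
    gradient descent; here we just nudge the warm-started parameters)."""
    return [p + 0.1 * len(clip) for p in init_params]

def quantize_residual(params, prev_params, step=0.5):
    """Encode only the change from the previous clip's parameters,
    exploiting temporal redundancy between adjacent clips."""
    return [round((p - q) / step) for p, q in zip(params, prev_params)]

def compress(frames, clip_len=4, num_params=3):
    prev = [0.0] * num_params            # initialization for the first clip
    bitstream = []
    for clip in split_into_clips(frames, clip_len):
        params = train_clip(clip, prev)  # warm-start from the previous clip
        bitstream.append(quantize_residual(params, prev))
        prev = params                    # autoregressive dependency
    return bitstream

frames = list(range(10))                 # 10 dummy frames -> clips of 4, 4, 2
print(compress(frames))
```

Shorter clips lower the per-clip memory footprint and latency at the cost of more frequent parameter transmissions, which is the trade-off the adjustable clip length exposes.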