This study investigates unlearning, a technique for making models "forget" previously learned information, as a way to address privacy and copyright concerns in large language models (LLMs) and large multimodal models (LMMs). To address the lack of a practical evaluation framework for LMM unlearning, we propose the PULSE protocol, which evaluates LMM unlearning under realistic scenarios along two perspectives: (i) unlearning of pre-trained knowledge and (ii) long-term sustainability under sequential unlearning requests. Our results demonstrate that while existing unlearning techniques can remove knowledge acquired through fine-tuning, they struggle to eliminate information acquired during pre-training. Furthermore, techniques that successfully unlearn a batch of data in a single operation suffer substantial performance degradation when the same data are split and unlearned sequentially.
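To make the long-term sustainability perspective (ii) concrete, the following is a minimal sketch of how a sequential unlearning evaluation loop could be structured: the forget set is split into chunks, each chunk is removed in its own unlearning request, and model quality is measured after every request. The function and parameter names (`unlearn_fn`, `utility_fn`, `sequential_unlearning_eval`) are illustrative placeholders, not the paper's actual API.

```python
from typing import Any, Callable, Sequence


def sequential_unlearning_eval(
    model: Any,
    forget_set: Sequence[Any],
    num_requests: int,
    unlearn_fn: Callable[[Any, Sequence[Any]], Any],
    utility_fn: Callable[[Any], float],
) -> list[dict]:
    """Split `forget_set` into `num_requests` chunks, unlearn them
    sequentially, and record model utility after each request.

    Hypothetical harness: `unlearn_fn` applies one unlearning operation
    and returns the updated model; `utility_fn` scores retained ability
    on a held-out benchmark. Any trailing remainder of `forget_set`
    (when not evenly divisible) is ignored for simplicity.
    """
    chunk_size = len(forget_set) // num_requests
    history = []
    for i in range(num_requests):
        # One unlearning request over the next chunk of the forget set.
        chunk = forget_set[i * chunk_size:(i + 1) * chunk_size]
        model = unlearn_fn(model, chunk)
        # Measure how much general capability survives this request.
        history.append({"request": i + 1, "utility": utility_fn(model)})
    return history
```

Comparing the utility trajectory of this loop (e.g., `num_requests=10`) against a single-request run over the full forget set would surface exactly the degradation pattern the abstract describes.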