Daily Arxiv

This page curates AI-related papers published worldwide.
Summaries are generated with Google Gemini, and the page is run on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please credit the source when sharing.

ORL-LDM: Offline Reinforcement Learning Guided Latent Diffusion Model Super-Resolution Reconstruction

Created by
  • Haebom

Author

Shijie Lyu

Outline

In this paper, we propose a reinforcement-learning-based fine-tuning method for latent diffusion models (LDMs) applied to remote sensing image super-resolution. To overcome the limitations of existing deep-learning methods in handling complex scenes and preserving image detail, we build a reinforcement learning environment in which proximal policy optimization (PPO) optimizes the decision objective during the LDM's reverse denoising process. Experiments on the RESISC45 dataset show improvements over the baseline model of 3–4 dB in PSNR, 0.08–0.11 in SSIM, and 0.06–0.10 in LPIPS (where lower is better), with the method proving particularly effective on structured and complex natural scenes.
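The paper does not publish its environment details (the Limitations below note this), but the PPO algorithm it relies on is well defined. As a minimal illustrative sketch — not the authors' implementation — the function below computes PPO's clipped surrogate objective, the quantity maximized at each update step when fine-tuning a policy such as a denoising controller:

```python
import math

def ppo_clipped_objective(logp_new, logp_old, advantage, eps=0.2):
    """Clipped surrogate objective from PPO (Schulman et al., 2017).

    The probability ratio r = exp(logp_new - logp_old) is clipped to
    [1 - eps, 1 + eps], so a single update cannot move the policy far
    from the one that collected the data. This conservatism is what
    makes PPO a practical choice for fine-tuning a pretrained model
    (here, hypothetically, an LDM denoiser) without destroying it.
    """
    ratio = math.exp(logp_new - logp_old)
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps)
    # Take the minimum so clipping only ever makes the objective
    # more pessimistic, never more optimistic.
    return min(ratio * advantage, clipped * advantage)

# Positive advantage: the objective is capped once the ratio exceeds 1 + eps.
print(ppo_clipped_objective(logp_new=0.5, logp_old=0.0, advantage=1.0))
# Negative advantage: a large ratio is NOT clipped away, so bad moves
# are penalized in full.
print(ppo_clipped_objective(logp_new=0.5, logp_old=0.0, advantage=-1.0))
```

In a full setup, the advantage would be estimated from a reward signal (e.g., an image-quality metric on the super-resolved output) and the objective averaged over a batch before a gradient step; those details are assumptions here, since the paper leaves them unspecified.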

Takeaways, Limitations

Takeaways:
We experimentally demonstrate that a reinforcement-learning-based LDM fine-tuning method can improve remote sensing image super-resolution.
It outperforms existing methods, especially on structured and complex natural scenes.
Significant performance improvements were achieved across all three metrics (PSNR, SSIM, and LPIPS).
Limitations:
The effectiveness of the proposed method is based on experimental results on a specific dataset (RESISC45), and the generalization performance on other datasets requires further study.
Reinforcement learning-based methods can be computationally expensive and may not be suitable for real-time processing.
The specific reinforcement learning environment settings (states, actions, rewards) are not described in detail, which makes reproducibility difficult to assess.