Daily Arxiv

This page collects and organizes papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
The copyright of each paper belongs to its authors and their institutions; when sharing, please cite the source.

On Convolutions, Intrinsic Dimensions, and Diffusion Models

Created by
  • Haebom

Author

Kin Kwan Leung, Rasa Hosseinzadeh, Gabriel Loaiza-Ganem

Outline

Under the manifold hypothesis, which states that high-dimensional data lie on a low-dimensional submanifold, a diffusion model (DM) implicitly learns the local intrinsic dimension (LID) of each data point on this submanifold. Kamkari et al. (2024b) proposed FLIPD, which estimates the LID from the rate of change of the DM's log density of the Gaussian-convolved (noised) data as the noise level varies. This paper formally proves the theoretical soundness of FLIPD under realistic assumptions and shows that analogous results hold when the Gaussian convolution is replaced with a uniform convolution.
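To make the underlying idea concrete, here is a minimal numerical sketch of the convolution-based principle that FLIPD builds on, not the authors' estimator: for data on a d-dimensional manifold in R^D, the Gaussian-smoothed density at a point on the manifold scales roughly like sigma^(d-D) as the noise scale sigma shrinks, so d ≈ D + d(log p_sigma)/d(log sigma). The snippet approximates p_sigma with a Gaussian kernel density estimate on a toy circle; the sample size, noise scales, and variable names are illustrative choices, not taken from the paper.

```python
# Minimal sketch of the convolution-based LID principle (not FLIPD itself):
# for data on a d-dim manifold in R^D, p_sigma(x) ~ sigma^(d - D) near the manifold,
# so  d  ~  D + d(log p_sigma) / d(log sigma)  as sigma -> 0.
import numpy as np

rng = np.random.default_rng(0)

# Toy manifold: unit circle (intrinsic dimension 1) embedded in R^3.
D = 3
n = 20000
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
data = np.stack([np.cos(theta), np.sin(theta), np.zeros(n)], axis=1)

def log_p_sigma(x, data, sigma):
    """Log of the empirical data distribution convolved with N(0, sigma^2 I) (a Gaussian KDE)."""
    sq_dists = np.sum((data - x) ** 2, axis=1)
    log_kernels = -sq_dists / (2.0 * sigma**2) - D * np.log(sigma)  # sigma-independent constants dropped
    m = log_kernels.max()
    return m + np.log(np.mean(np.exp(log_kernels - m)))

def lid_estimate(x, data, sigma, eps=1e-2):
    """LID ~ D + d log p_sigma(x) / d log sigma, via a central finite difference in log sigma."""
    hi = log_p_sigma(x, data, sigma * np.exp(eps))
    lo = log_p_sigma(x, data, sigma * np.exp(-eps))
    return D + (hi - lo) / (2.0 * eps)

x = np.array([1.0, 0.0, 0.0])  # a point on the manifold
for sigma in (0.3, 0.1, 0.03):
    print(f"sigma={sigma:.2f}: LID estimate ~ {lid_estimate(x, data, sigma):.2f}")
# As sigma shrinks (with enough samples), the estimate approaches the true LID of 1.
```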

Takeaways, Limitations

Takeaways:
Formally establishes the correctness of FLIPD under realistic conditions, strengthening the theoretical foundation of LID estimation.
Shows that FLIPD-style estimation works not only with Gaussian convolution but also with uniform convolution, broadening the generality of the methodology (a companion sketch follows at the end of this summary).
LID estimation of this kind can benefit downstream applications such as outlier detection and adversarial-example detection.
Limitations:
No specific limitations are stated beyond what the paper itself discusses.
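The companion sketch referenced in the Takeaways above: the same D + d(log p)/d(log scale) recipe with a uniform (ball) kernel in place of the Gaussian one, echoing the paper's uniform-convolution result but not reproducing its estimator or proofs. Convolving the data with a uniform distribution on a radius-eps ball makes the smoothed density proportional to the neighbor count within eps divided by eps^D, so the recipe reduces to the slope of log(neighbor count) versus log(radius). The toy circle, radii, and variable names below are illustrative assumptions.

```python
# Companion sketch: the LID recipe with a uniform (ball) kernel.
# Convolving with Unif(ball of radius eps) gives p_eps(x) ~ N_eps(x) / eps^D up to constants,
# so D + d log p_eps / d log eps reduces to the slope of log(neighbor count) vs. log(radius).
import numpy as np

rng = np.random.default_rng(0)
D, n = 3, 20000
theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
data = np.stack([np.cos(theta), np.sin(theta), np.zeros(n)], axis=1)  # unit circle in R^3
x = np.array([1.0, 0.0, 0.0])  # a point on the manifold

def lid_estimate_uniform(x, data, eps_lo, eps_hi):
    """LID ~ slope of log(neighbor count) with respect to log(radius) between two radii."""
    dists = np.linalg.norm(data - x, axis=1)
    n_lo = np.count_nonzero(dists <= eps_lo)
    n_hi = np.count_nonzero(dists <= eps_hi)
    return (np.log(n_hi) - np.log(n_lo)) / (np.log(eps_hi) - np.log(eps_lo))

print(f"uniform-kernel LID estimate ~ {lid_estimate_uniform(x, data, 0.05, 0.15):.2f}")
# For this circle (intrinsic dimension 1) the slope should again be close to 1.
```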