Daily Arxiv

This page collects papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright of each paper belongs to its authors and their institutions; please cite the source when sharing.

MINERVA: Mutual Information Neural Estimation for Supervised Feature Selection

Created by
  • Haebom

Author

Taurai Muvunza, Egor Kraev, Pere Planell-Morell, Alexander Y. Shestopaloff

Outline

This paper introduces MINERVA, a supervised feature-selection method that models feature-target relationships via neural estimation of mutual information. Conventional feature filters can fail when the target depends on higher-order feature interactions rather than on individual feature contributions. MINERVA addresses this by parameterizing the mutual-information approximation with a neural network and performing selection through a carefully designed loss function augmented with a sparsity-inducing regularization term. The method is implemented as a two-step process that separates representation learning from feature selection, which improves generalization and yields a more faithful representation of feature importance. Experiments on synthetic datasets and a real-world fraud dataset demonstrate its accuracy.
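The neural mutual-information estimation the paper builds on is typically grounded in a variational lower bound such as the Donsker-Varadhan representation (as in MINE): I(X;Y) >= E_p[T(x,y)] - log E_{p_x p_y}[e^{T(x,y)}] for any critic T. The sketch below is not MINERVA's implementation; it evaluates the bound on correlated Gaussian data, where the optimal critic (the log density ratio) and the true MI are known in closed form, so the estimate can be checked against the analytic value.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.8, 100_000

# Joint samples (x, y) from a correlated bivariate Gaussian
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
y_marg = rng.permutation(y)  # shuffle y to simulate product-of-marginals samples

def critic(x, y):
    # Optimal critic: log p(x, y) / (p(x) p(y)), closed-form for the Gaussian case.
    # In MINE-style estimators this role is played by a trained neural network.
    s2 = 1.0 - rho**2
    return (-(x**2 - 2 * rho * x * y + y**2) / (2 * s2)
            + (x**2 + y**2) / 2 - 0.5 * np.log(s2))

def dv_bound(t_joint, t_marg):
    # Donsker-Varadhan lower bound: E_p[T] - log E_{p_x p_y}[exp(T)]
    return t_joint.mean() - np.log(np.exp(t_marg).mean())

mi_est = dv_bound(critic(x, y), critic(x, y_marg))
mi_true = -0.5 * np.log(1.0 - rho**2)  # analytic MI of the bivariate Gaussian
```

With the optimal critic, the bound is tight, so the sample estimate lands close to the analytic value; a learned critic would approach this from below during training.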

Takeaways, Limitations

Takeaways:
  • Effectively captures complex feature-target relationships by accounting for higher-order feature interactions.
  • Improves generalization by separating representation learning from feature selection.
  • Demonstrates effectiveness through experiments on synthetic and real-world datasets.
  • Can deliver exact solutions in the reported experiments.
Limitations:
  • The paper does not explicitly discuss its limitations.
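MINERVA couples its MI objective with a sparsity-inducing regularizer to zero out uninformative features. As a simplified stand-in (not the paper's loss), the sketch below uses an L1 penalty on linear feature weights, optimized by proximal gradient descent (ISTA), to show how such a penalty drives the weights of irrelevant features exactly to zero while keeping the informative ones.

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 500, 10
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:3] = [2.0, -1.5, 1.0]      # only the first 3 features are informative
y = X @ w_true + 0.1 * rng.standard_normal(n)

lam = 0.1                          # strength of the sparsity penalty
L = np.linalg.eigvalsh(X.T @ X / n).max()  # Lipschitz constant of the gradient
step = 1.0 / L
w = np.zeros(d)
for _ in range(500):
    grad = X.T @ (X @ w - y) / n   # gradient of the smooth squared-error term
    z = w - step * grad
    # Soft-thresholding: the proximal operator of the L1 penalty
    w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

selected = np.flatnonzero(np.abs(w) > 1e-6)  # surviving (selected) features
```

Note this linear surrogate cannot capture the higher-order interactions MINERVA targets; it only illustrates the sparsification mechanism that the regularization term provides.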