Daily Arxiv

This page collects papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright of each paper belongs to its authors and their institutions; when sharing, please cite the source.

Tversky Neural Networks: Psychologically Plausible Deep Learning with Differentiable Tversky Similarity

Created by
  • Haebom

Author

Moussa Koulako Bala Doumbouya, Dan Jurafsky, Christopher D. Manning

Outline

This paper argues that the geometric similarity models commonly used in deep learning lack psychological plausibility and proposes a differentiable parameterization of Tversky's feature-set-based similarity model for use in deep learning. Building on this, the authors develop new neural network components, including the Tversky projection layer, and show performance improvements over the standard linear projection layer in experiments on image recognition and language modeling. They further interpret both kinds of projection layer as computing the similarity between input stimuli and learned prototypes, and propose a new visualization technique that highlights the interpretability of the Tversky projection layer.
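For reference, below is a minimal sketch of what a Tversky-style projection layer could look like in PyTorch. It follows Tversky's contrast model, S(x, p) = θ·f(common) − α·f(x only) − β·f(p only), with a soft, ReLU/minimum-based measure of shared and distinctive features. The class name TverskyProjection, the parameterization, and the feature-membership functions are illustrative assumptions, not the authors' exact construction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class TverskyProjection(nn.Module):
    """Illustrative Tversky-style projection layer (not the paper's exact design).

    Each output unit holds a learned prototype. The output is a Tversky contrast
    score: theta * f(shared features) - alpha * f(features only in the input)
    - beta * f(features only in the prototype), using soft set operations.
    """

    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(out_features, in_features) * 0.02)
        # Contrast-model weights; kept positive via softplus in forward().
        self.theta = nn.Parameter(torch.zeros(1))
        self.alpha = nn.Parameter(torch.zeros(1))
        self.beta = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, in_features); prototypes: (out_features, in_features)
        x_pos = torch.relu(x).unsqueeze(1)                # (batch, 1, in)
        p_pos = torch.relu(self.prototypes).unsqueeze(0)  # (1, out, in)

        common = torch.minimum(x_pos, p_pos).sum(-1)      # soft f(X ∩ P)
        x_only = torch.relu(x_pos - p_pos).sum(-1)        # features distinctive to x
        p_only = torch.relu(p_pos - x_pos).sum(-1)        # features distinctive to p

        theta, alpha, beta = (F.softplus(w) for w in (self.theta, self.alpha, self.beta))
        return theta * common - alpha * x_only - beta * p_only  # (batch, out_features)


if __name__ == "__main__":
    # Drop-in usage where a linear projection would normally produce class scores.
    layer = TverskyProjection(in_features=64, out_features=10)
    scores = layer(torch.randn(8, 64))
    print(scores.shape)  # torch.Size([8, 10])
```

Because each output unit is tied to a single prototype vector, the learned prototypes can be inspected directly, which is what makes this family of layers more interpretable than a plain linear projection.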

Takeaways, Limitations

Takeaways:
  • Integrates psychological similarity theory into deep learning, improving model interpretability.
  • Introduces the Tversky projection layer, a new neural network component that can replace the standard linear projection layer.
  • Demonstrates improved performance on image recognition and language modeling tasks.
  • Provides a framework that unifies the interpretation of both projection layer types as prototype-based similarity computation.
  • Proposes a new visualization technique that improves the interpretability of Tversky projection layers.
Limitations:
  • The abstract does not explicitly discuss limitations.