
Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

A Simple Baseline for Stable and Plastic Neural Networks

Created by
  • Haebom

Author

Etienne Künzel, Achref Jaziri, Visvanathan Ramesh

Outline

This paper presents RDBP, a simple baseline that addresses the tendency of existing continual learning models to favor either plasticity or stability. RDBP combines two complementary mechanisms: ReLUDown, which preserves feature sensitivity while preventing neurons from going dormant, and Decreasing Backpropagation, which gradually shields early layers from abrupt updates. Evaluated on the Continual ImageNet benchmark, RDBP matches or exceeds state-of-the-art methods in both plasticity and stability while reducing computational cost. RDBP is therefore a practical option for real-world continual learning and provides a clear baseline against which future continual learning strategies can be evaluated.
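Since this summary does not reproduce the paper's exact formulations, the sketch below only illustrates the two mechanisms as described: a ReLU variant that keeps a nonzero gradient for strongly negative inputs (so neurons cannot go fully dormant), and layer-wise gradient damping that shrinks updates toward the early layers. The activation shape, the hinge offset `d`, and the geometric `decay` schedule are all assumptions for illustration, not the authors' definitions.

```python
import torch
import torch.nn as nn

class ReLUDown(nn.Module):
    """Hypothetical form of the ReLUDown activation.

    Standard ReLU zeroes out negative inputs, which can leave units
    permanently dormant; here the response bends downward again past
    an assumed offset -d, so the gradient stays nonzero there.
    """
    def __init__(self, d: float = 1.0):
        super().__init__()
        self.d = d

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x > 0: identity; -d <= x <= 0: zero; x < -d: slope-1 ramp.
        return torch.relu(x) - torch.relu(-x - self.d)

def apply_decreasing_backprop(model: nn.Sequential, decay: float = 0.5):
    """One reading of Decreasing Backpropagation: damp gradients more
    strongly the closer a layer is to the input, protecting early
    layers from abrupt updates. The paper's actual schedule may differ."""
    layers = [m for m in model if isinstance(m, nn.Linear)]
    depth = len(layers)
    for i, layer in enumerate(layers):
        scale = decay ** (depth - 1 - i)  # smallest scale at layer 0
        for p in layer.parameters():
            p.register_hook(lambda g, s=scale: g * s)

# Minimal usage: an MLP with ReLUDown and damped early-layer updates.
model = nn.Sequential(
    nn.Linear(32, 64), ReLUDown(),
    nn.Linear(64, 64), ReLUDown(),
    nn.Linear(64, 10),
)
apply_decreasing_backprop(model)
loss = model(torch.randn(8, 32)).pow(2).mean()
loss.backward()  # gradients in earlier Linear layers are scaled down
```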

Takeaways, Limitations

Takeaways:
Presents a new method (RDBP) that effectively addresses the trade-off between plasticity and stability in existing continual learning methods.
Achieves performance comparable or superior to state-of-the-art methods while reducing computational cost.
Offers a practical solution to real-world continual learning problems.
Provides a clear baseline for future continual learning research.
Limitations:
The paper does not explicitly discuss the limitations of RDBP. Additional experiments or analyses would be needed to clarify them, for example, studies of generalization to specific types of datasets or tasks, or of performance in other continual learning settings.