Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

System 2 Reasoning for Human-AI Alignment: Generality and Adaptivity via ARC-AGI

Created by
  • Haebom

Authors

Sejin Kim, Sundong Kim

Outline

This paper highlights that Transformer-based models still lack the generality and adaptivity required for human-AI alignment. Examining their weaknesses on the ARC-AGI benchmark, we identify gaps in compositional generalization and in adaptation to novel rules, and argue that closing these gaps requires rethinking both the reasoning pipeline and how it is evaluated. We propose three research directions: a symbolic representation pipeline for compositional generality, an interactive feedback-driven reasoning loop for adaptivity, and test-time task augmentation that balances the two. Finally, we show how ARC-AGI's evaluation tools can be used to track progress in symbolic generality, feedback-driven adaptivity, and task-level robustness, guiding future research on robust human-AI alignment.
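
The third research direction, test-time task augmentation, is described here only at a high level. As a rough illustration of what such augmentation could look like on ARC-style grids, the sketch below builds rule-preserving variants of a task via rotations, reflections, and color permutations; the augment_task helper and the specific transforms are illustrative assumptions, not the authors' method.

```python
# Minimal sketch (assumed, not from the paper): test-time task augmentation
# for ARC-style grids, represented as lists of lists of ints (colors 0-9).
import random
from typing import Dict, List

Grid = List[List[int]]

def rotate90(grid: Grid) -> Grid:
    """Rotate a grid 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

def flip_horizontal(grid: Grid) -> Grid:
    """Mirror a grid left-to-right."""
    return [row[::-1] for row in grid]

def permute_colors(grid: Grid, mapping: Dict[int, int]) -> Grid:
    """Relabel colors according to a permutation of the palette 0-9."""
    return [[mapping.get(c, c) for c in row] for row in grid]

def augment_task(task: dict, n_variants: int = 4, seed: int = 0) -> List[dict]:
    """Create n_variants transformed copies of an ARC task.

    Assumes the public ARC-AGI JSON structure: "train" and "test" lists of
    {"input": grid, "output": grid} pairs. The same random transform is
    applied to every grid in a variant, so the underlying rule is preserved
    while its surface form changes.
    """
    rng = random.Random(seed)
    variants = []
    for _ in range(n_variants):
        palette = list(range(10))
        rng.shuffle(palette)
        mapping = dict(enumerate(palette))
        rotations = rng.randrange(4)      # number of 90-degree rotations
        mirror = rng.random() < 0.5       # whether to flip horizontally

        def transform(grid: Grid) -> Grid:
            g = grid
            for _ in range(rotations):
                g = rotate90(g)
            if mirror:
                g = flip_horizontal(g)
            return permute_colors(g, mapping)

        variants.append({
            split: [{"input": transform(pair["input"]),
                     "output": transform(pair["output"])}
                    for pair in task[split]]
            for split in ("train", "test")
        })
    return variants
```

A solver could then be run on each variant, with agreement across variants used as a robustness signal at test time.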

Takeaways, Limitations

Takeaways:
We clearly lay out the limitations of System 2 reasoning in Transformer-based models and suggest directions for improvement.
We outline research directions for compositional generalization and adaptation to novel rules.
We present a way to track progress in human-AI alignment research using ARC-AGI's evaluation tools (a minimal evaluation sketch follows this section).
We emphasize the importance of symbolic representation, interactive feedback, and test-time task augmentation.
Limitations:
The paper offers little concrete detail on how the three proposed research directions could be implemented.
The limitations of the ARC-AGI evaluation tools themselves receive little discussion.
The practical effectiveness of the proposed directions is not verified experimentally.
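
To make the progress-tracking takeaway above concrete, here is a minimal sketch of an exact-match evaluation loop over locally stored ARC-AGI tasks, assuming the public JSON task format ("train" and "test" lists of input/output grids). The identity_solver baseline, the Solver interface, and the data/evaluation path are hypothetical placeholders, not part of the paper.

```python
# Minimal sketch (assumed): tracking exact-match accuracy over ARC-AGI tasks
# stored as JSON files in the public format.
import json
from pathlib import Path
from typing import Callable, List

Grid = List[List[int]]
# A solver maps the task's training pairs plus one test input to a predicted grid.
Solver = Callable[[list, Grid], Grid]

def identity_solver(train_pairs: list, test_input: Grid) -> Grid:
    """Trivial baseline: predict the test input unchanged."""
    return test_input

def evaluate(task_dir: str, solver: Solver) -> float:
    """Return the fraction of test grids the solver reproduces exactly."""
    correct, total = 0, 0
    for path in sorted(Path(task_dir).glob("*.json")):
        task = json.loads(path.read_text())
        for pair in task["test"]:
            prediction = solver(task["train"], pair["input"])
            correct += int(prediction == pair["output"])
            total += 1
    return correct / max(total, 1)

if __name__ == "__main__":
    # Point this at a local copy of the ARC-AGI evaluation tasks.
    print(f"exact-match accuracy: {evaluate('data/evaluation', identity_solver):.3f}")
```

Per-task accuracy logged over time would give a simple trace of the symbolic generality, feedback-driven adaptivity, and task-level robustness axes the paper highlights.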