This paper addresses the dual goals of improving the reproducibility and the accountability of machine learning research, two goals typically discussed in separate contexts: reproducibility grounded in scientific reasoning and accountability grounded in ethical reasoning. Specifically, to address the "responsibility gap," in which machine learning scientists often escape accountability because of their remoteness from downstream applications, we propose the concept of claim replicability as an alternative to model performance reproducibility. We argue that claim replicability is useful for holding machine learning scientists accountable when they make non-replicable claims that could lead to harm through misuse or misunderstanding. To this end, we define both notions and defend the advantages of claim replicability. Furthermore, we frame the implementation of claim replicability as a social project rather than a merely technical challenge, and we discuss its competing epistemological principles, the notions of circulating reference and interpretative labor, and its practical implications for research communication.