This paper investigates how large language models (LLMs) perform novel tasks through in-context learning. We probe the generalization mechanism inside LLMs using off-by-one addition (1+1=3, 2+2=5, 3+3=?), in which every answer exceeds the true sum by one. Using circuit-style analysis techniques, we trace the model's internal computation and uncover how the model generalizes from standard addition to off-by-one addition. Specifically, we discover a +1 function induction mechanism, show that the +1 function is computed by multiple attention heads operating in parallel, and demonstrate that this mechanism is reused across different tasks (e.g., shifted multiple-choice QA and octal addition).
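The off-by-one addition task can be made concrete with a minimal sketch of the prompt format (the function name and structure below are illustrative, not taken from the paper): each in-context demonstration reports a sum that is one greater than the true sum, and the model must infer the +1 shift to answer the final query.

```python
def off_by_one_prompt(pairs, query):
    """Build a few-shot prompt in which every answer is (a + b) + 1."""
    lines = [f"{a}+{b}={a + b + 1}" for a, b in pairs]
    lines.append(f"{query[0]}+{query[1]}=")  # query left unanswered
    return "\n".join(lines)

prompt = off_by_one_prompt([(1, 1), (2, 2)], (3, 3))
# Yields the prompt "1+1=3\n2+2=5\n3+3="; under the induced +1
# rule, the correct continuation is 7 rather than the true sum 6.
```

A model that merely retrieves memorized addition facts would answer 6 here; answering 7 requires composing standard addition with the induced +1 function.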