In this paper, we show that, in addition to the previously described token-level induction head that copies individual tokens, language models contain a concept-level induction head that copies entire lexical units. The concept-level induction head attends to the ends of multi-token words and copies meaningful text in parallel with the token-level induction head. We show that the concept-level induction head is responsible for semantic tasks such as word-level translation, whereas the token-level induction head is essential for tasks that require verbatim copying, such as copying nonsense token sequences. The two routes operate independently: ablating the token-level induction head causes the model to paraphrase rather than copy literally. Patching and analyzing the output of the concept-level induction head reveals word representations that are independent of language and surface form, suggesting that large language models represent abstract word meanings beyond any particular language or form.
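As a minimal sketch of the ablation methodology summarized above (assuming a TransformerLens-style workflow; the model name, layer, and head index are illustrative placeholders, not the heads identified in the paper), one can zero out a candidate token-level induction head and check whether verbatim copying degrades into paraphrase:

```python
# Minimal sketch, not the authors' exact code: zero-ablate one attention head
# with TransformerLens hooks and compare next-token predictions against a
# clean run on a repeated-sequence prompt.
from transformer_lens import HookedTransformer, utils

model = HookedTransformer.from_pretrained("gpt2")  # placeholder model

LAYER, HEAD = 5, 1  # hypothetical location of a token-level induction head

def zero_head(z, hook):
    # z has shape [batch, position, head_index, d_head]
    z[:, :, HEAD, :] = 0.0
    return z

prompt = "colorless green ideas sleep furiously. colorless green ideas"
tokens = model.to_tokens(prompt)

clean_logits = model(tokens)
ablated_logits = model.run_with_hooks(
    tokens,
    fwd_hooks=[(utils.get_act_name("z", LAYER), zero_head)],
)

# If the ablated head supported literal copying, the top prediction at the
# final position should shift away from the verbatim continuation.
clean_top = int(clean_logits[0, -1].argmax())
ablated_top = int(ablated_logits[0, -1].argmax())
print("clean:", model.tokenizer.decode([clean_top]))
print("ablated:", model.tokenizer.decode([ablated_top]))
```

A patching experiment like the one described in the abstract would follow the same hook pattern, but would copy the head's output from a source prompt into a destination run instead of zeroing it.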