In this paper, we present Minnow (Meta-training for In-context Learning of Words), a novel method for enhancing few-shot word learning. The approach builds on the human ability to rapidly learn new words from a handful of examples and use them flexibly across diverse contexts. Minnow trains a language model to generate usage examples of new words, representing each new word with a special placeholder token. The key idea is to develop a general word-learning ability through repeated training on a diverse set of new words. Experimental results show that Minnow, trained from scratch on child language data, achieves few-shot word learning performance comparable to that of a large language model (LLM) pre-trained on significantly more data. Furthermore, fine-tuning a pre-trained LLM with Minnow improves its ability to segment new words, identify their syntactic categories, and generate new usage examples and definitions. These results highlight Minnow's data efficiency and its potential to enhance language model performance on word learning tasks.
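To make the meta-training setup concrete, the following is a minimal sketch of how episodes for in-context word learning might be constructed: each usage of a target word is replaced by a placeholder token, a few masked usages serve as in-context examples, and the model is trained to generate a held-out usage as the continuation. The names (`PLACEHOLDER`, `make_episode`, the `<sep>` separator) and the exact episode format are illustrative assumptions, not the paper's specification.

```python
import random

PLACEHOLDER = "<new-word>"  # hypothetical special token standing in for the word being learned

def make_episode(word, usages, n_support=4):
    """Build one meta-training episode for a single word.

    The target word is replaced by the placeholder token in every usage.
    The first n_support sentences act as in-context examples; the model is
    trained to generate the final (held-out) usage as its continuation.
    """
    masked = [u.replace(word, PLACEHOLDER) for u in usages]
    random.shuffle(masked)
    support, query = masked[:n_support], masked[n_support]
    # Separator and formatting are assumptions; the paper's exact format may differ.
    prompt = " <sep> ".join(support) + " <sep> "
    return prompt, query

# Meta-training loops over many different words, so the model acquires a
# general ability to infer a new word's usage from a few in-context examples.
prompt, target = make_episode(
    "zorp",
    ["she tried to zorp the ball",
     "they zorp every morning",
     "he will zorp tomorrow",
     "we always zorp together",
     "can you zorp this for me?"],
)
print(prompt)   # in-context examples containing the placeholder token
print(target)   # continuation the language model learns to generate
```

In this sketch, swapping the real word for a placeholder forces the model to rely on the in-context examples rather than prior lexical knowledge, which is what allows the same procedure to generalize to genuinely novel words at test time.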