Inspired by the human ability to rapidly learn new words from only a few examples and use them flexibly across diverse contexts, this paper presents Minnow (Meta-training for In-context Learning of Words), a method for improving a language model's ability to learn new words from a few in-context examples. Minnow trains a language model to generate usage examples of a new word, with a special placeholder token standing in for the word. Repeating this training across a large variety of new words develops a general word-learning capability. Experiments show that a language model trained from scratch with Minnow on child language data achieves few-shot word learning comparable to that of large language models (LLMs) pre-trained on far more data. Furthermore, fine-tuning a pre-trained LLM with Minnow improves its ability to segment new words, identify their syntactic categories, and generate new usage examples and definitions from a few in-context examples. These results highlight Minnow's data efficiency and its potential to improve word-learning performance in language models.
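To make the training setup concrete, the sketch below illustrates one way a Minnow-style meta-training episode could be constructed: occurrences of a sampled target word are replaced with a special placeholder token, a few masked usages serve as in-context examples, and the model is trained to generate one held-out masked usage. This is a minimal illustration, not the authors' implementation; the placeholder string, separator, and helper names are assumptions.

```python
# Minimal sketch (assumptions noted): build a few-shot episode for one new word.
import random

PLACEHOLDER = "<new-word>"  # assumed special token standing in for the target word
SEPARATOR = " <sep> "       # assumed delimiter between usage examples


def make_episode(word: str, usages: list[str], num_context: int = 3) -> tuple[str, str]:
    """Return (prompt, target) for one meta-training episode.

    The prompt holds `num_context` usages with the target word masked by the
    placeholder; the target is one additional masked usage that the model is
    trained to generate given the in-context examples.
    """
    masked = [u.replace(word, PLACEHOLDER) for u in usages]
    random.shuffle(masked)
    context, target = masked[:num_context], masked[num_context]
    return SEPARATOR.join(context) + SEPARATOR, target


if __name__ == "__main__":
    usages = [
        "She spread the dax on her toast.",
        "He bought a jar of dax at the market.",
        "The dax tasted sweet and sticky.",
        "A spoonful of dax fell on the table.",
    ]
    prompt, target = make_episode("dax", usages)
    print("PROMPT:", prompt)
    print("TARGET:", target)
```

Repeating this episode construction over many different words, each time mapped onto the same placeholder token, is what drives the general word-learning ability described above.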