This paper proposes TransMiter, a lightweight adapter for efficient adaptive knowledge transfer across vision-language models (VLMs). TransMiter captures the knowledge gap between a pre-trained and a fine-tuned VLM in an unsupervised manner, transferring that knowledge without backpropagation. It consists of only a few layers and adds negligible inference cost, and incorporating a small amount of labeled data lifts performance beyond that of the fine-tuned, robust model. Experimental results demonstrate that TransMiter transfers adaptive knowledge effectively and efficiently across VLMs of varying sizes and architectures while preserving their generalization ability.
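The abstract does not specify the transfer mechanism, so the following is only an illustrative sketch of how a lightweight adapter could be fit without backpropagation: a closed-form least-squares map from pre-trained features to fine-tuned features over unlabeled data. All function names, dimensions, and the linear form of the adapter are hypothetical assumptions, not the paper's actual method.

```python
# Hypothetical sketch (not the paper's code): fit a linear adapter that maps
# a pre-trained VLM's features toward a fine-tuned VLM's features using a
# closed-form least-squares solve on unlabeled data -- no gradients involved.
import torch


def fit_linear_adapter(feats_pretrained: torch.Tensor,
                       feats_finetuned: torch.Tensor) -> torch.Tensor:
    """Solve W = argmin_W ||feats_pretrained @ W - feats_finetuned||_F^2
    in closed form; no backpropagation is required."""
    # torch.linalg.lstsq solves A X = B for X given A (N x d) and B (N x d).
    return torch.linalg.lstsq(feats_pretrained, feats_finetuned).solution


# Stand-ins for features extracted from the same unlabeled images by the
# pre-trained and fine-tuned models (random tensors for demonstration).
N, d = 4096, 512
f_pre = torch.randn(N, d)                        # pre-trained model features
f_ft = f_pre + f_pre @ (0.1 * torch.randn(d, d)) # fine-tuned model features

W = fit_linear_adapter(f_pre, f_ft)
adapted = f_pre @ W  # a single extra matmul at inference time
print(torch.nn.functional.mse_loss(adapted, f_ft).item())
```

Under these assumptions, the adapter is cheap in both senses claimed by the abstract: fitting it is a single linear solve rather than gradient training, and applying it adds one small matrix multiplication at inference.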