In this paper, we propose Learnable Latent Codes as Bridges (LCB), a novel approach that introduces learnable latent codes as the interface layer between high-level task planners based on large language models (LLMs) and low-level policies for robot control, addressing the need for a well-defined communication channel between the two. Existing LLM-based approaches are limited because some low-level behaviors (e.g., dance moves) are difficult to express in natural language, and because directly fine-tuning an LLM to output actions suffers from domain shift and catastrophic forgetting. LCB uses learnable latent codes as a bridge between the LLM and the low-level policy, allowing the LLM to flexibly convey goals without the constraints of language and enabling fine-tuning without destroying the embedding space of the pretrained word tokens. On the Language Table and CALVIN benchmarks, we experimentally verify that LCB outperforms existing approaches that use pure language as the interface layer (including those based on GPT-4V) on tasks requiring reasoning and multi-step behaviors.
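To make the latent-bridge idea concrete, the following is a minimal PyTorch sketch. The class names (LatentBridge, LowLevelPolicy), the dimensions, and the single learnable <ACT> token are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# Minimal sketch of the LCB idea (illustrative assumptions, not the paper's code).
import torch
import torch.nn as nn


class LatentBridge(nn.Module):
    """Maps the LLM hidden state at a learnable <ACT> token to a latent code."""

    def __init__(self, llm_dim=768, latent_dim=64):
        super().__init__()
        # Learnable embedding for an extra <ACT> token; in the full pipeline it
        # would be appended to the prompt's token embeddings so the LLM writes
        # goal information into its hidden state at this position, leaving the
        # pretrained word-token embeddings untouched.
        self.act_token = nn.Parameter(torch.randn(1, 1, llm_dim) * 0.02)
        self.project = nn.Linear(llm_dim, latent_dim)

    def forward(self, llm_hidden_at_act):
        # llm_hidden_at_act: (batch, llm_dim) hidden state above the <ACT> token.
        return self.project(llm_hidden_at_act)


class LowLevelPolicy(nn.Module):
    """Policy conditioned on the latent code instead of raw language."""

    def __init__(self, obs_dim=39, latent_dim=64, action_dim=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, action_dim),
        )

    def forward(self, obs, latent_code):
        return self.net(torch.cat([obs, latent_code], dim=-1))


if __name__ == "__main__":
    bridge, policy = LatentBridge(), LowLevelPolicy()
    llm_hidden = torch.randn(2, 768)   # stand-in for the LLM hidden state at <ACT>
    obs = torch.randn(2, 39)           # stand-in for robot observations
    z = bridge(llm_hidden)             # latent "bridge" code
    action = policy(obs, z)            # low-level action conditioned on z
    print(action.shape)                # torch.Size([2, 7])
```

In this sketch only the bridge and the policy carry new parameters, which is intended to mirror how LCB can be trained without overwriting the LLM's pretrained word-token embedding space.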