The authors study whether pretraining a small-scale decoder language model (LM) under memory and latency constraints enables rapid adaptation to unseen languages and zero-shot transfer. Specifically, their method replaces part of the autoregressive pretraining objective with first-order Model-Agnostic Meta-Learning (MAML). Experiments on Tagalog and Cebuano show that MAML improves zero-shot micro-F1 scores and shortens convergence time.
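For context, the core mechanism is first-order MAML (FOMAML) applied to an autoregressive language-modeling loss: an inner loop adapts a copy of the model on a per-language support batch, and the query-set gradients of the adapted copy are applied directly to the original weights. The sketch below is a minimal illustration of that loop under these assumptions, not the authors' implementation; `TinyDecoderLM`, the hyperparameters, and the random token batches are placeholders.

```python
import copy
import torch
import torch.nn as nn

# Hypothetical tiny autoregressive decoder standing in for the paper's model.
class TinyDecoderLM(nn.Module):
    def __init__(self, vocab_size=100, d_model=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        h, _ = self.rnn(self.embed(tokens))
        return self.head(h)

def lm_loss(model, tokens):
    # Next-token prediction: predict position t+1 from positions <= t.
    logits = model(tokens[:, :-1])
    return nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1)
    )

def fomaml_step(model, meta_opt, tasks, inner_lr=1e-2, inner_steps=3):
    """One first-order MAML update. `tasks` is a list of (support, query)
    token batches, one pair per language/task in the meta-batch."""
    meta_opt.zero_grad()
    for support, query in tasks:
        # Inner loop: adapt a copy of the model on the support batch.
        adapted = copy.deepcopy(model)
        inner_opt = torch.optim.SGD(adapted.parameters(), lr=inner_lr)
        for _ in range(inner_steps):
            inner_opt.zero_grad()
            lm_loss(adapted, support).backward()
            inner_opt.step()
        # Outer loss on the query batch. First-order approximation:
        # gradients w.r.t. the adapted weights are accumulated onto the
        # original weights, ignoring second-order terms.
        adapted.zero_grad()
        lm_loss(adapted, query).backward()
        for p, p_adapted in zip(model.parameters(), adapted.parameters()):
            if p.grad is None:
                p.grad = p_adapted.grad.clone()
            else:
                p.grad += p_adapted.grad
    # Average the accumulated meta-gradients over the meta-batch.
    for p in model.parameters():
        p.grad /= len(tasks)
    meta_opt.step()

# Usage with random token batches (purely illustrative).
model = TinyDecoderLM()
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
fake_batch = lambda: torch.randint(0, 100, (8, 16))
tasks = [(fake_batch(), fake_batch()) for _ in range(4)]
fomaml_step(model, meta_opt, tasks)
```

In the paper's setting, each (support, query) pair would be drawn from a single language's pretraining data, so that the meta-update favors an initialization that adapts quickly to a new language such as Tagalog or Cebuano.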