This paper explores the potential of leveraging the physical reasoning capabilities of large language models (LLMs) for human-robot interaction (HRI) in disaster relief settings. To address the size constraints of existing LLMs, we propose a dataset and pipeline for building a Field Reasoning and Instruction Decoding Agent (FRIDA) model. Combining the knowledge of domain experts and linguists, we craft high-quality few-shot prompts, use them to generate synthetic training data, and fine-tune a small instruction-tuned model on that data. Our experiments show that a FRIDA model trained solely on object physical state and feature data outperforms both models trained on the full synthetic dataset and the baseline models, indicating that physical common sense can be instilled with minimal data.