This paper proposes MLLM-Fabric, a robotic framework that leverages a multimodal large language model (MLLM) to select fabrics that meet functional and quality requirements in robotic textile manufacturing, apparel production, and smart retail. The system is trained to rank fabric attributes through supervised fine-tuning and explanation-based distillation. Furthermore, we release a dataset of 220 diverse fabrics, comprising RGB images with synchronized visual-tactile and pressure data. Fabric-Llama-90B consistently outperforms pretrained vision-language models in both attribute ranking and selection confidence.
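To make the data and task concrete, the sketch below shows one plausible way a multimodal fabric sample and a pairwise attribute-ranking query could be represented; the field names, shapes, and `score_fn` interface are illustrative assumptions, not the released dataset's actual schema or the paper's implementation.

```python
# Minimal sketch of a multimodal fabric record and a pairwise
# attribute-ranking comparison. All names/shapes are assumptions.
from dataclasses import dataclass
from typing import Callable
import numpy as np

@dataclass
class FabricSample:
    fabric_id: str
    rgb: np.ndarray             # assumed (H, W, 3) camera image
    visual_tactile: np.ndarray  # assumed (H, W, 3) tactile-sensor image
    pressure: np.ndarray        # assumed (T,) contact-pressure readings

def rank_pair(
    score_fn: Callable[[FabricSample, str], float],
    a: FabricSample,
    b: FabricSample,
    attribute: str,
) -> str:
    """Return the id of the fabric the model scores higher on `attribute`
    (e.g., 'softness' or 'durability'); score_fn stands in for the MLLM."""
    return a.fabric_id if score_fn(a, attribute) >= score_fn(b, attribute) else b.fabric_id
```

In this framing, attribute ranking reduces to repeated pairwise comparisons scored by the model, which is one common way to evaluate ranking quality against human or sensor-derived ground truth.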