This paper analyzes the vulnerability of large language models (LLMs) and vision-language models (VLMs) to identify why they are highly sensitive to small perturbations of their parameters and inputs. We propose a new stability metric, first-order local influence (FI), grounded in information geometry, which quantifies the sensitivity of individual parameters and input dimensions (pixels or token embeddings). By analyzing LLMs and VLMs ranging from 1.5B to 13B parameters, we reveal that a small number of parameters or input dimensions with high FI values disproportionately contribute to model vulnerability, and we show that mitigating the influence of these vulnerable parameters improves model performance.
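
To make the idea of a first-order, gradient-based parameter-sensitivity score concrete, the sketch below approximates per-parameter influence by the squared gradient of the sequence log-likelihood (a diagonal empirical Fisher proxy) and ranks parameter tensors by that score. This is a minimal illustration under stated assumptions, not the paper's exact FI definition: the `gpt2` checkpoint, the single evaluation sentence, and the mean-score ranking are placeholders chosen for brevity.

```python
# Minimal sketch: rank parameter tensors by a first-order sensitivity proxy
# (squared gradient of the log-likelihood, i.e., a diagonal empirical Fisher).
# NOTE: "gpt2" is an illustrative placeholder; the paper studies 1.5B-13B models,
# and its FI metric may be defined differently.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

text = "The quick brown fox jumps over the lazy dog."
batch = tokenizer(text, return_tensors="pt")

# Negative log-likelihood of the sequence under the model.
out = model(**batch, labels=batch["input_ids"])
model.zero_grad()
out.loss.backward()

# Per-parameter first-order sensitivity scores: squared gradients.
scores = {
    name: param.grad.detach() ** 2
    for name, param in model.named_parameters()
    if param.grad is not None
}

# Report the parameter tensors with the largest mean sensitivity,
# i.e., the candidates that contribute most to local instability.
ranked = sorted(scores.items(), key=lambda kv: kv[1].mean().item(), reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: mean sensitivity {score.mean().item():.3e}")
```

In practice, such scores would be accumulated over a representative data batch rather than a single sentence, and the highest-scoring coordinates would be the natural targets for the mitigation step described above.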