This paper investigates the effectiveness of parameter manipulation attacks (e.g., fault injection) on large language models (LLMs) whose efficiency has been improved through low-precision quantization. Specifically, we propose two gradient-based attacks, a bit-wise search algorithm and a word-wise attack, and evaluate them on the Llama-3.2-3B, Phi-4-mini, and Llama-3-8B models under FP16 (baseline), FP8, INT8, and INT4 quantization schemes. The experiments show that the attack success rate varies significantly with the quantization scheme: it is high on the FP16 models but drops sharply on the FP8 and INT8 models. Moreover, an attack that succeeds on an FP16 model retains its high success rate after the model is quantized to FP8 or INT8, whereas the success rate decreases significantly under INT4 quantization. These results suggest that although common quantization techniques such as FP8 increase the difficulty of direct parameter manipulation attacks, vulnerabilities can persist, particularly when quantization is applied after the attack.
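To make the bit-wise search concrete, the following is a minimal sketch of the core ranking step such attacks typically use (the exact procedure in the paper may differ): for a single FP16 weight with a known loss gradient, each of the 16 single-bit flips is scored by the first-order loss estimate Δloss ≈ g · (w_flipped − w), and the most damaging finite flip is selected. The function names and the per-weight framing are illustrative assumptions, not the paper's implementation.

```python
import math
import struct

def fp16_to_bits(w: float) -> int:
    # Bit pattern of w stored as IEEE-754 half precision (16 bits).
    return struct.unpack('<H', struct.pack('<e', w))[0]

def bits_to_fp16(b: int) -> float:
    # Inverse: reinterpret a 16-bit pattern as a half-precision float.
    return struct.unpack('<e', struct.pack('<H', b))[0]

def best_bit_flip(weight: float, grad: float):
    """Score all 16 single-bit flips of an FP16 weight by the first-order
    loss-increase estimate grad * (w_flipped - w), skipping flips that
    produce non-finite values, and return the most damaging one as
    (bit_index, estimated_loss_increase, flipped_value)."""
    base = fp16_to_bits(weight)
    best = None
    for i in range(16):
        flipped = bits_to_fp16(base ^ (1 << i))
        if not math.isfinite(flipped):
            continue  # a flip into inf/NaN is easily detected; skip it
        est = grad * (flipped - weight)
        if best is None or est > best[1]:
            best = (i, est, flipped)
    return best
```

For example, for a weight of 1.0 with gradient +1.0, the most damaging finite flip sets the top mantissa bit (bit 9), changing the weight to 1.5; with gradient −1.0, the sign bit (bit 15) is chosen instead. A full attack would apply this scoring across all weights and rank candidates globally by the estimate.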