This paper emphasizes the importance of controlling generation for the safe and reliable deployment of large language models (LLMs), and situates latent steering, a lightweight technique, as an emerging alternative to prompt engineering and fine-tuning. It points out, however, that the effect of existing latent steering methods is limited, and it introduces a standardized evaluation benchmark over a diverse set of behaviors to measure that effect. Building on this, we propose Instruction Attention Boosting (InstABoost), a latent steering technique that amplifies the effect of an instruction prompt by increasing the model's attention to it during generation. InstABoost combines the strengths of the existing approaches and builds on prior findings that attention manipulation can control in-context rule following in transformer-based models. Experimental results show that InstABoost outperforms both prompting and existing latent steering baselines in steering control.
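The core mechanism can be illustrated with a minimal sketch: multiply the post-softmax attention weights assigned to the instruction tokens by a boost factor and renormalize each row. The function name, the assumption that the instruction occupies the leading token positions, and the default factor are illustrative choices, not details taken from the paper.

```python
import numpy as np

def boost_instruction_attention(attn, instr_len, alpha=5.0):
    """Sketch of instruction attention boosting.

    attn: (num_queries, num_keys) post-softmax attention weights.
    instr_len: number of leading key positions holding the instruction
               (hypothetical layout: instruction tokens come first).
    alpha: boost factor (illustrative value; would be tuned in practice).
    """
    boosted = attn.copy()
    # Upweight attention mass on the instruction tokens...
    boosted[:, :instr_len] *= alpha
    # ...then renormalize so each row is a valid distribution again.
    boosted /= boosted.sum(axis=-1, keepdims=True)
    return boosted

# Toy example: one instruction token followed by two context tokens.
attn = np.array([[0.2, 0.3, 0.5],
                 [0.1, 0.1, 0.8]])
out = boost_instruction_attention(attn, instr_len=1)
```

In a real transformer this rescaling would be applied inside selected attention layers during decoding (e.g. via forward hooks), shifting probability mass toward the instruction at every generation step.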