In this paper, we propose SI-Agent, a novel agent-based framework for automatically generating and iteratively refining the system instructions (SIs) that guide large language models (LLMs). SI-Agent comprises three agents: an instructor agent, an instruction-following agent (the target LLM), and a feedback/reward agent, which together produce and refine human-readable SIs through a feedback-driven iterative loop. The instructor agent revises the SI in response to feedback, using techniques such as LLM-based editing or evolutionary algorithms. Experimental results show that SI-Agent outperforms existing methods in both task performance and interpretability.
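The three-agent loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function and parameter names are hypothetical, and the three agents are toy stand-in callables rather than actual LLMs.

```python
# Hypothetical sketch of the SI-Agent feedback loop; names and toy
# agents are illustrative assumptions, not the paper's actual code.
from typing import Callable


def refine_si(
    initial_si: str,
    instructor: Callable[[str, float], str],  # revises the SI given a score
    target: Callable[[str], str],             # target LLM acting under the SI
    reward: Callable[[str], float],           # feedback agent scoring the output
    n_rounds: int = 3,
) -> str:
    """Iteratively improve a system instruction and return the best one seen."""
    si, best_si, best_score = initial_si, initial_si, float("-inf")
    for _ in range(n_rounds):
        output = target(si)        # target LLM runs with the current SI
        score = reward(output)     # feedback/reward agent evaluates the behavior
        if score > best_score:
            best_si, best_score = si, score
        si = instructor(si, score)  # instructor edits the SI based on feedback
    return best_si


# Toy stand-ins: the reward favors longer outputs up to a cap,
# and the "instructor" simply appends detail to the SI.
best = refine_si(
    "Be concise.",
    instructor=lambda s, r: s + " Cite sources.",
    target=lambda s: s.upper(),
    reward=lambda out: min(len(out), 40.0),
)
```

In a real instantiation, `instructor`, `target`, and `reward` would each wrap an LLM call (or an evolutionary edit operator for the instructor), but the control flow of the feedback-driven loop is the same.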