This paper presents an experimental study that applies automatic prompt optimization, rather than manually written prompts, to knowledge graph (KG) construction with a large language model (LLM). We focus on the fundamental task of extracting triples (subject, relation, object) from text, and compare the performance of three automatic prompt optimization techniques (DSPy, APE, and TextGrad) under various settings (prompting strategy, LLM, schema complexity, input text length and diversity, optimization metric, and dataset) on two datasets, SynthIE and REBEL. The experimental results show that automatic prompt optimization achieves performance comparable to that of human-written prompts, and that its performance gains become more pronounced as schema complexity and text length increase.
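The abstract does not specify how extraction quality is scored; a minimal sketch of a common choice for this task, exact-match micro precision/recall/F1 over (subject, relation, object) triples, is shown below. The function name and the metric itself are illustrative assumptions, not the paper's stated evaluation code.

```python
def triple_f1(predicted, gold):
    """Exact-match micro precision/recall/F1 over (subject, relation, object) triples.

    NOTE: an assumed metric for illustration; the paper's exact scoring
    procedure may differ (e.g., partial matching or entity normalization).
    """
    pred_set, gold_set = set(predicted), set(gold)
    tp = len(pred_set & gold_set)  # triples matched exactly
    precision = tp / len(pred_set) if pred_set else 0.0
    recall = tp / len(gold_set) if gold_set else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1


# Hypothetical example: one of two predicted triples matches the gold set.
pred = [("Marie Curie", "born_in", "Warsaw"), ("Marie Curie", "field", "physics")]
gold = [("Marie Curie", "born_in", "Warsaw"), ("Marie Curie", "award", "Nobel Prize")]
p, r, f = triple_f1(pred, gold)  # → (0.5, 0.5, 0.5)
```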