This paper proposes AutoPDL, an automated prompt optimization technique for improving the performance of large language models (LLMs). AutoPDL frames the selection of prompting patterns (e.g., Zero-Shot, CoT, ReAct, ReWOO) combined with prompt content, such as few-shot demonstrations, as a structured AutoML problem. It uses successive halving to efficiently search this space for high-performing prompt configurations. Building on a library of prompting patterns implemented in the PDL prompt programming language, AutoPDL generates human-readable, editable, and executable PDL programs. Evaluation results on three tasks and seven LLMs (ranging from 3 billion to 70 billion parameters) demonstrate an average accuracy improvement of 9.21±15.46 percentage points (up to 67.5 percentage points). The selected prompting strategies vary across models and tasks.
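
To make the search procedure concrete, the following is a minimal sketch of successive halving over a candidate space of prompting patterns and demonstration counts. The candidate space, the `evaluate_fn` scorer, and the budget schedule shown here are illustrative assumptions, not the actual AutoPDL implementation, which operates over PDL programs.

```python
"""Sketch of successive halving over candidate prompt configurations.
All names below (evaluate_fn, dummy_accuracy, the candidate space) are
illustrative assumptions, not AutoPDL's actual API."""
import itertools
import random


def successive_halving(configs, validation_data, evaluate_fn, initial_budget=16):
    """Evaluate candidates on a growing sample of validation data and
    keep the top-scoring half each round until one configuration remains."""
    budget = initial_budget
    while len(configs) > 1:
        sample = random.sample(validation_data, min(budget, len(validation_data)))
        scored = sorted(configs, key=lambda cfg: evaluate_fn(cfg, sample), reverse=True)
        configs = scored[: max(1, len(scored) // 2)]  # keep the better half
        budget *= 2  # double the per-candidate evaluation budget each round
    return configs[0]


if __name__ == "__main__":
    # Illustrative candidate space: prompting pattern x number of demonstrations.
    patterns = ["zero-shot", "cot", "react", "rewoo"]
    num_demos = [0, 3, 5]
    candidates = [{"pattern": p, "demos": k}
                  for p, k in itertools.product(patterns, num_demos)]
    validation_data = list(range(200))  # stand-in for validation examples

    def dummy_accuracy(cfg, sample):
        # Placeholder for running the candidate prompt program on `sample`
        # and measuring task accuracy.
        return random.random()

    best = successive_halving(candidates, validation_data, dummy_accuracy)
    print("selected configuration:", best)
```

The key property of successive halving is that weak candidates are discarded after only a small number of evaluations, so most of the evaluation budget is spent on the more promising prompt configurations.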