This paper presents ACING, an automated prompt-optimization technique for improving the performance of large language models (LLMs). ACING is a reinforcement learning framework that operates even in black-box settings where the LLM's parameters and gradients are inaccessible: it formulates prompt optimization as a stateless continuous-action problem, enabling exploration of an infinite prompt space. Experiments show that ACING generates prompts that outperform human-written prompts 76% of the time across a variety of tasks (instruction induction, summarization, and chain-of-thought reasoning), with gains of up to 33 points and a median improvement of 10 points over the best automated baseline. Extensive additional experiments confirm the robustness and efficiency of ACING. The source code is available on GitHub.
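To make the stateless continuous-action formulation concrete, the sketch below shows one minimal way such a loop could look: a Gaussian actor samples a continuous action (e.g. a soft-prompt embedding), a black-box score of the resulting prompt serves as the reward, and a scalar baseline plays the critic's role in the stateless setting. The names `instruction_from_action` and `blackbox_reward` are hypothetical placeholders introduced here for illustration (the reward is stubbed so the example runs standalone); this is not ACING's actual architecture, which is described in the paper.

```python
# Minimal sketch: stateless continuous-action optimization with a
# policy-gradient actor and a scalar baseline as the critic.
import numpy as np

rng = np.random.default_rng(0)
dim = 8                 # dimensionality of the continuous action (soft prompt)
mu = np.zeros(dim)      # actor: mean of a Gaussian policy over actions
sigma = 0.5             # fixed exploration noise
baseline = 0.0          # critic: scalar value estimate (stateless, so no state input)
lr_actor, lr_critic = 0.05, 0.1

def instruction_from_action(a):
    """Hypothetical placeholder: map a continuous action to a textual prompt."""
    return f"instruction<{np.round(a, 2)}>"

def blackbox_reward(a):
    """Hypothetical placeholder for a black-box score, e.g. the task accuracy
    of the LLM when prompted with instruction_from_action(a)."""
    target = np.linspace(-1, 1, dim)       # stand-in optimum for the stub
    return -np.sum((a - target) ** 2)

for step in range(200):
    action = mu + sigma * rng.standard_normal(dim)  # sample a candidate prompt embedding
    reward = blackbox_reward(action)                # query the black-box evaluator
    advantage = reward - baseline                   # critic-corrected learning signal
    # Policy-gradient update of the Gaussian mean; no LLM gradients are needed.
    mu += lr_actor * advantage * (action - mu) / sigma**2
    baseline += lr_critic * advantage               # track a running value estimate

print(instruction_from_action(mu))
```

Because the problem is stateless and the action space is continuous, a single baseline value suffices as the critic here; the key point the sketch illustrates is that only reward queries, never model gradients, are required.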