The demand for predictable LLM inference in complex systems has driven the adoption of structured outputs, yet concerns remain that they may underperform unstructured natural-language reasoning. Training on unstructured Chain-of-Thought (CoT) traces has produced powerful new reasoning models, but at the cost of increased compute and reliability concerns. In this paper, we present iSelf-Discover, an instance-level adaptation of the Self-Discover framework, and use it to compare dynamically generated structured JSON reasoning with its unstructured counterpart. Experiments across diverse benchmarks show that unstructured reasoning consistently outperforms structured reasoning. Notably, on the complex MATH benchmark, unstructured plans achieve a relative performance gain of up to 18.90% over structured approaches. The zero-shot unstructured variant of iSelf-Discover even outperforms its five-shot structured counterpart, underscoring that these differences matter even when reasoning plans are dynamically generated before the final answer. We further show that the optimal granularity of plan generation (instance-level vs. task-level) is context-dependent. These findings call for a re-evaluation of our reliance on structured formats for complex problem solving and of how complex systems themselves should be structured.