Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Demystifying Chains, Trees, and Graphs of Thoughts

Created by
  • Haebom

Author

Maciej Besta, Florim Memedi, Zhenyu Zhang, Robert Gerstenberger, Guangyuan Piao, Nils Blach, Piotr Nyczyk, Marcin Copik, Grzegorz Kwaśniewski, Jürgen Müller, Lukas Gianinazzi, Ales Kubicek, Hubert Niewiadomski, Aidan O'Mahony, Onur Mutlu, Torsten Hoefler

Outline

This paper examines how structured prompt engineering can improve the reasoning performance of large language models (LLMs). It analyzes structured prompting designs, including Chain-of-Thought, Tree-of-Thoughts, and Graph-of-Thoughts, and presents a general blueprint for effective and efficient LLM reasoning systems. Through an in-depth analysis of the prompt execution pipeline, the paper clarifies the underlying concepts and establishes the first taxonomy of structure-enhanced LLM reasoning schemes. The structure employed by each scheme is defined as its "reasoning topology," and its representation, algorithms, performance, and cost patterns are analyzed to compare existing prompting approaches. The paper also presents theoretical foundations, the relationship of these schemes to knowledge bases, and open research challenges, aiming to contribute to the future advancement of prompt engineering.
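To make the notion of a "reasoning topology" concrete, here is a minimal, illustrative sketch (not taken from the paper): each reasoning step is modeled as a node, and the prompting scheme determines how nodes connect, from a linear chain, to a branching tree, to a general graph in which branches can be aggregated back together. The `Thought` class and helper functions below are hypothetical names chosen for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Thought:
    """One reasoning step; `parents` are the steps it was derived from."""
    text: str
    parents: list = field(default_factory=list)

def chain(steps):
    """Chain-of-Thought: a linear sequence; each thought has one predecessor."""
    nodes = [Thought(s) for s in steps]
    for prev, cur in zip(nodes, nodes[1:]):
        cur.parents.append(prev)
    return nodes

def tree(root_text, branch_texts):
    """Tree-of-Thoughts: one thought may branch into several alternatives."""
    root = Thought(root_text)
    leaves = [Thought(b, parents=[root]) for b in branch_texts]
    return root, leaves

def graph(root_text, branch_texts, merged_text):
    """Graph-of-Thoughts: branches can be aggregated into a single thought,
    so a node may have multiple parents (impossible in a chain or tree)."""
    root, leaves = tree(root_text, branch_texts)
    merged = Thought(merged_text, parents=leaves)
    return root, leaves, merged
```

Under this view, the topology determines both what the scheme can express (e.g., aggregation requires a graph) and its cost pattern (a tree or graph issues more LLM calls per answer than a chain).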

Takeaways, Limitations

Takeaways:
Presents a general blueprint and taxonomy for structured prompt engineering, suggesting directions for future research.
Compares and analyzes the performance and cost of various structured prompting techniques, providing guidance for selecting an optimal design.
Contributes to the advancement of prompt engineering by providing an in-depth understanding of the LLM reasoning process.
Suggests directions for further improving LLM reasoning ability by considering connections to knowledge bases.
Limitations:
The comprehensiveness and generalizability of the proposed taxonomy require further validation.
Experimental analyses across diverse LLM architectures and datasets may be lacking.
Further research is needed on the practical applicability and scalability of the presented blueprint.
Further research is needed before conclusions can be drawn about the superiority of any particular structured prompting technique.