Daily Arxiv

This page organizes papers on artificial intelligence published around the world.
Summaries are generated with Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; when sharing, simply cite the source.

Type-Compliant Adaptation Cascades: Adapting Programmatic LM Workflows to Data

Created by
  • Haebom

Author

Chu-Cheng Lin, Daiyi Peng, Yifeng Lu, Ming Zhang, Eugene Ie

Outline

Reliably composing large language models (LLMs) into complex, multi-step workflows is a critical challenge. Conventional approaches, such as optimizing individual prompts in a pipeline, struggle to enforce the formal type constraints that structured tasks require. This paper introduces Type-Compliant Adaptation Cascades (TACs), a framework that recasts workflow adaptation as typed probabilistic program learning. TACs treats the entire workflow, consisting of parameter-efficiently adapted LLMs and deterministic logic, as an unnormalized joint distribution, which enables principled gradient-based learning even with latent intermediate structures. The authors further show that the optimization bias vanishes as the model learns type compliance, providing a theoretical basis for an efficient optimization objective. Experimentally, TACs outperforms state-of-the-art prompt-optimization baselines. On structured tasks, accuracy improved from 12.0% to 24.7% on FinQA with Qwen 3 8B, from 57.1% to 75.9% on MGSM-SymPy with Gemma 2 27B, from 1.6% to 27.3% on MGSM with Gemma 7B, and from 36.5% to 62.6% on MuSR. TACs offers a robust, theoretically grounded paradigm for building reliable, task-compliant LLM systems.
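
The core idea can be sketched briefly. Below is a minimal, hypothetical Python illustration (not the authors' code; `lm_logprob`, `type_ok`, and the toy features are stand-ins) of treating a two-step workflow as an unnormalized joint distribution over a latent intermediate z, gating type-violating samples to probability zero with deterministic logic, and taking gradients through a marginal over sampled intermediates.

```python
# Minimal sketch of the TACs idea (hypothetical names, not the authors' API):
# treat a two-step workflow as an unnormalized joint distribution
#   p~(y, z | x) = p_theta(z | x) * 1[type_ok(z)] * p_theta(y | x, z),
# where z is a latent intermediate (e.g., a generated program) and
# type_ok is deterministic logic that gates type violations to zero mass.

import torch

def type_ok(z: str) -> bool:
    # Deterministic type check; a real workflow might ask "does z parse
    # as a SymPy expression?". Toy stand-in: accept digit strings only.
    return z.isdigit()

def lm_logprob(text: str, params: torch.Tensor) -> torch.Tensor:
    # Toy differentiable stand-in for the adapted LM's log-score.
    # In TACs this would be a parameter-efficiently adapted LLM (e.g., LoRA).
    feats = torch.tensor([float(len(text)), float(text.isdigit())])
    return (params * feats).sum()

def joint_logscore(x: str, z: str, y: str, params: torch.Tensor) -> torch.Tensor:
    # log p~(y, z | x): type violations get probability zero (score -inf).
    if not type_ok(z):
        return torch.tensor(float("-inf"))
    return lm_logprob(x + z, params) + lm_logprob(x + z + y, params)

# Gradient-based learning over the latent intermediate: raise the marginal
# unnormalized score of the observed (x, y) pair across sampled z's.
params = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([params], lr=0.1)

x, y = "2+2=", "4"
candidate_zs = ["4", "four", "22"]  # sampled latent intermediates

for _ in range(50):  # illustrative loop, not the paper's full objective
    scores = torch.stack([joint_logscore(x, z, y, params) for z in candidate_zs])
    # log-sum-exp marginalizes over z; -inf (type-violating) terms drop out.
    loss = -torch.logsumexp(scores, dim=0)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because type-violating samples contribute zero probability mass, the marginal objective concentrates on compliant intermediates, loosely mirroring the paper's argument that the optimization bias vanishes as the model learns type compliance.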

Takeaways, Limitations

Takeaways:
Presents a new framework for composing LLMs into complex, multi-step workflows.
Achieves significant performance improvements over existing methods on structured tasks.
Provides a theoretical basis for eliminating optimization bias through type compliance.
Enables gradient-based learning over latent intermediate structures.
Limitations:
Specific limitations are not discussed in the paper.