Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

MagicGUI: A Foundational Mobile GUI Agent with Scalable Data Pipeline and Reinforcement Fine-tuning

Created by
  • Haebom

Author

Liujian Tang, Shaokang Dong, Yijia Huang, Minqi Xiang, Hongtao Ruan, Bin Wang, Shuo Li, Zhiheng Xi, Zhihui Cao, Hailiang Pang, Heng Kong, He Yang, Mingxu Chai, Zhilin Gao, Xingyu Liu, Yingnan Fu, Jiaming Liu, Xuanjing Huang, Yu-Gang Jiang, Tao Gui, Qi Zhang, Kang Wang, Yunke Zhang, Yuran Wang

Outline

MagicGUI is a foundational mobile GUI agent designed to address the key challenges of perception, grounding, and reasoning in real-world mobile GUI environments. It is built on six core components:
(1) a comprehensive, accurate dataset constructed through a scalable GUI data pipeline that aggregates the largest and most diverse GUI-centric multimodal data to date from open-source repositories, automated crawling, and targeted manual annotation;
(2) enhanced perception and grounding capabilities that support fine-grained multimodal alignment for UI element referring, grounding, and screen understanding;
(3) a comprehensive, unified action space that covers both basic UI operations and complex interaction intents, supporting human-agent interaction;
(4) a planning-oriented reasoning mechanism that lets the model decompose complex user instructions into sequential actions via explicit intermediate meta-planning (sketched in the first example below);
(5) an iterative two-stage training procedure combining large-scale continued pretraining on 7.8 million samples with reinforcement fine-tuning that uses a spatially enhanced composite reward and a dual filtering strategy (sketched in the second example below); and
(6) competitive performance on the proprietary Magic-RICH benchmark and on more than a dozen public benchmarks, with strong results across GUI perception and agent tasks and strong generalization and real-world deployability in mobile GUI scenarios, as detailed in Figure 1.
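To make component (4) concrete, here is a minimal Python sketch of the plan-then-ground flow: the agent first emits an explicit meta-plan of sub-goals, then grounds each sub-goal to a concrete UI action. The Action schema, sub-goal strings, and coordinates are invented for illustration and are not the paper's actual action space.

```python
from dataclasses import dataclass

# Hypothetical action record; field names are illustrative, not the paper's schema.
@dataclass
class Action:
    kind: str            # e.g. "tap", "type", "swipe", or "ask_user"
    x: float = 0.0       # normalized screen coordinates for spatial actions
    y: float = 0.0
    text: str = ""       # payload for "type" actions

def plan_then_act(instruction: str) -> list[Action]:
    # Stage 1 (meta-planning): decompose the instruction into sub-goals.
    # A real agent would generate these with the fine-tuned VLM; they are
    # hard-coded here purely to illustrate the plan -> action flow.
    subgoals = ["open settings", "search for Wi-Fi", "toggle Wi-Fi on"]
    # Stage 2 (grounding): map each sub-goal to a concrete UI action.
    toy_grounding = {
        "open settings": Action("tap", 0.50, 0.92),
        "search for Wi-Fi": Action("type", 0.50, 0.08, text="Wi-Fi"),
        "toggle Wi-Fi on": Action("tap", 0.88, 0.21),
    }
    return [toy_grounding[g] for g in subgoals]

for step in plan_then_act("Turn on Wi-Fi"):
    print(step)
```

In the real system both stages would be produced by the model from the screenshot and instruction; the dictionary above merely stands in for the grounding step.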
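Component (5) names a spatially enhanced composite reward and a dual filtering strategy without giving formulas, so the sketch below is an assumption-labeled illustration: a reward mixing format, action-type, and spatial-proximity terms, and a filter that drops reward-saturated rollouts. The Step type, the 0.2/0.4/0.4 weights, and the filter thresholds are all hypothetical.

```python
import math
from dataclasses import dataclass

@dataclass
class Step:
    kind: str            # predicted (or reference) action type
    x: float             # click point in normalized [0, 1] coordinates
    y: float
    reward: float = 0.0

def composite_reward(pred: Step, gold: Step, format_ok: bool = True) -> float:
    # Format term: did the model emit a parseable action at all?
    r_format = 1.0 if format_ok else 0.0
    # Type term: does the predicted action type match the reference?
    r_type = 1.0 if pred.kind == gold.kind else 0.0
    # Spatial term (the "spatially enhanced" part): decays with the
    # distance between the predicted and reference click points.
    dist = math.hypot(pred.x - gold.x, pred.y - gold.y)
    r_spatial = max(0.0, 1.0 - dist) if r_type else 0.0
    # The 0.2/0.4/0.4 weights are placeholders, not the paper's values.
    return 0.2 * r_format + 0.4 * r_type + 0.4 * r_spatial

def dual_filter(rollouts: list[Step], low: float = 0.05, high: float = 0.95) -> list[Step]:
    # One plausible reading of "dual filtering": discard rollouts at both
    # reward extremes, since uniformly right or uniformly wrong groups
    # yield a near-zero advantage signal under group-relative RL objectives.
    return [r for r in rollouts if low < r.reward < high]

gold = Step("tap", 0.80, 0.20)
pred = Step("tap", 0.78, 0.24)
pred.reward = composite_reward(pred, gold)
print(round(pred.reward, 3))   # close to 1.0: right action type, near the target
```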

Takeaways, Limitations

Takeaways:
Large-scale, diverse multimodal GUI data improves mobile GUI agent performance.
Enhanced perception and grounding capabilities enable accurate, efficient UI interaction.
The planning-oriented reasoning mechanism supports complex, multi-step tasks.
Strong generalization and deployability in real-world mobile environments.
Competitive performance across the Magic-RICH benchmark and numerous public benchmarks.
Limitations:
The Magic-RICH benchmark is developed in-house, which may reduce the objectivity of results reported on it.
Dataset bias could degrade generalization to unseen apps or platforms.
Exception handling in real-world environments needs further study.
The agent's complexity may increase computational cost.