Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini and operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

MagicGUI: A Foundational Mobile GUI Agent with Scalable Data Pipeline and Reinforcement Fine-tuning

Created by
  • Haebom

Author

Liujian Tang, Shaokang Dong, Yijia Huang, Minqi Xiang, Hongtao Ruan, Bin Wang, Shuo Li, Zhiheng Xi, Zhihui Cao, Hailiang Pang, Heng Kong, He Yang, Mingxu Chai, Zhilin Gao, Xingyu Liu, Yingnan Fu, Jiaming Liu, Xuanjing Huang, Yu-Gang Jiang, Tao Gui, Qi Zhang, Kang Wang, Yunke Zhang, Yuran Wang

Outline

MagicGUI is a foundational mobile GUI agent designed to address the core challenges of perception, grounding, and reasoning in real-world mobile GUI environments. The framework rests on five key components: (1) a comprehensive, accurate dataset built with a scalable GUI data pipeline, which aggregates the largest and most diverse GUI-centric multimodal data to date from open-source repositories, automated crawling, and targeted manual annotation; (2) enhanced perception and grounding capabilities that enable fine-grained multimodal alignment for UI element referring, grounding, and screen understanding; (3) a comprehensive, unified action space that covers both fundamental UI operations and complex interaction intents, supporting human-agent interaction; (4) a planning-oriented reasoning mechanism that decomposes complex user instructions into sequential actions via explicit intermediate meta-planning; and (5) a two-stage training procedure that combines large-scale continual pre-training on 7.8 million samples with reinforcement fine-tuning using a spatially enhanced compound reward and double-filtering strategies. MagicGUI achieves competitive results on the proprietary Magic-RICH benchmark and on more than a dozen public benchmarks, performing strongly across both GUI perception and agent tasks and demonstrating robust generalization and real-world deployability in mobile GUI scenarios, as detailed in Figure 1.
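To make the "spatially enhanced compound reward" idea concrete, here is a minimal illustrative sketch (not the authors' code; all names and weights are assumptions): the reward combines agreement on the predicted action type with a spatial term that pays off only when a predicted click lands inside the ground-truth element's bounding box.

```python
# Hypothetical sketch of a spatially enhanced compound reward for
# RL fine-tuning of a GUI agent. Action schema, weights, and the
# binary spatial term are illustrative assumptions, not MagicGUI's
# actual implementation.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Action:
    kind: str                               # e.g. "click", "scroll", "type"
    point: Optional[Tuple[float, float]] = None  # (x, y) for pointer actions

def in_bbox(point, bbox):
    """bbox = (x_min, y_min, x_max, y_max) in screen coordinates."""
    x, y = point
    x0, y0, x1, y1 = bbox
    return x0 <= x <= x1 and y0 <= y <= y1

def compound_reward(pred: Action, gold: Action, gold_bbox,
                    w_type: float = 0.5, w_space: float = 0.5) -> float:
    """Weighted sum of action-type agreement and spatial correctness."""
    r_type = 1.0 if pred.kind == gold.kind else 0.0
    r_space = 0.0
    if pred.kind == "click" and pred.point is not None and gold_bbox is not None:
        r_space = 1.0 if in_bbox(pred.point, gold_bbox) else 0.0
    return w_type * r_type + w_space * r_space
```

Under this design, a click with the right action type but outside the target element earns only partial credit, which is the kind of shaping a spatial reward term provides over a purely format-based reward.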

Takeaways, Limitations

Takeaways:
Demonstrates strong mobile GUI agent performance built on a large, diverse GUI dataset.
Effectively uses multimodal information to understand and manipulate UI elements.
Decomposes complex user instructions into plan-driven sequences of actions.
Offers strong generalization and deployability in real-world mobile environments.
Limitations:
Because the Magic-RICH benchmark is proprietary, further work is needed for objective comparative evaluation.
Unexpected errors may occur when the agent is deployed in real environments.
Dataset bias could degrade performance.
Generalization to more complex and diverse GUI interfaces still needs to be evaluated.