Daily Arxiv

This page curates AI-related papers published worldwide.
All content is summarized using Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.

AI Agents for Web Testing: A Case Study in the Wild

Created by
  • Haebom

Author

Naimeng Ye, Xiao Yu, Ruize Xu, Tianyi Peng, Zhou Yu

Outline

This paper presents WebProber, a web testing framework that uses large language models (LLMs) and AI agents to identify usability issues on websites. Unlike existing approaches that focus on code coverage and load testing, WebProber navigates and interacts with websites much as real users do, surfacing bugs and usability issues and generating human-readable reports. In a case study of 120 academic websites, WebProber identified 29 usability issues that existing tools missed. This demonstrates the potential of AI-agent-based testing and suggests directions for next-generation, user-centric testing frameworks.
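The core loop described above (an agent browses pages like a user, asks an LLM to judge each page, and compiles a readable report) can be sketched roughly as follows. This is a minimal illustrative sketch, not the paper's code: the `probe`, `judge`, and `report` names, the toy site, and the rule-based stand-in for the LLM judge are all assumptions for demonstration.

```python
# Hypothetical sketch of an LLM-agent web-testing loop in the spirit of
# WebProber. Names and structure are illustrative, not from the paper.
from dataclasses import dataclass


@dataclass
class Issue:
    page: str
    description: str


def probe(site: dict, start: str, judge, max_steps: int = 20) -> list:
    """Browse like a user: follow links from `start`, ask `judge`
    (a stand-in for an LLM call) whether each page has a usability
    problem, and collect the findings."""
    issues, visited, frontier = [], set(), [start]
    steps = 0
    while frontier and steps < max_steps:
        page = frontier.pop()
        if page in visited:
            continue
        visited.add(page)
        steps += 1
        content, links = site.get(page, ("", []))
        verdict = judge(page, content)  # an LLM call in a real system
        if verdict:
            issues.append(Issue(page, verdict))
        frontier.extend(l for l in links if l not in visited)
    return issues


def report(issues: list) -> str:
    """Render the findings as a human-readable report."""
    lines = [f"- {i.page}: {i.description}" for i in issues]
    return "Usability report\n" + "\n".join(lines)


# Toy site: page -> (content, outgoing links). A broken link ("/missing")
# and an empty page ("/pubs") are the planted usability issues.
site = {
    "/": ("Welcome", ["/pubs", "/contact"]),
    "/pubs": ("", ["/"]),
    "/contact": ("Email us", ["/missing"]),
}


def judge(page, content):
    # Rule-based stand-in for the LLM judge.
    if not content:
        return "page renders with no visible content"
    return None


found = probe(site, "/", judge)
print(report(found))
```

In a real system the toy `site` dict would be replaced by a browser-automation layer and `judge` by a prompted LLM; the control flow, however, follows the same browse-judge-report pattern the paper describes.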

Takeaways, Limitations

Takeaways:
AI agent-based web testing can uncover usability issues more effectively than traditional methods.
WebProber mimics real user behavior, enabling more realistic testing.
Automated testing can reduce development time and costs.
It presents a new direction in the development of user-centric testing frameworks.
Limitations:
WebProber is still a prototype and requires further technical refinement and additional features.
Further validation of generalizability across diverse website environments is needed.
Because of inherent limitations of AI agents, some usability issues may go undetected.
The case study size is limited, and further research on different types of websites is needed.