This paper presents WebProber, a web testing framework that leverages large language models (LLMs) and AI agents to effectively identify usability issues on websites. Unlike existing approaches that focus on code coverage and load testing, WebProber navigates and interacts with websites as real users would, uncovering bugs and usability issues and generating human-readable reports. In a case study of 120 academic websites, WebProber identified 29 usability issues that existing tools missed. These results demonstrate the potential of AI agent-based testing and point toward next-generation, user-centric testing frameworks.