This paper introduces VIDEE, a system that enables even novice data analysts to perform advanced text analytics with intelligent agents. Built around a human-agent collaborative workflow, VIDEE consists of (1) a decomposition phase that uses a Monte Carlo Tree Search algorithm incorporating human feedback, (2) an execution phase that generates an executable text analytics pipeline, and (3) an evaluation phase that combines LLM-based evaluation with visualization to support user validation of the execution results. We evaluate the effectiveness of VIDEE through two quantitative experiments and analyze common agent errors. Furthermore, we demonstrate the system's usability and analyze user behavior patterns through a user study with participants of varying NLP and text analytics experience. The findings distill design takeaways for human-agent collaboration, validate the practicality of VIDEE for non-expert users, and inform future improvements to intelligent text analytics systems.