This paper proposes SEAgent, a framework that enables computer use agents (CUAs) to learn and evolve autonomously in novel software environments without human intervention. Built on large vision-language models (LVLMs), SEAgent masters new software through trial-and-error experiential learning: the agent performs automatically generated tasks that progress from simple to complex, relying on a World State Model for fine-grained, step-wise trajectory assessment and on a Curriculum Generator to produce increasingly diverse and challenging tasks. The agent's policy is updated through adversarial imitation of failed actions and Group Relative Policy Optimization (GRPO) on successful ones. We further develop a strong generalist CUA capable of continual autonomous evolution via a specialist-to-generalist strategy that integrates the experiential insights of individual specialist agents. We validate the effectiveness of SEAgent on five novel software environments within OS-World, where it improves the success rate from 11.3% to 34.5%, a 23.2% absolute gain over UI-TARS, an existing open-source CUA.
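To make the described policy update concrete, the sketch below shows one plausible way to combine a GRPO-style group-relative advantage for judged-successful actions with a suppression term for judged-failed ones, in the spirit of the adversarial imitation signal above. This is not the paper's implementation: the function names (`grpo_advantages`, `policy_update_loss`), the step-level labels produced by the World State Model, and the weighting coefficient `beta` are all illustrative assumptions.

```python
# Hypothetical sketch of SEAgent-style policy updates, assuming a
# step-level judge (the World State Model) labels each action as
# success (1) or failure (0), and G trajectories are sampled per task.
import torch


def grpo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """Group-relative advantages: normalize each trajectory's scalar
    reward against the mean/std of its sampled group (no learned critic,
    as in GRPO)."""
    return (rewards - rewards.mean()) / (rewards.std() + 1e-8)


def policy_update_loss(logprobs: torch.Tensor,
                       step_labels: torch.Tensor,
                       traj_rewards: torch.Tensor,
                       beta: float = 1.0) -> torch.Tensor:
    """Combine the two signals described in the abstract.

    logprobs:     (G, T) log-probability of each taken action
    step_labels:  (G, T) 1 = judged success, 0 = judged failure
    traj_rewards: (G,)   scalar trajectory reward from the judge

    Successful actions are reinforced in proportion to the GRPO group
    advantage; failed actions get their log-probability pushed down,
    an assumed stand-in for the adversarial imitation term.
    """
    adv = grpo_advantages(traj_rewards).unsqueeze(1)              # (G, 1)
    success_term = -(step_labels * adv * logprobs).mean()         # reinforce
    failure_term = beta * ((1.0 - step_labels) * logprobs).mean() # suppress
    return success_term + failure_term


# Toy usage with random stand-ins for model outputs.
G, T = 4, 6                                   # group size, steps per rollout
logprobs = torch.randn(G, T, requires_grad=True)
step_labels = torch.randint(0, 2, (G, T)).float()
traj_rewards = step_labels.mean(dim=1)        # e.g. fraction of good steps
loss = policy_update_loss(logprobs, step_labels, traj_rewards)
loss.backward()
```

Minimizing this loss raises the probability of actions in above-average trajectories and lowers the probability of actions the judge flagged as failures; how SEAgent actually balances the two terms is a design detail not specified in this abstract.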