This paper introduces and evaluates Codellaborator, a design-probe LLM agent that initiates programming assistance based on editor activity and task context during a programming session. We explore the trade-offs of increasingly prominent AI assistance across three interface variations: prompt-only, proactive agent, and proactive agent with presence and interaction context. A study with 18 participants shows that the proactive agent was more efficient than the prompt-only approach but could disrupt the workflow; however, presence indicators and interaction-context support reduced this disruption and improved users’ awareness of the AI process. We highlight Codellaborator’s trade-offs with respect to user control, ownership, and code comprehension, and emphasize the need to adapt proactivity to the programming process. This study contributes a design exploration and evaluation of proactive AI systems, and suggests design implications for AI-integrated programming workflows.