This paper develops and evaluates Codellaborator, a design probe LLM agent, to study how proactive, predictive AI assistance affects efficiency during programming. Codellaborator initiates assistance based on editor activity and task context. We compare the trade-offs of AI assistance across three interface variants: prompt-only, a proactive agent, and a proactive agent with presence and context. In a study with 18 participants, proactive agents improved efficiency over the prompt-only approach but also introduced workflow disruptions; presence indicators and interaction context support mitigated these disruptions and improved users' awareness of the AI's processes. We further highlight trade-offs among user control, ownership, and code comprehension, suggesting that proactive assistance should be tailored to the programming process. Overall, our study contributes a design exploration and evaluation of proactive AI systems and presents design implications for AI-integrated programming workflows.