This paper presents a novel approach to securing Large Language Model (LLM) agents against security vulnerabilities, particularly prompt injection attacks, by treating agent execution traces as structured programs. We propose AgentArmor, a program analysis framework that transforms agent traces into graph-based intermediate representations, such as control-flow graphs (CFGs), data-flow graphs (DFGs), and program dependence graphs (PDGs), and enforces security policies through a type system. AgentArmor consists of three main components: a graph generator, a property registry, and a type system. By representing agent behavior as a structured program, it enables program analysis of sensitive data flows, trust boundaries, and policy violations. Evaluation on the AgentDojo benchmark demonstrates that AgentArmor reduces the attack success rate (ASR) to 3% while limiting utility degradation to 1%.
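To make the core idea concrete, the sketch below illustrates one plausible reading of the approach: an agent trace is lifted into a dependence graph whose nodes carry trust labels, and a simple check rejects flows from untrusted sources into sensitive tool-call sinks. This is a minimal illustration, not the paper's implementation; all names (`Node`, `flows_untrusted`, `check_policy`, the example tools) are hypothetical.

```python
# Minimal sketch of trust-label checking over an agent-trace dependence
# graph. Assumes two labels ("trusted"/"untrusted") and treats taint as
# propagating transitively along data-flow edges.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    label: str                 # "trusted" or "untrusted"
    sensitive_sink: bool = False
    deps: list = field(default_factory=list)  # data-flow predecessors

def flows_untrusted(node, seen=None):
    """True if any transitive data-flow predecessor is untrusted."""
    seen = seen if seen is not None else set()
    for dep in node.deps:
        if id(dep) in seen:
            continue
        seen.add(id(dep))
        if dep.label == "untrusted" or flows_untrusted(dep, seen):
            return True
    return False

def check_policy(nodes):
    """Flag sensitive sinks reachable from untrusted sources."""
    return [n.name for n in nodes if n.sensitive_sink and flows_untrusted(n)]

# Example: a prompt-injected web page influencing a money-transfer tool call.
page = Node("web_page_content", "untrusted")
summary = Node("llm_summary", "untrusted", deps=[page])
transfer = Node("bank_transfer_tool", "trusted", sensitive_sink=True, deps=[summary])
print(check_policy([page, summary, transfer]))  # ['bank_transfer_tool']
```

In this toy setting, the policy check plays the role of the type system: a trace type-checks only if no sensitive sink is reachable from untrusted data, which captures the prompt-injection pattern of attacker-controlled content steering a privileged tool call.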