A1 is an agent-execution-based system that transforms any LLM into an end-to-end exploit generator. A1 provides the agent with six domain-specific tools that enable autonomous vulnerability discovery without handcrafted heuristics. The agent can flexibly use these tools to understand smart contract behavior, generate exploit strategies, test them on blockchain state, and refine its approach based on execution feedback. All outputs are validated through concrete execution, eliminating false positives. In an evaluation on 36 real-world vulnerable contracts from Ethereum and Binance Smart Chain, A1 achieved a success rate of 62.96% (17 out of 27) on the VERITE benchmark. Beyond the VERITE dataset, A1 identified nine additional vulnerable contracts, five of which emerged after the strongest model's training cutoff date. Across the 26 successful cases, A1 extracted up to $8.59 million per case, for a total of $9.33 million. Analyzing iteration-by-iteration performance across 432 experiments spanning six LLMs, we demonstrate diminishing returns, with average marginal profits of +9.7%, +3.7%, +5.1%, and +2.8% at iterations 2 through 5, respectively, at per-experiment costs ranging from $0.01 to $3.59. Monte Carlo analyses of 19 historical attacks show success probabilities of 85.9% to 88.8% when exploits face no detection delay. We further investigate whether deploying A1 as a continuous on-chain scanning system benefits attackers or defenders. OpenAI's o3-pro model remains profitable at a vulnerability encounter rate of 0.100% for scanning delays of up to 30 days, while faster models require an encounter rate of 1.000% or higher to break even. These results reveal a disturbing asymmetry: at a vulnerability incidence of 0.1%, attackers achieve on-chain scanning profitability at an exploit value of $6,000, while defenders require $60,000, raising the fundamental question of whether AI agents will inevitably favor exploitation over defense.
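The tenfold attacker/defender gap follows from a simple expected-value argument. The sketch below is illustrative only: the encounter rate $p = 0.1\%$ and the break-even values ($\$6{,}000$ and $\$60{,}000$) come from the abstract, while the per-contract scanning cost $c$ and the defender reward fraction $\beta$ (e.g., a whitehat bounty) are assumed values chosen to make the arithmetic consistent, not figures stated above.

% Illustrative break-even model (assumptions, not taken from the abstract):
%   p    = vulnerability encounter rate per scanned contract (0.1% = 0.001)
%   V    = value extractable from one exploit
%   c    = cost of scanning one contract (assumed: c = $6)
%   beta = fraction of V a defender captures, e.g. a 10% bounty (assumed)
\begin{align*}
  \text{Attacker: } p\,V - c \ge 0
    &\;\Longrightarrow\; V \ge \frac{c}{p} = \frac{\$6}{0.001} = \$6{,}000,\\
  \text{Defender: } p\,\beta\,V - c \ge 0
    &\;\Longrightarrow\; V \ge \frac{c}{\beta\,p} = \frac{\$6}{0.1 \times 0.001} = \$60{,}000.
\end{align*}

Under this model the asymmetry is structural: because a defender captures only a fraction $\beta$ of each exploit's value, the defender's break-even exploit value is $1/\beta$ times the attacker's, independent of the scanning cost.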