AI-generated text detection is becoming increasingly important for preventing misuse of LLMs in education, business compliance, journalism, and social media. Prior detectors often rely on token-level likelihoods or opaque black-box classifiers, which struggle against high-quality generations and offer little interpretability. In this study, we propose DivEye, a novel detection framework that models how unpredictability varies across a text using surprisal-based features. Motivated by the observation that human-authored text exhibits richer variability in lexical and structural unpredictability than LLM output, DivEye captures this signal through a set of interpretable statistical features. The proposed method outperforms existing zero-shot detectors by up to 33.2% and is competitive with fine-tuned baselines across multiple benchmarks. DivEye is robust to paraphrasing and adversarial attacks, generalizes well across domains and models, and improves the performance of existing detectors by up to 18.7% when used as an auxiliary signal. Beyond detection, DivEye offers interpretable insights into why a text is flagged, pointing to rhythmic unpredictability as a powerful and understudied signal for detecting LLM-generated text.
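To make the core idea concrete, the following is a minimal, hypothetical sketch (not DivEye's actual feature set): compute a surprisal value for each token and then summarize how much that unpredictability fluctuates across the text. Here a smoothed unigram model built from toy corpus counts stands in for a real language model's token likelihoods; the function name and feature choices (variance, coefficient of variation as a "burstiness" proxy) are illustrative assumptions.

```python
import math
from collections import Counter

def surprisal_features(tokens, counts, total):
    """Per-token surprisal under a toy unigram LM, plus summary
    statistics of how unpredictability varies across the text.
    `counts`/`total` stand in for a real LM's probabilities."""
    # Surprisal = -log2 p(token); add-one smoothing handles unseen tokens.
    vocab = len(counts)
    s = [-math.log2((counts.get(t, 0) + 1) / (total + vocab)) for t in tokens]
    mean = sum(s) / len(s)
    var = sum((x - mean) ** 2 for x in s) / len(s)
    # Coefficient of variation: a crude proxy for "rhythmic" unpredictability.
    burstiness = math.sqrt(var) / mean if mean else 0.0
    return {"mean": mean, "variance": var, "burstiness": burstiness}

# Toy corpus statistics standing in for a language model.
corpus = "the cat sat on the mat and the dog ran".split()
counts = Counter(corpus)
total = len(corpus)

sample = "the ornate mat perplexed the dog".split()
print(surprisal_features(sample, counts, total))
```

A real detector in this spirit would replace the unigram model with token log-likelihoods from a causal LM and feed such statistics to a lightweight classifier; the point of the sketch is only that the features describe the *spread* of surprisal, not its absolute level.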