GenAI providers use watermarking to verify that content was generated by their models. Watermarks are hidden signals embedded in the content, and their presence can be detected using a secret watermark key. A key security threat is spoofing attacks, in which an attacker embeds a provider's watermark into content the provider did not generate, damaging the provider's reputation and undermining trust. Existing defenses prevent spoofing by embedding multiple watermarks with different keys into the same content, which can degrade model utility; moreover, spoofing remains a threat if the attacker can collect a sufficient number of watermarked samples. This paper proposes a provably robust defense against spoofing attacks that holds regardless of how many watermarked samples the attacker collects, provided the attacker cannot easily distinguish watermarks embedded with different keys. The proposed approach does not further degrade model utility: for each query, the watermark key is selected at random, and content is considered authentic only if a watermark is detected with exactly one key. While evaluated on the image and text modalities, the proposed defense is modality-agnostic, treating the underlying watermarking method as a black box. The proposed method provably limits the attacker's success rate, reducing it from near-perfect to a mere 2% with negligible computational overhead.
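
As a rough illustration of the randomized-key mechanism described above, the following sketch wraps an arbitrary black-box watermarking method. The names `embed`, `detect`, and the key list are hypothetical stand-ins for the paper's actual interface; this is a minimal sketch under those assumptions, not the authors' implementation.

```python
import secrets
from typing import Callable, Sequence

def generate(
    prompt: str,
    keys: Sequence[bytes],
    embed: Callable[[str, bytes], str],
) -> str:
    """Answer a query, watermarking under one uniformly random key.

    `embed` is a placeholder for any black-box watermarking method
    that embeds a watermark into generated content using a given key.
    """
    key = secrets.choice(keys)  # fresh random key choice per query
    return embed(prompt, key)

def is_authentic(
    content: str,
    keys: Sequence[bytes],
    detect: Callable[[str, bytes], bool],
) -> bool:
    """Deem content authentic only if exactly one key's watermark is detected.

    Genuine content carries a watermark under a single randomly chosen
    key. Spoofed content assembled from samples watermarked under
    several different keys tends to trigger detections with zero or
    multiple keys, and is therefore rejected.
    """
    hits = sum(detect(content, key) for key in keys)
    return hits == 1
```

Because the key is drawn independently per query and detection requires exactly one match, an attacker mixing collected samples cannot reliably reproduce a single-key watermark without being able to tell the keys apart, which is the assumption the defense rests on.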