This paper presents a novel extension of robust control theory that explicitly accounts for uncertainty in the gradient of the value function, a situation that arises naturally in applications such as reinforcement learning, where the value function is approximated. By formulating a zero-sum dynamic game in which an adversary perturbs both the system dynamics and the gradient of the value function, we derive a new, highly nonlinear partial differential equation: the gradient-uncertain Hamilton-Jacobi-Bellman-Isaacs (GU-HJBI) equation. We establish its well-posedness by proving a comparison principle for viscosity solutions under a uniform ellipticity condition. An analysis of the linear-quadratic (LQ) case yields a key insight: the classical quadratic value function ansatz fails for any non-zero gradient uncertainty, fundamentally altering the structure of the problem. Using formal perturbation analysis, we characterize the non-polynomial correction to the value function and the resulting nonlinearity of the optimal control law, and we verify both through numerical studies. Finally, we bridge theory and practice by proposing a Gradient-Uncertainty-Robust Actor-Critic (GURAC) algorithm, accompanied by empirical studies of training stabilization. This work opens a new direction for robust control, with implications for fields in which function approximation is common, including reinforcement learning and computational finance.
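For orientation only, and purely as an illustrative sketch rather than the equation derived in this paper, a stationary GU-HJBI equation of the kind described above might take the following form, where $\ell$ denotes a running cost, $f$ the controlled drift, $\sigma$ the diffusion, $u$ the control, $d$ the adversarial dynamics perturbation, $\xi$ the adversarial gradient perturbation, and $\gamma_d,\gamma_\xi$ penalty weights; all of these symbols are assumptions introduced here for illustration:
\[
0 \;=\; \sup_{u \in U}\; \inf_{d \in D,\; \xi \in \mathbb{R}^n}
\Big\{ \ell(x,u)
\;+\; \big(\nabla V(x) + \xi\big)^{\top} f(x,u,d)
\;+\; \tfrac{1}{2}\,\mathrm{tr}\!\big(\sigma(x)\sigma(x)^{\top}\nabla^{2}V(x)\big)
\;-\; \tfrac{\gamma_d}{2}\|d\|^{2}
\;-\; \tfrac{\gamma_\xi}{2}\|\xi\|^{2} \Big\}.
\]
In this sketch the gradient uncertainty enters through the term $(\nabla V + \xi)^{\top} f$, and uniform ellipticity corresponds to $\sigma\sigma^{\top}$ being uniformly positive definite.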