In this paper, we propose a novel mesh-free policy iteration framework for solving high-dimensional nonconvex Hamilton-Jacobi-Isaacs (HJI) equations that combines classical dynamic programming with physics-informed neural networks (PINNs). The method applies to HJI equations arising in stochastic differential games and robust control, and it alternates between solving second-order linear partial differential equations under fixed feedback policies and updating the controls through pointwise min-max optimization with automatic differentiation. We prove that the value-function iterates converge locally uniformly to the unique viscosity solution of the HJI equation under standard Lipschitz and uniform ellipticity conditions. We also establish equi-Lipschitz regularity of the iterates without requiring convexity of the Hamiltonian, thereby ensuring provable stability and convergence. We demonstrate the accuracy and scalability of the method through numerical experiments on stochastic path-planning games with two-dimensional moving obstacles and on pursuit-evasion differential games with five- and ten-dimensional anisotropic noise. The proposed method outperforms a direct PINN solver, yielding smoother value functions and lower residuals.
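As a rough illustration of the pointwise min-max policy update step described above, the sketch below evaluates an Isaacs min-max over a small control grid using automatic differentiation to obtain the value gradient. All ingredients are assumptions for illustration only: toy scalar dynamics f(x, a, b) = a - b, running cost a²/2 - b²/2, and a quadratic stand-in for the learned value function (in the actual method this would be a trained PINN).

```python
import jax
import jax.numpy as jnp

# Hypothetical quadratic stand-in for the learned value function V;
# in the paper's method this would be a PINN trained per iteration.
def value(x):
    return x ** 2

grad_value = jax.grad(value)  # automatic differentiation gives V_x pointwise

def hamiltonian(x, a, b):
    # Toy Hamiltonian H(x, a, b) = f(x, a, b) * V_x(x) + l(a, b), with
    # assumed dynamics f = a - b and running cost l = a^2/2 - b^2/2.
    p = grad_value(x)
    return (a - b) * p + 0.5 * a ** 2 - 0.5 * b ** 2

def policy_update(x, a_grid, b_grid):
    # Pointwise min-max: evaluate H on a control grid, take the max over
    # the adversary b for each controller action a, then the min over a.
    H = jax.vmap(lambda a: jax.vmap(lambda b: hamiltonian(x, a, b))(b_grid))(a_grid)
    worst_case = H.max(axis=1)        # adversary's best response per a
    i = jnp.argmin(worst_case)        # controller's minimizing choice
    j = jnp.argmax(H[i])              # corresponding adversary action
    return a_grid[i], b_grid[j]

a_grid = jnp.linspace(-2.0, 2.0, 9)
b_grid = jnp.linspace(-2.0, 2.0, 9)
a_star, b_star = policy_update(jnp.array(1.0), a_grid, b_grid)
```

In the full method this update would be applied at every collocation point before re-solving the linear PDE under the frozen policies; here the discrete grid replaces whatever continuous optimizer the controls actually admit.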