This paper presents a novel quadratic binary optimization (QBO) model for training quantized neural networks on quantum computers. The model supports arbitrary activation and loss functions through spline interpolation, and introduces the forward interval propagation (FIP) method, which discretizes the activation function into linear subintervals to handle nonlinearity and the complexity of multilayer composition. This makes it possible to optimize complex nonlinear functions on quantum hardware while preserving the universal approximation property of neural networks, broadening the scope of applications in artificial intelligence. From an optimization perspective, the sample complexity of the resulting empirical risk minimization problem is derived, together with theoretical upper bounds on the approximation error and on the number of Ising spins required. To address the large number of constraints, a major challenge in solving large-scale quadratically constrained binary optimization (QCBO) models, the quantum conditional gradient descent (QCGD) algorithm is used to solve the QCBO problem directly. Convergence of QCGD is proven under randomness in the objective values returned by the quantum oracle with a bounded distribution, and under limited precision of the coefficient matrix, and an upper bound on the time to solution of the QCBO solving process is given. In experiments on a Coherent Ising Machine (CIM), the method achieved 94.95% accuracy on the Fashion MNIST classification task with 1.1-bit precision.
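The core idea behind discretizing an activation function into linear subintervals can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sigmoid target, the interval [-4, 4], and the choice of 8 equally spaced breakpoints are all assumptions made for the example.

```python
import numpy as np

def piecewise_linear_approx(f, lo, hi, n_segments):
    """Approximate f on [lo, hi] by a linear spline with
    n_segments pieces through equally spaced breakpoints."""
    xs = np.linspace(lo, hi, n_segments + 1)  # breakpoints
    ys = f(xs)                                # exact values at breakpoints
    def approx(x):
        # np.interp evaluates the piecewise-linear interpolant
        return np.interp(x, xs, ys)
    return approx

# Example: approximate a sigmoid activation with 8 linear subintervals
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
approx = piecewise_linear_approx(sigmoid, -4.0, 4.0, 8)

# Measure the worst-case approximation error on a dense grid
grid = np.linspace(-4.0, 4.0, 1001)
max_err = np.max(np.abs(sigmoid(grid) - approx(grid)))
print(f"max abs error with 8 segments: {max_err:.4f}")
```

Because each piece is linear, the composition of such pieces across layers stays expressible with quadratic (binary) interactions, which is what makes the QBO encoding tractable; finer subintervals shrink the approximation error at the cost of more Ising spins.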