Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized by Google Gemini, and the page is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Quantum-Classical Hybrid Quantized Neural Network

Created by
  • Haebom

Author

Wenxin Li, Chuan Wang, Hongdong Zhu, Qi Gao, Yin Ma, Hai Wei, Kai Wen

Outline

This paper presents a novel quadratic binary optimization (QBO) model for training quantized neural networks with quantum computing. Spline interpolation allows arbitrary activation and loss functions to be used, and a forward interval propagation (FIP) method is introduced to handle the nonlinearity and multilayer composite structure of neural networks by discretizing activation functions into linear subintervals. This makes it possible to optimize complex nonlinear functions on a quantum computer while preserving the universal approximation property of neural networks, broadening the scope of quantum computing applications in artificial intelligence. From an optimization perspective, the sample complexity of the resulting empirical risk minimization problem is derived, and theoretical upper bounds are given on the approximation error and on the required number of Ising spins.

To address the many constraints that make large-scale quadratic constrained binary optimization (QCBO) models hard to solve, the quantum conditional gradient descent (QCGD) algorithm is used to solve the QCBO problem directly. Convergence of QCGD is proven under randomness and bounded distribution of the objective values returned by the quantum oracle, and under limited precision of the coefficient matrix, and an upper bound on the time to solution of the QCBO solving process is given. In experiments on a Coherent Ising Machine (CIM), the method achieved 94.95% accuracy on the Fashion MNIST classification task with 1.1-bit precision.
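To make the discretization idea above concrete, here is a minimal NumPy sketch of approximating an arbitrary activation function by linear subintervals on a bounded range. The function name, the interval [-4, 4], and the uniform breakpoint placement are illustrative assumptions; this shows the general piecewise-linear (degree-1 spline) idea, not the paper's FIP procedure itself.

```python
import numpy as np

def piecewise_linear_activation(x, act=np.tanh, lo=-4.0, hi=4.0, n_segments=8):
    """Approximate an activation function on [lo, hi] by linear subintervals.

    Breakpoints are placed uniformly; inside each subinterval the activation
    is replaced by the chord between its endpoint values, so the surrogate
    is piecewise linear in the pre-activation value.
    """
    knots = np.linspace(lo, hi, n_segments + 1)
    values = act(knots)
    x = np.clip(x, lo, hi)
    # np.interp performs exactly this piecewise-linear (degree-1 spline) lookup.
    return np.interp(x, knots, values)

# Example: maximum deviation of the surrogate from the clipped true activation.
z = np.linspace(-5, 5, 101)
print(np.max(np.abs(piecewise_linear_activation(z) - np.tanh(np.clip(z, -4.0, 4.0)))))
```

With such a surrogate, each layer's output becomes linear within a chosen subinterval, which is what allows the training problem to be written as a binary optimization model.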
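Similarly, a classical skeleton of conditional gradient descent (Frank-Wolfe) shows where a quantum solver would plug in: the linear-minimization oracle, which QCGD delegates to the Ising machine. The toy problem over the unit hypercube and all names below are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def conditional_gradient(grad_f, lmo, x0, n_iters=100):
    """Generic conditional-gradient (Frank-Wolfe) loop.

    grad_f : gradient of the smooth objective at the current point.
    lmo    : linear minimization oracle; given a gradient, returns a feasible
             point minimizing the linear approximation. In QCGD this step is
             delegated to the quantum (Ising) sampler.
    """
    x = x0.astype(float)
    for t in range(n_iters):
        g = grad_f(x)
        s = lmo(g)                 # direction returned by the (quantum) oracle
        gamma = 2.0 / (t + 2.0)    # standard diminishing step size
        x = (1 - gamma) * x + gamma * s
    return x

# Toy example on the hypercube [0, 1]^3: minimize ||x - b||^2.
b = np.array([0.2, 0.9, 0.4])
grad_f = lambda x: 2.0 * (x - b)
# LMO over the hypercube: pick 0 where the gradient is positive, 1 where negative.
lmo = lambda g: (g < 0).astype(float)
print(conditional_gradient(grad_f, lmo, x0=np.zeros(3)))
```

In QCGD the analogous oracle step returns a binary spin configuration from the quantum sampler, and the convergence analysis summarized above accounts for the randomness and bounded distribution of that oracle's output.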

Takeaways, Limitations

Takeaways:
A novel approach to quantized neural network training using quantum computing.
Arbitrary activation functions and loss functions can be used.
Quantum computer-based optimization of nonlinear functions is possible.
Efficient solution of the QCBO problem using the QCGD algorithm.
Achieving high accuracy on Fashion MNIST.
Limitations:
The convergence proof of the QCGD algorithm is only valid under certain conditions.
Scalability to large-scale problems and performance evaluation on real quantum hardware are still needed.
Tuning the penalty coefficients of the penalty method used for constraint handling remains an issue.
Precision is limited to 1.1 bits.