This page curates AI-related papers published worldwide. All content is summarized using Google Gemini, and the page is operated on a non-profit basis. Copyright for each paper belongs to its authors and their institutions; please credit the source when sharing.
Is Quantum Optimization Ready? An Effort Towards Neural Network Compression using Adiabatic Quantum Computing
Created by
Haebom
Author
Zhehui Wang, Benjamin Chen Ming Choong, Tian Huang, Daniel Gerlinghoff, Rick Siow Mong Goh, Cheng Liu, Tao Luo
Outline
This paper presents a method for efficient compression (fine-tuned pruning-quantization) of deep neural networks (DNNs) using quantum optimization, specifically adiabatic quantum computing (AQC). Optimizing large-scale DNN models is becoming increasingly challenging. The study adapts existing heuristic techniques to reformulate the model compression problem as a quadratic unconstrained binary optimization (QUBO) problem and solves it on a commercial quantum annealing device. Experimental results show that AQC is more time-efficient and better at finding global optima than classical algorithms such as genetic algorithms and reinforcement learning, indicating its potential for effectively compressing real-world DNN models.
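To make the QUBO framing concrete, the sketch below is a minimal toy example, not the authors' actual formulation: it encodes a layer-group keep/prune decision as a QUBO matrix and minimizes it by brute force, which is the role a quantum annealer would play at scale. The importance scores, parameter costs, budget, and penalty weight are all illustrative assumptions.

```python
import itertools
import numpy as np

# Toy QUBO sketch (illustrative, not the paper's formulation):
# x[i] = 1 means "keep weight group i at high precision",
# x[i] = 0 means "prune / quantize aggressively". We trade off importance
# against a parameter budget encoded as a quadratic penalty.

importance = np.array([0.9, 0.4, 0.7, 0.2])   # assumed per-group importance scores
params     = np.array([6.0, 2.0, 4.0, 1.0])   # assumed parameter cost per group
budget     = 8.0                              # assumed total parameter budget
penalty    = 1.0                              # assumed penalty weight

n = len(importance)
Q = np.zeros((n, n))

# Objective: minimize  -sum_i importance[i]*x[i]
#            + penalty * (sum_i params[i]*x[i] - budget)^2
# Expanding the square yields linear terms on the diagonal and pairwise
# quadratic terms off the diagonal (the constant offset is dropped).
for i in range(n):
    Q[i, i] += -importance[i] + penalty * (params[i] ** 2 - 2 * budget * params[i])
    for j in range(i + 1, n):
        Q[i, j] += 2 * penalty * params[i] * params[j]

# Brute-force minimization of x^T Q x over all binary assignments;
# an annealing device would search this energy landscape natively.
best_x, best_e = None, float("inf")
for bits in itertools.product([0, 1], repeat=n):
    x = np.array(bits)
    e = x @ Q @ x
    if e < best_e:
        best_x, best_e = x, e

print("selected groups:", best_x, "energy:", best_e)
```

In the paper's setting, a QUBO of this general shape would be handed to the quantum annealer rather than enumerated, which is where the reported time-efficiency advantage over classical search comes in.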
Takeaways, Limitations
•
Takeaways:
◦
We propose that quantum annealing is a promising method for efficient compression of large-scale DNN models.
◦
We experimentally demonstrate that AQC is more time-efficient and more effective at finding global optima than classical algorithms.
◦
We present an effective way to reformulate the DNN compression problem as a QUBO problem.
•
Limitations:
◦
The approach currently depends heavily on commercial quantum annealing hardware, and its performance may vary with the maturity of quantum computing technology.
◦
The research is limited to specific types of deep neural networks and a single compression technique (fine-tuned pruning-quantization).
◦
Further research is needed to assess applicability and generalization to more diverse and complex DNN models.