Daily Arxiv

This is a page that curates AI-related papers published worldwide.
All content here is summarized using Google Gemini, and the site is operated on a non-profit basis.
Copyright for each paper belongs to the authors and their institutions; please make sure to credit the source when sharing.

Vectorized Attention with Learnable Encoding for Quantum Transformer

Created by
  • Haebom

Authors

Ziqing Guo, Ziwen Pan, Alex Khan, Jan Balewski

Outline

This paper presents a method for embedding classical data in Hilbert space using vectorized quantum block encoding, improving the efficiency of quantum models such as the Quantum Transformer (QT), which replaces the classical self-attention mechanism with quantum circuit simulation. Conventional QTs rely on deep parameterized quantum circuits (PQCs), making them susceptible to QPU noise and prone to performance degradation. The authors propose the Vectorized Quantum Transformer (VQT), which enables efficient training through a vectorized nonlinear quantum encoder and supports computation of an ideal masked attention matrix via quantum approximate simulation. The result is shot-efficient, gradient-free quantum circuit simulation (QCS) with reduced classical sampling overhead. They compare the accuracy of quantum circuit simulations on IBM and IonQ hardware and benchmark natural language processing tasks on IBM's state-of-the-art high-fidelity Kingston QPU, demonstrating competitive results. This noise-robust, medium-scale, quantum-friendly VQT approach offers a novel architecture for end-to-end machine learning in quantum computing.
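To make the encoding-plus-attention idea more concrete, the following is a minimal, purely illustrative NumPy sketch, not the paper's actual VQT algorithm: classical token vectors are mapped to unit-norm, amplitude-style states, pairwise state overlaps serve as attention scores, and a causal mask plus softmax yields an attention matrix. The function names (amplitude_encode, masked_overlap_attention) are hypothetical, and in the real VQT the overlaps would presumably be estimated from quantum circuit simulation or QPU measurements rather than computed classically.

```python
# Hypothetical illustration only: a classical NumPy sketch of encoding vectors as
# unit-norm (amplitude-style) states and deriving a causally masked attention
# matrix from pairwise state overlaps. This is NOT the paper's VQT algorithm.
import numpy as np

def amplitude_encode(x: np.ndarray) -> np.ndarray:
    """Map a classical vector to a unit-norm state vector (amplitude-style encoding)."""
    norm = np.linalg.norm(x)
    return x / norm if norm > 0 else x

def masked_overlap_attention(tokens: np.ndarray) -> np.ndarray:
    """Build a causally masked attention matrix from squared overlaps |<psi_i|psi_j>|^2."""
    states = np.stack([amplitude_encode(t) for t in tokens])   # (n, d) unit vectors
    scores = np.abs(states @ states.T) ** 2                    # overlap-based similarity
    mask = np.tril(np.ones_like(scores))                       # causal (lower-triangular) mask
    scores = np.where(mask > 0, scores, -np.inf)               # block attention to future tokens
    scores = scores - scores.max(axis=1, keepdims=True)        # row-wise softmax for stability
    weights = np.exp(scores)
    return weights / weights.sum(axis=1, keepdims=True)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    toy_tokens = rng.normal(size=(4, 8))         # 4 toy token embeddings of dimension 8
    print(masked_overlap_attention(toy_tokens))  # (4, 4) row-stochastic, causally masked
```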

Takeaways, Limitations

Takeaways:
Implementation of efficient training and shot-efficient quantum circuit simulation using vectorized quantum encoders.
Support for computing ideal masked attention matrices through quantum approximate simulations.
Reduced classical sampling overhead.
A noise-robust, medium-scale, quantum-friendly architecture for end-to-end machine learning in quantum computing.
Competitive natural language processing performance, with circuit accuracy compared on IBM and IonQ hardware and NLP benchmarks run on IBM's Kingston QPU.
Limitations:
The performance of the proposed VQT may depend on the fidelity and noise level of the QPU used.
Further evaluation on larger, real-world natural language processing tasks is needed.
Further research is needed on the generalizability of vectorized quantum encoders and their extensibility to other quantum models.