GQHAN: A Grover-inspired Quantum Hard Attention Network
CoRR (2024)
Abstract
Numerous current Quantum Machine Learning (QML) models exhibit an inadequacy
in discerning the significance of quantum data, resulting in diminished
efficacy when handling extensive quantum datasets. Hard Attention Mechanism
(HAM), anticipated to efficiently tackle the above QML bottlenecks, encounters
the substantial challenge of non-differentiability, consequently constraining
its extensive applicability. In response to the dilemma of HAM and QML, a
Grover-inspired Quantum Hard Attention Mechanism (GQHAM) consisting of a
Flexible Oracle (FO) and an Adaptive Diffusion Operator (ADO) is proposed.
Notably, the FO is designed to surmount the non-differentiable issue by
executing the activation or masking of Discrete Primitives (DPs) with Flexible
Control (FC) to weave various discrete destinies. Based on this, such discrete
choice can be visualized with a specially defined Quantum Hard Attention Score
(QHAS). Furthermore, a trainable ADO is devised to boost the generality and
flexibility of GQHAM. Finally, a Grover-inspired Quantum Hard Attention Network
(GQHAN) based on GQHAM is constructed on the PennyLane platform for Fashion MNIST
binary classification. Experimental findings demonstrate that GQHAN adeptly
surmounts the non-differentiability hurdle, surpassing the efficacy of extant
quantum soft self-attention mechanisms in accuracy and learning ability. In
noise experiments, GQHAN is more robust to bit-flip noise in terms of accuracy
and to amplitude damping noise in terms of learning performance. Predictably, the proposal of
GQHAN enriches the Quantum Attention Mechanism (QAM), lays the foundation for
future quantum computers to process large-scale data, and promotes the
development of quantum computer vision.
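To make the Grover-inspired structure concrete: standard Grover search alternates two reflections, an oracle that flips the sign of the marked item's amplitude and a diffusion operator that reflects all amplitudes about their mean. GQHAM's Flexible Oracle and Adaptive Diffusion Operator are trainable generalizations of exactly these two components. The sketch below is not the paper's implementation; it is a minimal classical simulation of one vanilla Grover iteration on a 4-item search space, included only to illustrate the oracle/diffusion step the mechanism builds on.

```python
# Classical state-vector simulation of one Grover iteration on N = 4 items
# (2 qubits). One iteration suffices for N = 4: the marked item's
# measurement probability reaches 1. GQHAM replaces the fixed sign-flip
# oracle with a Flexible Oracle over Discrete Primitives and the fixed
# inversion-about-the-mean with a trainable Adaptive Diffusion Operator.

def grover_iteration(amplitudes, marked):
    # Oracle: flip the sign of the marked item's amplitude.
    amps = [-a if i == marked else a for i, a in enumerate(amplitudes)]
    # Diffusion (2|s><s| - I): reflect every amplitude about the mean.
    mean = sum(amps) / len(amps)
    return [2 * mean - a for a in amps]

N = 4
state = [1 / N ** 0.5] * N            # uniform superposition over 4 items
state = grover_iteration(state, marked=2)
probs = [a * a for a in state]        # Born-rule measurement probabilities
print(probs)                          # marked index 2 found with probability 1
```

After the oracle the amplitudes are (0.5, 0.5, -0.5, 0.5) with mean 0.25, so reflection about the mean sends the unmarked amplitudes to 0 and the marked one to 1, which is the amplitude-amplification effect the hard attention mechanism exploits to "activate" important primitives and mask the rest.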