Computing Krylov iterates in the time of matrix multiplication

CoRR (2024)

Abstract
Krylov methods rely on iterated matrix-vector products A^k u_j for an n × n matrix A and vectors u_1, …, u_m. The space spanned by all iterates A^k u_j admits a particular basis – the maximal Krylov basis – which consists of iterates of the first vector, u_1, Au_1, A^2 u_1, …, until reaching linear dependency, then of iterates of the subsequent vectors, built similarly until a basis is obtained. Finding minimal polynomials and Frobenius normal forms is closely related to computing maximal Krylov bases. The fastest way to produce these bases was, until this paper, Keller-Gehrig's 1985 algorithm, whose complexity bound O(n^ω log(n)) comes from repeated squarings of A and logarithmically many Gaussian eliminations. Here ω > 2 is a feasible exponent for matrix multiplication over the base field. We present an algorithm computing the maximal Krylov basis in O(n^ω log log(n)) field operations when m ∈ O(n), and even O(n^ω) as soon as m ∈ O(n / log(n)^c) for some fixed real c > 0. As a consequence, we show that the Frobenius normal form together with a transformation matrix can be computed deterministically in O(n^ω log log(n)^2), and therefore matrix exponentiation A^k can be performed within the latter complexity bound if log(k) ∈ O(n^{ω-1-ε}), for ε > 0. A key idea for these improvements is to rely on fast algorithms for m × m polynomial matrices of average degree n/m, involving high-order lifting and minimal kernel bases.
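To make the definition concrete, here is a minimal NumPy sketch of the naive construction of the maximal Krylov basis described above: iterate A on u_1 until linear dependency, then continue with the subsequent vectors. This is only an illustrative quartic-cost procedure over floating-point numbers (with rank tested numerically via a tolerance), not the paper's fast algorithm; the function name and tolerance are choices made for this example.

```python
import numpy as np

def maximal_krylov_basis(A, U, tol=1e-9):
    """Naive maximal Krylov basis: for each column u_j of U (in order),
    append the iterates u_j, A u_j, A^2 u_j, ... until the new iterate
    is linearly dependent on the vectors collected so far.
    Illustrative sketch only, not the paper's O(n^ω log log n) method."""
    n = A.shape[0]
    basis = []  # accepted basis vectors, in construction order
    for u in U.T:
        v = u.astype(float)
        while len(basis) < n:
            candidate = np.column_stack(basis + [v])
            # dependency test via numerical rank (exact arithmetic in theory)
            if np.linalg.matrix_rank(candidate, tol=tol) < candidate.shape[1]:
                break  # v depends on the current basis: move to next u_j
            basis.append(v)
            v = A @ v
    return np.column_stack(basis) if basis else np.zeros((n, 0))

# Example: cyclic-shift matrix, single starting vector u_1 = e_1.
A = np.array([[0., 0., 1.],
              [1., 0., 0.],
              [0., 1., 0.]])
U = np.eye(3)[:, :1]
B = maximal_krylov_basis(A, U)
print(B.shape)  # (3, 3): e_1, A e_1, A^2 e_1 already span R^3
```

Here a single starting vector suffices because e_1 is cyclic for A; with more columns in U, the loop would go on to iterate u_2, u_3, … only as long as the basis is incomplete.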