Solving “large” dense matrix problems on multi-core processors

Rome (2009)

Cited by 6
Abstract
Few realize that, for large matrices, dense matrix computations achieve nearly the same performance when the matrices are stored on disk as when they are stored in a very large main memory. Similarly, few realize that, given the right programming abstractions, coding Out-of-Core (OOC) implementations of dense linear algebra operations (where data resides on disk and must be explicitly moved in and out of main memory) is no more difficult than programming high-performance implementations for the case where the matrix is in memory. Finally, few realize that on a contemporary eight-core architecture one can solve a 100,000 × 100,000 dense symmetric positive definite linear system in about an hour. Thus, for problems that used to be considered large, it is not necessary to utilize distributed-memory architectures with massive memories if one is willing to wait longer for the solution to be computed on a fast multithreaded architecture such as an SMP or multi-core computer. This paper provides evidence in support of these claims.
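The abstract's central point is that an OOC dense factorization can be written almost exactly like its in-memory counterpart: the algorithm touches the matrix one tile at a time, so whether a tile lives in RAM or is paged in from disk is largely transparent to the code. The sketch below illustrates this with a blocked, right-looking Cholesky factorization in Python/NumPy; it is not the authors' implementation, and the function name, block size, and use of `numpy.memmap` as the on-disk abstraction are illustrative assumptions. Passing a memory-mapped array makes the same code an out-of-core solver, since only the tiles referenced at each step are faulted into memory.

```python
import numpy as np

def blocked_cholesky(A, nb):
    """Blocked, right-looking Cholesky factorization (illustrative sketch).

    Overwrites the lower triangle of the symmetric positive definite
    matrix A with its Cholesky factor L, working on nb-by-nb tiles.
    A may be an ordinary ndarray or a numpy.memmap backed by disk,
    in which case only the tiles touched at each step are read in.
    """
    n = A.shape[0]
    for k in range(0, n, nb):
        kb = min(nb, n - k)
        # Factor the diagonal tile in memory: A11 = L11 * L11^T.
        L11 = np.linalg.cholesky(np.array(A[k:k + kb, k:k + kb]))
        A[k:k + kb, k:k + kb] = L11
        # Update the panel below the diagonal: A21 := A21 * L11^{-T},
        # computed as (L11^{-1} * A21^T)^T one tile at a time.
        for i in range(k + kb, n, nb):
            ib = min(nb, n - i)
            tile = np.array(A[i:i + ib, k:k + kb])
            A[i:i + ib, k:k + kb] = np.linalg.solve(L11, tile.T).T
        # Trailing symmetric update of the lower triangle:
        # A22 := A22 - L21 * L21^T, tile by tile.
        for j in range(k + kb, n, nb):
            jb = min(nb, n - j)
            Ljk = np.array(A[j:j + jb, k:k + kb])
            for i in range(j, n, nb):
                ib = min(nb, n - i)
                Lik = np.array(A[i:i + ib, k:k + kb])
                A[i:i + ib, j:j + jb] -= Lik @ Ljk.T
    return A
```

Note that the loop structure is identical to a standard in-memory blocked Cholesky; an OOC version differs only in where the tiles come from, which is the abstraction the paper argues makes OOC programming no harder than in-memory programming.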
Keywords
symmetric positive definite linear systems, dense linear algebra operations, dense matrix computations, out-of-core implementations, main memory, disk storage, storage management, matrix algebra, linear systems, multi-core processors, multithreading, distributed-memory architectures, distributed computing, computer architecture, microprocessor chips, linear programming