Memory-efficient parallel computation of tensor and matrix products for big tensor decomposition

Pacific Grove, CA (2014)

Abstract
Low-rank tensor decomposition has many applications in signal processing and machine learning, and is becoming increasingly important for analyzing big data. A significant challenge is the computation of intermediate products, which can be much larger than the final result of the computation, or even the original tensor. We propose a scheme that allows memory-efficient in-place updates of intermediate matrices. Motivated by recent advances in big tensor decomposition from multiple compressed replicas, we also consider the related problem of memory-efficient tensor compression. The resulting algorithms can be parallelized, and can exploit, but do not require, sparsity.
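A canonical instance of the intermediate-product blowup the abstract describes is the matricized-tensor-times-Khatri-Rao-product (MTTKRP) step of CP decomposition: the explicit Khatri-Rao product is a JK × R matrix that can dwarf both the I × R result and, for sparse data, the tensor itself. The sketch below is only an illustration of that general idea, not the paper's specific scheme; the function names are hypothetical. It contrasts a naive MTTKRP that materializes the Khatri-Rao product with a slice-wise variant that updates the result in place.

```python
import numpy as np

def mttkrp_naive(X, B, C):
    """Mode-1 MTTKRP via an explicit Khatri-Rao product.
    The (J*K x R) intermediate can be much larger than the
    (I x R) result or, for sparse X, the tensor itself."""
    I, J, K = X.shape
    R = B.shape[1]
    # Khatri-Rao product, ordered to match X.reshape(I, J*K):
    # KR[j*K + k, r] = B[j, r] * C[k, r]
    KR = (B[:, None, :] * C[None, :, :]).reshape(J * K, R)
    return X.reshape(I, J * K) @ KR

def mttkrp_slicewise(X, B, C):
    """Mode-1 MTTKRP computed slice by slice, accumulating
    into the I x R result in place. Peak extra memory is one
    I x R buffer instead of a J*K x R Khatri-Rao product."""
    I, J, K = X.shape
    R = B.shape[1]
    M = np.zeros((I, R))
    for k in range(K):
        # (I x J) @ (J x R) -> I x R, scaled row-wise by C[k, :]
        M += (X[:, :, k] @ B) * C[k, :]
    return M

# Quick check that the two variants agree
rng = np.random.default_rng(0)
I, J, K, R = 6, 5, 4, 3
X = rng.standard_normal((I, J, K))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))
assert np.allclose(mttkrp_naive(X, B, C), mttkrp_slicewise(X, B, C))
```

The slice-wise loop trades one large matrix product for K small ones, and each slice's contribution is independent up to the final accumulation, which is what makes such update schemes amenable to parallelization.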
Keywords
Big Data,data analysis,mathematics computing,matrix decomposition,parallel algorithms,tensors,big data analysis,big tensor decomposition,compressed replicas,low-rank tensor decomposition,machine learning,matrix products,memory-efficient parallel computation,signal processing,tensor products