[Extended Abstract] Benchmarking AI-methods on Heterogeneous Hardware Resources

semanticscholar (2020)

Abstract
Artificial intelligence (AI) is considered a key enabling technology for various challenging problems, such as steering self-driving cars or acting as an intelligent opponent in complex computer games. The increasing capabilities of AI also demand more powerful compute resources; for training neural networks, for example, graphics processing units (GPU) are used because they outperform traditional processor architectures (CPU). More recently, further hardware architectures such as tensor processing units (TPU) have been applied, in particular to neural networks. Moreover, reconfigurable processors such as field-programmable gate arrays (FPGA) appear to offer a good trade-off between performance and energy use. While GPUs and TPUs can be programmed through dedicated software libraries, FPGA programming is more complex, as it requires rather deep hardware knowledge and a different development approach. For AI-enabled, hardware-accelerated applications, which in the extreme case rely on several hardware architectures at once, the question arises which of these architectures should be used for which AI method to achieve the best performance. Besides performance measures such as throughput, intensity, or latency, energy consumption is also relevant, in particular to weigh the benefits of AI and its increasing usage against their environmental impact. Ultimately, development and integration effort must be considered as well.
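The performance measures named above (throughput and latency) can be illustrated with a minimal timing sketch. This is an assumption-laden illustration, not the abstract's actual benchmark suite: the function name, problem size, and the matrix-multiply workload are all chosen here for demonstration.

```python
import time
import numpy as np


def benchmark_matmul(n=256, repeats=10):
    """Time a dense n x n matrix multiply; return (latency, throughput).

    Illustrative sketch only: workload and metric definitions are
    assumptions, not the method described in the extended abstract.
    """
    a = np.random.rand(n, n)
    b = np.random.rand(n, n)
    a @ b  # warm-up run, excludes one-time setup costs from the timing
    start = time.perf_counter()
    for _ in range(repeats):
        a @ b
    elapsed = time.perf_counter() - start
    latency = elapsed / repeats       # seconds per multiply
    flops = 2 * n ** 3                # multiply-adds in an n x n matmul
    throughput = flops / latency      # floating-point operations per second
    return latency, throughput


latency, throughput = benchmark_matmul()
print(f"latency: {latency:.6f} s, throughput: {throughput / 1e9:.2f} GFLOP/s")
```

Comparing such numbers across CPU, GPU, TPU, and FPGA back-ends, together with energy readings, is the kind of trade-off the abstract raises; energy measurement requires platform-specific counters and is not sketched here.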