Deployment and Application of Deep Learning Models under Computational Constraints.

Jinwen Li, Wendong Wang

2023 IEEE International Conference on Big Data (BigData), 2023

Abstract
Deep learning is computationally demanding, so the GPU used has a significant impact on its effectiveness. In addition, deep learning requires large amounts of memory, and higher throughput means better performance, faster iterations, and more efficient completion of experiments. The choice of GPU therefore largely determines the success of, and the experience of working with, deep learning. This paper analyzes four commonly used deep learning model architectures, VGG, ResNet50, MobileNet, and InceptionV3, and their memory requirements. It also analyzes the relationship between computing power and memory for several mainstream Nvidia graphics cards commonly used for deep learning, including the GeForce and Tesla series. As a result, under limited medical conditions, such as being unable to purchase expensive graphics cards or having cards with insufficient memory, a suitable existing model can be identified to help medical personnel perform gland image segmentation, achieving a more efficient, fast, and economical scheme.
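As a rough illustration of the kind of memory analysis the abstract describes, the sketch below estimates the parameter memory footprint of the four architectures. It is not taken from the paper: it assumes PyTorch with torchvision 0.13 or later, and it picks VGG16 and MobileNetV2 as concrete variants because the abstract does not specify version numbers.

import torchvision.models as models

# Hypothetical helper: estimate the memory taken by a model's weights in MB,
# assuming 32-bit (4-byte) floating-point parameters.
def param_memory_mb(model, bytes_per_param=4):
    n_params = sum(p.numel() for p in model.parameters())
    return n_params * bytes_per_param / (1024 ** 2)

# The four architectures discussed in the paper, instantiated without pretrained weights.
architectures = {
    "VGG16": models.vgg16(weights=None),
    "ResNet50": models.resnet50(weights=None),
    "MobileNetV2": models.mobilenet_v2(weights=None),
    "InceptionV3": models.inception_v3(weights=None, init_weights=True),
}

for name, model in architectures.items():
    print(f"{name}: ~{param_memory_mb(model):.1f} MB of weights (FP32)")

Note that parameter memory is only part of the picture: during training, activations, gradients, and optimizer state typically dominate GPU memory usage and scale with batch size, so the practical memory requirement is considerably larger than the weight footprint alone.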
Keywords
Deep Learning, GPU, Graphics Cards, Video Memory, Computational Constraint