Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology; OmniML
His research focuses on efficient deep learning computing. He proposed the “deep compression” technique, which can reduce neural network size by an order of magnitude without losing accuracy, and the “efficient inference engine,” a hardware implementation that was the first to exploit pruning and weight sparsity in deep learning accelerators. His team’s work on hardware-aware neural architecture search (ProxylessNAS, Once-for-All Network (OFA), MCUNet) has been integrated by Facebook, Amazon, Microsoft, Intel, and SONY, and won first place in six low-power computer vision contests at flagship AI conferences. Song received Best Paper awards at ICLR and FPGA, as well as multiple faculty awards from Amazon, SONY, Facebook, NVIDIA, and Samsung. He was named to MIT Technology Review’s “35 Innovators Under 35” for his contribution to the “deep compression” technique, which “lets powerful artificial intelligence (AI) programs run more efficiently on low-power mobile devices.” Song received the NSF CAREER Award for “efficient algorithms and hardware for accelerated machine learning” and the IEEE “AI’s 10 to Watch: The Future of AI” award.
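For illustration, the pruning stage at the core of deep compression can be sketched as magnitude-based weight thresholding. The Python snippet below is a minimal, hypothetical sketch using NumPy; the helper name prune_by_magnitude and its sparsity parameter are illustrative assumptions, not the published implementation.

import numpy as np

def prune_by_magnitude(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude weights (illustrative sketch).

    Weights whose absolute value falls below the sparsity-th quantile
    are set to zero, leaving a sparse tensor of the kind a
    sparsity-aware accelerator such as EIE can exploit.
    """
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    return weights * mask

# Usage: prune 90% of a random weight matrix and report the sparsity.
w = np.random.randn(256, 256).astype(np.float32)
w_pruned = prune_by_magnitude(w, sparsity=0.9)
print(f"sparsity: {np.mean(w_pruned == 0):.2%}")

In the full deep compression pipeline, pruning of this kind is followed by retraining, weight quantization, and entropy coding; the sketch covers only the first step.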
Total publications: 188.