Basic Information
Career Trajectory
Biography
My focus is on deep learning for human language, especially automatic speech recognition (ASR) and natural language processing (NLP): making these faster, cheaper, and more accessible for the long tail of low-resource languages and domain-specific uses. My contributions to AWS AI’s cloud services include production-scale language modeling, speech-to-text adaptation, and fast acoustic architectures. My recent publications center on:
Large pretrained language models: Masked LM Scoring · Unsupervised Bitext + NMT
Non-autoregressive end-to-end ASR: Self-Attention + CTC · Align-Refine
Low- and zero-resource crosslinguality: Transformers without Tears · Don’t Use English Dev
Representation learning for speech audio: BERTphone · DeCoAR
Research Interests
Publications: 22 in total
Eliya Nachmani, Alon Levkovitch, Julián Salazar, Chulayutsh Asawaroengchai, Soroosh Mariooryad, RJ Skerry-Ryan, Michelle Tadmor Ramanovich
arXiv (Cornell University) (2023)
Conference of the European Chapter of the Association for Computational Linguistics (2023): 2239-2256
58TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2020) (2020): 2699-2712