Basic Information
Career Trajectory
Biography
Crafting next-generation foundation models that are not only powerful and trustworthy but also customizable and efficient, serving a diverse range of existing and emerging applications. Current research focuses on enhancing training signals and related training techniques to enable smaller LLMs to develop the advanced reasoning capabilities usually seen in large foundation models.
Research Interests
Papers (10)
CoRR (2024)
Citations: 0
arXiv (2024)
Citations: 0
Arindam Mitra, Luciano Del Corro, Shweti Mahajan, Andrés Codas, Clarisse Simões, Sahaj Agrawal, Xuxi Chen, Anastasia Razdaibiedina, Erik Jones, Kriti Aggarwal, Hamid Palangi, Guoqing Zheng, …
CoRR (2023)
Citations: 0
CoRR (2023)
Citations: 0
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) (2022)
Citations: 0
Thirty-Third AAAI Conference on Artificial Intelligence / Thirty-First Innovative Applications of Artificial Intelligence Conference / Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, no. 01 (2019): 3003-3010
arXiv: Computation and Language (2019)
Citations: 4
Semantic Scholar (2019)
Citations: 0
EasyChair Preprint, no. 3-4 (2018): 623-637
AAAI'16: Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, pp. 2779-2785 (2016)
Citations: 104
Author Statistics
Co-authors
Collaborating Institutions
D-Core
- Collaborators
- Students
- Advisors
Data Disclaimer
The data on this page comes from publicly available internet sources, partner publishers, and automated analysis by AI technology. We make no commitment or guarantee as to the validity, accuracy, correctness, reliability, completeness, or timeliness of the data on this page. If you have questions, you can contact us by email: report@aminer.cn