Basic Information
Career Trajectory
Biography
The natural language processing field has seen great advances through the introduction of pre-trained language models such as BERT. At DSV, we have successfully applied these language models to medical applications by training on large amounts of electronic health record data.
One of the main reasons for the success of these language models is that they are very large and trained on enormous corpora. This success, however, comes with an important drawback: the models have a tendency to leak information about their training data. My research tackles this issue, and my goal is to find ways of creating models that preserve the privacy of the people represented in the training data.
I am currently investigating to what extent masked language models (such as BERT) leak sensitive information about their training data. Since BERT-style models are very common, especially for lesser-resourced languages, this could have significant privacy implications.
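The kind of leakage described above can be illustrated with a minimal sketch: if a masked language model assigns a much higher probability to a specific person's name in a templated sentence than that name's base rate would justify, the model may have memorized a training record. Everything here is a hypothetical illustration, not the author's method: `mask_fill_probs` is a stub standing in for a real fill-mask model (e.g. a BERT pipeline), and all names and numbers are invented.

```python
# Illustrative sketch of probing a masked LM for memorized training data.
# NOTE: `mask_fill_probs` is a hypothetical stand-in for a real fill-mask
# model; the candidates and probabilities below are invented for illustration.

def mask_fill_probs(masked_sentence):
    """Hypothetical model output: P(candidate | context) for the [MASK] slot."""
    return {"John Smith": 0.62, "the patient": 0.21, "Jane Doe": 0.04}

def memorization_signal(masked_sentence, candidate, population_rate=0.01):
    """Lift of a candidate's fill-in probability over its population base rate.

    A large lift suggests the model learned this specific association from
    its training data rather than from general language statistics.
    """
    probs = mask_fill_probs(masked_sentence)
    return probs.get(candidate, 0.0) / population_rate

signal = memorization_signal("Patient [MASK] was diagnosed with diabetes.",
                             "John Smith")
print(signal > 10)  # prints True: a large lift hints at memorization
```

Real audits of this kind would compare such scores against a model trained without the target record; the toy threshold of 10 here is arbitrary.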
Research Interests
Publications (8 total)
AMIA ... Annual Symposium Proceedings, AMIA Symposium (2024): 465-473. Citations: 0
NoDaLiDa, pp. 318-323 (2023). Citations: 0
Research Square (2023). Citations: 0
International Conference on Language Resources and Evaluation (LREC), pp. 4245-4252 (2022). Citations: 16
Linköping Electronic Conference Proceedings: Proceedings of the 18th Scandinavian Conference on Health Informatics (2022)
HUMAN@AAAI Fall Symposium (2021). Citations: 0
Semantic Scholar (2020). Citations: 0