Automatic classification of experimental models in biomedical literature to support searching for alternative methods to animal experiments

Mariana Neves, Antonina Klippert, Fanny Knöspel, Juliane Rudeck, Ailine Stolz, Zsofia Ban, Markus Becker, Kai Diederich, Barbara Grune, Pia Kahnau, Nils Ohnesorge, Johannes Pucher, Gilbert Schönfelder, Bettina Bert, Daniel Butzke

Research Square (2022)

Abstract
Background: European Union legislation requires the replacement of animal experiments with alternative methods whenever such methods are suitable to reach the intended scientific objective. However, searching for alternative methods in the scientific literature is a time-consuming task that requires careful screening of a very large number of experimental biomedical publications. The identification of potentially relevant methods, e.g. organ or cell culture models, or computer simulations, can be supported with text mining tools built specifically for this purpose. Such tools are trained (or fine-tuned) on relevant data sets labeled by human experts.

Methods: We developed the GoldHamster corpus, composed of 1,600 PubMed (Medline) abstracts, in which we manually identified the experimental model used according to a set of eight labels, namely: "in vivo", "organs", "primary cells", "immortal cell lines", "invertebrates", "humans", "in silico" and "other" (models). We recruited 13 annotators with expertise in the biomedical domain and assigned each article to two of them. Three additional rounds of annotation aimed at improving the quality of annotations that had disagreements in the first round. Furthermore, we conducted various supervised machine learning experiments to evaluate the suitability of the corpus for our classification task.

Results: We obtained more than 7,000 abstract-level annotations for the above labels. The inter-annotator agreement (kappa coefficient) varied among labels, ranging from 0.63 (for "other") to 0.82 (for "invertebrates"), with an overall score of 0.74. The best-performing machine learning experiment used the BioBERT pre-trained model fine-tuned on our corpus, which achieved an overall F1-score of 0.82.

Conclusions: We obtained high agreement for most of the labels, and our evaluation demonstrated that the corpus is suitable for training reliable predictive models for the automatic classification of biomedical literature according to the experimental model used. Our "Smart feature-based interactive" search tool (SMAFIRA) will employ this classifier to support the retrieval of alternative methods to animal experiments. The corpus and the source code will be made available.
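For readers who want to reproduce the per-label agreement figures: a minimal sketch of a Cohen's kappa computation for one label, assuming two parallel lists of binary annotator decisions over the doubly annotated abstracts. The variable names and example votes below are illustrative, not taken from the paper's released code.

# Minimal sketch: per-label inter-annotator agreement via Cohen's kappa.
# Assumes binary decisions (label present/absent) from two annotators
# over the same set of abstracts.
from sklearn.metrics import cohen_kappa_score

LABELS = ["in vivo", "organs", "primary cells", "immortal cell lines",
          "invertebrates", "humans", "in silico", "other"]

def label_agreement(annotator_a, annotator_b):
    """Cohen's kappa for one label, given binary votes from two annotators."""
    return cohen_kappa_score(annotator_a, annotator_b)

# Hypothetical votes for the "invertebrates" label on five abstracts.
a = [1, 0, 0, 1, 1]
b = [1, 0, 1, 1, 1]
print(f"kappa = {label_agreement(a, b):.2f}")

Repeating this per label and over the pooled decisions would yield the per-label and overall scores reported above.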
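Since an abstract can describe several experimental models at once, the classification task is multi-label. The abstract does not specify the authors' training setup, so the following is only a sketch of how such a BioBERT fine-tuning could look with the Hugging Face transformers library, assuming the public dmis-lab/biobert-v1.1 checkpoint; the hyperparameters, the 0.5 decision threshold, and the placeholder input are assumptions.

# Minimal sketch: multi-label fine-tuning of BioBERT over the eight labels.
# Checkpoint and setup are assumptions, not the paper's exact configuration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["in vivo", "organs", "primary cells", "immortal cell lines",
          "invertebrates", "humans", "in silico", "other"]

tokenizer = AutoTokenizer.from_pretrained("dmis-lab/biobert-v1.1")
model = AutoModelForSequenceClassification.from_pretrained(
    "dmis-lab/biobert-v1.1",
    num_labels=len(LABELS),
    problem_type="multi_label_classification",  # uses BCE-with-logits loss
)

# One training step on a single (abstract, labels) pair.
abstract = "..."  # a PubMed abstract (placeholder)
targets = torch.tensor([[1., 0., 0., 0., 0., 0., 0., 0.]])  # e.g. "in vivo"

inputs = tokenizer(abstract, truncation=True, max_length=512,
                   return_tensors="pt")
outputs = model(**inputs, labels=targets)
outputs.loss.backward()  # optimizer step omitted for brevity

# At inference, each label is scored independently through a sigmoid.
probs = torch.sigmoid(outputs.logits)
predicted = [lab for lab, p in zip(LABELS, probs[0]) if p > 0.5]

The "multi_label_classification" problem type makes each of the eight labels an independent binary decision, which matches abstracts that report more than one experimental model.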
Keywords
experimental models,biomedical literature,automatic classification,alternative methods