
Identifying Breast Cancer Recurrence in Administrative Data: Algorithm Development and Validation

Current Oncology (2022)

Ontario Health

Abstract
Breast cancer recurrence is an important outcome for patients and healthcare systems, but it is not routinely reported in cancer registries. We developed an algorithm to identify patients who experienced recurrence or a second case of primary breast cancer (combined as a “second breast cancer event”) using administrative data from the population of Ontario, Canada. A retrospective cohort study design was used including patients diagnosed with stage 0-III breast cancer in the Ontario Cancer Registry between 1 January 2009 and 31 December 2012 and alive six months post-diagnosis. We applied the algorithm to healthcare utilization data from six months post-diagnosis until death or 31 December 2013, whichever came first. We validated the algorithm’s diagnostic accuracy against a manual patient record review (n = 2245 patients). The algorithm had a sensitivity of 85%, a specificity of 94%, a positive predictive value of 67%, a negative predictive value of 98%, an accuracy of 93%, a kappa value of 71%, and a prevalence-adjusted bias-adjusted kappa value of 85%. The second breast cancer event rate was 16.5% according to the algorithm and 13.0% according to manual review. Our algorithm’s performance was comparable to previously published algorithms and is sufficient for healthcare system monitoring. Administrative data from a population can, therefore, be interpreted using new methods to identify new outcome measures.
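All of the validation metrics reported above derive from the 2x2 confusion matrix produced by comparing the algorithm's classification against the manual record review. The sketch below is not from the paper and the counts are hypothetical placeholders; it only illustrates how sensitivity, specificity, PPV, NPV, accuracy, Cohen's kappa, and the prevalence-adjusted bias-adjusted kappa (PABAK) are computed from such a table.

def validation_metrics(tp, fp, fn, tn):
    """Diagnostic accuracy metrics for a binary algorithm vs. a reference standard."""
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)      # true positive rate
    specificity = tn / (tn + fp)      # true negative rate
    ppv = tp / (tp + fp)              # positive predictive value
    npv = tn / (tn + fn)              # negative predictive value
    accuracy = (tp + tn) / n          # overall observed agreement
    # Cohen's kappa: observed agreement corrected for chance agreement
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (accuracy - p_chance) / (1 - p_chance)
    # Prevalence-adjusted bias-adjusted kappa: 2 * observed agreement - 1
    pabak = 2 * accuracy - 1
    return {"sensitivity": sensitivity, "specificity": specificity,
            "ppv": ppv, "npv": npv, "accuracy": accuracy,
            "kappa": kappa, "pabak": pabak}

# Hypothetical counts for illustration only (not the study's data):
print(validation_metrics(tp=240, fp=110, fn=40, tn=1610))

Because PABAK depends only on overall agreement, it is less sensitive to outcome prevalence than the raw kappa; with a low-prevalence outcome such as a second breast cancer event, chance agreement is high and depresses kappa, which is consistent with the reported kappa of 71% versus PABAK of 85%.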
Key words
breast neoplasms, neoplasm recurrence, local, recurrence, algorithms, outcome assessment, healthcare, predictive value of tests, diagnostic techniques and procedures, prevalence, humans, cohort studies