Developing Measures of Cognitive Impairment in the Real World from Consumer-Grade Multimodal Sensor Streams

pp. 2145–2155 (2019)


Abstract

The ubiquity and remarkable technological progress of wearable consumer devices and mobile-computing platforms (smart phone, smart watch, tablet), along with the multitude of sensor modalities available, have enabled continuous monitoring of patients and their daily activities. Such rich, longitudinal information can be mined for physiolo… [abstract truncated]

Introduction
  • An estimated 5.7 million Americans and 46.8 million people worldwide live with dementia with a global cost of approximately $1 trillion [32].
  • Despite this prevalence, early diagnosis remains a clinical challenge and is time-consuming.
  • Efforts to reduce these limitations have focused on the computerization of assessments, such as the CogState CBB [29], but computerized tests are still limited [3]
Highlights
  • An estimated 5.7 million Americans and 46.8 million people worldwide live with dementia with a global cost of approximately $1 trillion [32]
  • Area Under the ROC Curve (AUROC), which is optimized by ranking positive examples ahead of negative examples, is an appropriate metric of success for the intended application of targeting interventions
  • The AUROC of the model increased to 0.804 (AUPRC = 0.701) when demographics were added to the feature set
  • We demonstrated the utility of using device-derived features to detect cognitive impairment in the small cohort of 31 symptomatics and 82 healthy controls included in the analysis, presenting a model achieving AUROC=0.80 using device-derived features and demographic data
  • Other digital assessments to discriminate between Alzheimer’s disease (AD) and healthy controls have been tested, including typing speed, speech and language, eye movements, and pupillary reflex [23]
  • We explored using TICC [18], which was recently adopted on another study on AD dementia using actigraphy data [25], but found that it was too sensitive to missing data to be applied to the current data set
Methods
  • The authors chose modeling techniques that provide direct interpretability of the results in feature space.
  • Even though methods based on representation learning that directly model outcomes from the raw time series [26] are becoming increasingly popular in the medical machine learning community, interpretability of findings, model diagnostics, and overall complexity of the developed models remain largely unsolved issues [16]
  • [Pipeline figure: hyper-parameters are tuned with 3-fold cross-validation grouped by user, the parameters with the highest mean AUC are selected for training, and the resulting models are scored at both the bi-week and participant level]
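The validation scheme described above can be sketched as follows. Hyper-parameters are chosen by 3-fold cross-validation with folds grouped by user, keeping the setting with the highest mean AUC, so a model is never evaluated on users it saw during training. The paper uses XGBoost [8] tuned with Hyperopt [4]; this sketch substitutes scikit-learn's gradient boosting and a two-point grid, and all data, cohort sizes, and feature counts here are hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
n_users, biweeks = 40, 6                      # hypothetical cohort
X = rng.normal(size=(n_users * biweeks, 8))   # one row per user-biweek
users = np.repeat(np.arange(n_users), biweeks)
y = np.repeat(rng.integers(0, 2, n_users), biweeks)  # one label per user
X[y == 1] += 0.5                              # inject a weak synthetic signal

best_auc, best_params = -np.inf, None
for params in [{"max_depth": 2}, {"max_depth": 3}]:  # stand-in for Hyperopt search
    aucs = []
    # GroupKFold keeps all bi-weeks of a user in the same fold
    for tr, te in GroupKFold(n_splits=3).split(X, y, groups=users):
        model = GradientBoostingClassifier(random_state=0, **params).fit(X[tr], y[tr])
        aucs.append(roc_auc_score(y[te], model.predict_proba(X[te])[:, 1]))
    if np.mean(aucs) > best_auc:
        best_auc, best_params = np.mean(aucs), params

print("selected:", best_params, "mean AUC:", round(best_auc, 3))
```

Grouping folds by user is what lets the bi-week scores be aggregated into an honest participant-level AUC afterwards.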
Results
  • The authors measure performance using Area Under the Receiver Operating Characteristic curve (AUROC), averaged across splits.
  • The authors report Area Under the Precision-Recall Curve (AUPRC, computed as average precision over all possible recall thresholds), which is a more informative metric in the case where the emphasis is on accurate identification of the positives with a majority of negative samples [35].
  • Device-derived features alone were more precise on average than demographics alone (AUPRC=0.628 vs 0.546) in identifying symptomatic participants.
  • When comparing AUROC and AUPRC scores between the demographics-only models and the models that included device-derived features, all scores were significantly different (p<0.0001), except for the demographics vs device-derived features trained on the full cohort (p=0.2).
  • The authors repeated the training/test procedure on a dataset with randomly shuffled labels, and found that AUROC scores of biweek- and user-level models were not significantly different from a randomly performing model (AUROC 0.5)
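The evaluation above can be illustrated on synthetic scores (all data here is invented): AUROC and AUPRC (computed as average precision) on the true labels, then AUROC on randomly shuffled labels, which should fall back to chance.

```python
import numpy as np
from sklearn.metrics import average_precision_score, roc_auc_score

rng = np.random.default_rng(0)
y = rng.random(500) < 0.3                  # ~30% positives, roughly like 31/113
scores = rng.normal(size=500) + y          # scores weakly correlated with labels

print("AUROC:", round(roc_auc_score(y, scores), 3))
print("AUPRC:", round(average_precision_score(y, scores), 3))

y_shuf = rng.permutation(y)                # shuffled labels: chance performance
print("shuffled AUROC:", round(roc_auc_score(y_shuf, scores), 3))
```

Under label shuffling AUROC reverts to about 0.5, while the chance level of AUPRC equals the positive prevalence rather than 0.5, which is why AUPRC is the more informative metric when positives are a minority [35].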
Conclusion
  • The goal of this study was to assess the feasibility of collecting data in cognitively impaired individuals and healthy controls from multiple smart devices and to test whether the data can differentiate between these groups.
  • The authors demonstrated the utility of using device-derived features to detect cognitive impairment in the small cohort of 31 symptomatics and 82 healthy controls included in the analysis, presenting a model achieving AUROC=0.80 using device-derived features and demographic data.
  • The RADAR-AD study measures disability progression associated with AD using smart phones, wearables, and home-based sensors
Tables
  • Table1: Sources of data collected in this study, along with their sampling rates and estimated sizes. Data size estimates are reported in MB collected per participant per day. *Data sources are outside the scope of this paper
  • Table2: Summary of aggregations applied to minute-level data during feature computation. Features for the active psychomotor tasks are not reported here. (Abbreviations: TOD, time of day; IQR, inter-quartile range, pctl: percentile)
  • Table3: Summary of modeling results
  • Table4: Top 5 feature descriptions and cohort means for Healthy Controls (gray) and Symptomatics (blue)
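The Table 2 style of feature computation can be sketched with pandas: minute-level sensor values are aggregated per participant into summary statistics such as the mean, inter-quartile range (IQR), and percentiles, optionally split by time of day. The column names, the day/night split, and the data below are hypothetical stand-ins for the study's sensor streams.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
idx = pd.date_range("2018-01-01", periods=2 * 1440, freq="min")  # two days, minute-level
df = pd.DataFrame({
    "participant": "p01",
    "timestamp": idx,
    "heart_rate": rng.normal(70, 8, size=len(idx)),  # synthetic minute-level stream
})
# Hypothetical time-of-day (TOD) split into daytime and nighttime minutes
df["tod"] = np.where(df["timestamp"].dt.hour.between(8, 19), "day", "night")

# Aggregate each participant/TOD group into named summary features
feats = (
    df.groupby(["participant", "tod"])["heart_rate"]
      .agg(mean="mean",
           iqr=lambda s: s.quantile(0.75) - s.quantile(0.25),
           pctl90=lambda s: s.quantile(0.9))
)
print(feats)
```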
References
  • Joaquin A Anguera, Jacqueline Boccanfuso, James L Rintoul, Omar Al-Hashimi, Farhoud Faraji, Jacqueline Janowich, Eric Kong, Yudy Larraburo, Christine Rolle, and Eric Johnston. 2013. Video game training enhances cognitive control in older adults. Nature 501, 7465 (2013), 97–101.
  • Rabeea’h W Aslam, Vickie Bates, Yenal Dundar, Juliet Hounsome, Marty Richardson, Ashma Krishan, Rumona Dickson, Angela Boland, Joanne Fisher, Louise Robinson, et al. 2018. A systematic review of the diagnostic accuracy of automated tests for cognitive impairment. International Journal of Geriatric Psychiatry 33, 4 (2018), 561–575.
  • Russell M Bauer, Grant L Iverson, Alison N Cernich, Laurence M Binder, Ronald M Ruff, and Richard I Naugle. 2012. Computerized neuropsychological assessment devices: joint position paper of the American Academy of Clinical Neuropsychology and the National Academy of Neuropsychology. The Clinical Neuropsychologist 26, 2 (2012), 177–196.
  • James Bergstra, Dan Yamins, and David D Cox. 2013. Hyperopt: A Python library for optimizing the hyperparameters of machine learning algorithms. In Proceedings of the 12th Python in Science Conference. Citeseer, 13–20.
  • Andrea Bradford, Mark E Kunik, Paul Schulz, Susan P Williams, and Hardeep Singh. 2009. Missed and delayed diagnosis of dementia in primary care: prevalence and contributing factors. Alzheimer Disease and Associated Disorders 23, 4 (2009), 306.
  • Zhengping Che, Sanjay Purushotham, Kyunghyun Cho, David Sontag, and Yan Liu. 2018. Recurrent neural networks for multivariate time series with missing values. Scientific Reports 8, 1 (2018), 6085.
  • Irene Chen, Fredrik D Johansson, and David Sontag. 2018. Why Is My Classifier Discriminatory? arXiv preprint arXiv:1805.12002 (2018).
  • Tianqi Chen and Carlos Guestrin. 2016. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 785–794.
  • Clinical Trials Transformation Initiative (CTTI). 2018. CTTI Recommendations: Advancing the Use of Mobile Technologies for Data Capture & Improved Clinical Trials - Data Flow Diagram. www.ctti-clinicaltrials.org/sites/www.ctti-clinicaltrials.org/files/data-flow-diagram.pdf
  • Bruce N Cuthbert. 2019. The PRISM project: Social withdrawal from an RDoC perspective. Neuroscience & Biobehavioral Reviews (2019), 34–37.
  • E Ray Dorsey, Michael V McConnell, Stanley Y Shaw, Andrew D Trister, Stephen H Friend, et al. 2017. The use of smartphones for health research. Academic Medicine 92, 2 (2017), 157–160.
  • Marshal F. Folstein, Susan E. Folstein, and Paul R. McHugh. 1975. "Mini-mental state": A practical method for grading the cognitive state of patients for the clinician. Journal of Psychiatric Research 12, 3 (1975), 189–198.
  • US Food and Drug Administration (FDA). 2018. FDA launches new digital tool to help capture real world data from patients to help inform regulatory decisionmaking. www.fda.gov/NewsEvents/Newsroom/FDAInBrief/ucm625228.htm
  • US Food and Drug Administration (FDA). 2018. Framework for FDA’s Real-World Evidence Program. www.fda.gov/downloads/ScienceResearch/SpecialTopics/RealWorldEvidence/UCM627769.pdf
  • Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford. 2018. Datasheets for Datasets. arXiv preprint arXiv:1803.09010 (2018).
  • Marzyeh Ghassemi, Tristan Naumann, Peter Schulam, Andrew L Beam, and Rajesh Ranganath. 2018. Opportunities in Machine Learning for Healthcare. arXiv preprint arXiv:1806.00388 (2018).
  • Terry E Goldberg, Philip D Harvey, Keith A Wesnes, Peter J Snyder, and Lon S Schneider. 2015. Practice effects due to serial cognitive assessment: implications for preclinical Alzheimer’s disease randomized controlled trials. Alzheimer’s & Dementia: Diagnosis, Assessment & Disease Monitoring 1, 1 (2015), 103–111.
  • David Hallac, Sagar Vare, Stephen Boyd, and Jure Leskovec. 2017. Toeplitz inverse covariance-based clustering of multivariate time series data. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 215–223.
  • S. Hoops, S. Nazem, A. D. Siderowf, J. E. Duda, S. X. Xie, M. B. Stern, and D. Weintraub. 2009. Validity of the MoCA and MMSE in the detection of MCI and dementia in Parkinson disease. Neurology 73, 21 (2009), 1738–1745.
  • Diane Howieson. 2019. Current limitations of neuropsychological tests and assessment procedures. The Clinical Neuropsychologist 0, 0 (2019), 1–9.
  • Clifford R. Jack, Marilyn S. Albert, David S. Knopman, Guy M. McKhann, Reisa A. Sperling, Maria C. Carrillo, Bill Thies, and Creighton H. Phelps. 2011. Introduction to the recommendations from the National Institute on Aging-Alzheimer's Association workgroups on diagnostic guidelines for Alzheimer's disease. Alzheimer's & Dementia 7, 3 (May 2011), 257–262.
  • Jeffrey A Kaye, Shoshana A Maxwell, Nora Mattek, Tamara L Hayes, Hiroko Dodge, Misha Pavel, Holly B Jimison, Katherine Wild, Linda Boise, and Tracy A Zitzelberger. 2011. Intelligent systems for assessing aging changes: home-based, unobtrusive, and continuous assessment of aging. Journals of Gerontology Series B: Psychological Sciences and Social Sciences 66, suppl_1 (2011), i180–i190.
  • Lampros C Kourtis, Oliver B Regele, Justin M Wright, and Graham Jones. 2019. Digital biomarkers for Alzheimer’s disease: the mobile/wearable devices opportunity. NPJ Digital Medicine (2019).
  • C. Leurent, E. Pickering, J. Goodman, S. Duvvuri, P. He, E. Martucci, S. Kellogg, D. Purcell, J. Barakos, G. Klein, JW. Kupiec, and R. Alexander. 2016. A Randomized, Double-Blind, Placebo-Controlled Trial to Study Difference in Cognitive Learning Associated with Repeated Self-administration of Remote Computer Tablet-based Application Assessing Dual Task Performance Based on Amyloid Status in Healthy Elderly Volunteers. 4 (2016), 280–281.
  • Jia Li, Yu Rong, Helen Meng, Zhihui Lu, Timothy Kwok, and Hong Cheng. 2018. TATC: Predicting Alzheimer’s Disease with Actigraphy Data. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. ACM, 509–518.
  • Zachary C Lipton, David C Kale, Charles Elkan, and Randall Wetzel. 2015. Learning to diagnose with LSTM recurrent neural networks. arXiv preprint arXiv:1511.03677 (2015).
  • Yun Liu, Krishna Gadepalli, Mohammad Norouzi, George E Dahl, Timo Kohlberger, Aleksey Boyko, Subhashini Venugopalan, Aleksei Timofeev, Philip Q Nelson, Greg S Corrado, et al. 2017. Detecting cancer metastases on gigapixel pathology images. arXiv preprint arXiv:1703.02442 (2017).
  • Scott M Lundberg and Su-In Lee. 2017. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems. 4765–4774.
  • P. Maruff, E. Thomas, L. Cysique, B. Brew, A. Collie, P. Snyder, and R. H. Pietrzak. 2009. Validity of the CogState Brief Battery: Relationship to Standardized Tests and Sensitivity to Cognitive Impairment in Mild Traumatic Brain Injury, Schizophrenia, and AIDS Dementia Complex. Archives of Clinical Neuropsychology 24, 2 (Mar 2009), 165–178.
  • Manuel M Montero-Odasso, Yanina Sarquis-Adamson, Mark Speechley, Michael J Borrie, Vladimir C Hachinski, Jennie Wells, Patricia M Riccio, Marcelo Schapira, Ervin Sejdic, Richard M Camicioli, et al. 2017. Association of dual-task gait with incident dementia in mild cognitive impairment: results from the gait and brain study. JAMA Neurology 74, 7 (2017), 857–865.
  • Ziad S. Nasreddine, Natalie A. Phillips, Valérie Bédirian, Simon Charbonneau, Victor Whitehead, Isabelle Collin, Jeffrey L. Cummings, and Howard Chertkow. 2005. The Montreal Cognitive Assessment, MoCA: A Brief Screening Tool For Mild Cognitive Impairment. Journal of the American Geriatrics Society 53, 4 (2005), 695–699.
  • World Health Organization et al. 2017. Global action plan on the public health response to dementia 2017–2025. (2017).
  • Emma Pierson, Tim Althoff, and Jure Leskovec. 2018. Modeling Individual Cyclic Variation in Human Behavior. In Proceedings of the 2018 World Wide Web Conference on World Wide Web. International World Wide Web Conferences Steering Committee, 107–116.
  • Tom Quisel, Luca Foschini, Alessio Signorini, and David C Kale. 2017. Collecting and analyzing millions of mhealth data streams. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 1971–1980.
  • Takaya Saito and Marc Rehmsmeier. 2015. The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets. PLoS ONE 10, 3 (2015), e0118432.
  • G Stringer, S Couth, LJE Brown, D Montaldi, A Gledson, et al. 2018. Can you detect early dementia from an email? A proof of principle study of daily computer use to detect cognitive and functional decline. International Journal of Geriatric Psychiatry 33 (2018), 867–874.
  • Matei Zaharia, Reynold S Xin, Patrick Wendell, Tathagata Das, Michael Armbrust, Ankur Dave, Xiangrui Meng, Josh Rosen, Shivaram Venkataraman, Michael J Franklin, et al. 2016. Apache Spark: a unified engine for big data processing. Commun. ACM 59, 11 (2016), 56–65.
  • Margaret Mitchell, et al. 2018. Model Cards for Model Reporting. arXiv preprint arXiv:1810.03993 (2018).
Data collection
  • Data was collected over a 12-week period for each participant; in all, data was collected from December 2017 to November 2018.