Unremarkable AI: Fitting Intelligent Decision Support into Critical, Clinical Decision-Making Processes

CHI 2019, Paper 238.

Keywords:
decision support systems, healthcare, user experience

Abstract:

Clinical decision support tools (DSTs) promise improved healthcare outcomes by offering data-driven insights. While effective in lab settings, almost all DSTs have failed in practice. Empirical research diagnosed poor contextual fit as the cause. This paper describes the design and field evaluation of a radically new form of DST. It automa...

Introduction
  • The idea of leveraging machine intelligence in healthcare in the form of decision support tools (DSTs) has fascinated healthcare and AI researchers for decades.
  • With the adoption of electronic medical records and the explosive technical advances in machine learning (ML) in recent years, now seems a perfect time for DSTs to impact healthcare practice.
  • Yet almost all of these tools have failed when migrating from research labs to clinical practice over the past 30 years [5, 8, 9].
  • The interaction design of most clinical decision support tools instead assumes that individual clinicians will recognize when they need help, walk up and use a system that is separate from the electronic health record, and that they want and will trust the system’s output.
Highlights
  • The idea of leveraging machine intelligence in healthcare in the form of decision support tools (DSTs) has fascinated healthcare and AI researchers for decades
  • In a review of deployed decision support tools, healthcare researchers ranked the lack of HCI considerations as the most likely reason for failure [12, 23]
  • The interaction design of most clinical decision support tools instead assumes that individual clinicians will recognize when they need help, walk up and use a system that is separate from the electronic health record, and that they want and will trust the system’s output.
  • We wanted decision makers to encounter the computational advice at a relevant time and place in the decision process, and we wanted this support to slow them down only for the few cases where the decision support tool adds value to the decision. This design draws inspiration from Tolmie et al.’s notion of Unremarkable Computing: that technology needs the right level of remarkableness to valuably situate itself in people’s emerging routines and become the glue of their everyday lives [22]. This paper presents this decision support tool’s interaction design as well as a field evaluation at three VAD implant hospitals.
  • We report findings across the three sites related to the aforementioned assessment goals: the likelihood of encountering the decision support tool during decision-making, its acceptance, the right level of remarkableness, and finally, generalizability to other kinds of medical decisions.
  • Prior work suggests that the current interaction convention, in which clinicians must recognize their own need for a decision support tool’s help and walk up and use a system separate from the electronic medical record, is not likely to work [6].
Methods
  • DESIGN PROCESS AND RATIONALE

    The authors set out to design a new form of DST for VAD patient selection, exploring how to overcome the real-world adoption barriers that many prognostic DSTs face.
  • The authors wanted to assess the design within the context of an actual implant decision meeting in order to observe whether it impacted discussion.
  • This proved to be impractical.
  • None of the sites would allow them to present slides showing information for the patients they were currently implanting.
  • All felt this could impact the life-and-death decision.
Results
  • The authors first offer an overview of observations from the individual sites, describing the different cultures, facilities, and practices.
  • The authors report findings across the three sites related to the aforementioned assessment goals: the likelihood of encountering the DST during decision-making, the acceptance of the DST, the right level of remarkableness, and finally, generalizability to other kinds of medical decisions.
  • Hospital A was the least technologically advanced.
  • They had recently transitioned from paper-based to electronic clinical records.
  • Many common web services, such as Google search, were blocked on their internal network
Conclusion
  • DESIGNING AND EVALUATING DST AS A SITUATED EXPERIENCE

    Clinical DSTs, despite compelling evidence of their effectiveness in labs, have mostly failed when moving out of labs and into healthcare practice [16, 20].
  • There is a real need to design DSTs not only as a functional utility but as an integrated experience.
  • Their effectiveness should be measured not only by prediction accuracy, but also by how well they work when situated within their social and physical context, such as workplace culture and social structures.
  • This presents exciting new opportunities and challenges for HCI and UX research.
Related work
  • Clinical Decision Support Tools in Practice

    Clinical decision support tools (DSTs) are computational systems that support one of three tasks: diagnosing patients, selecting treatments, or making prognostic predictions of the likely course of a disease or outcome of a treatment [25].

    This project focuses on clinician-facing, prognostic DSTs. A significant strand of recent HCI work has focused on critical issues in this area, including AI interpretability and fairness, data visualization, accuracy of risk communication, and more [18, 19, 21]. The significance of this body of work has led some to describe it as “the rise of design science in clinical DST research” [1]. These studies typically investigated DSTs in lab settings, using prototypes dedicated to a single clinical decision. Clinicians stepped out of their day-to-day workflow, used these systems for a pre-identified task, then provided feedback on the system design.
Funding
  • This work was supported by a grant from the NIH National Heart, Lung, and Blood Institute (NHLBI), # 1R01HL122639-01A1.
  • The first author was also supported by the Center for Machine Learning and Health (CMLH) Fellowships in Digital Health.
References
  • David Arnott and Graham Pervan. 2014. A critical analysis of decision support systems research revisited: the rise of design science. Journal of Information Technology 29, 4 (Dec 2014), 269–293. https://doi.org/10.1057/jit.2014.16
  • Raymond L Benza, Dave P Miller, Robyn J Barst, David B Badesch, Adaani E Frost, and Michael D McGoon. 2012. An evaluation of long-term survival from time of diagnosis in pulmonary arterial hypertension from the REVEAL Registry. CHEST Journal 142, 2 (2012), 448–456.
  • David Coyle and Gavin Doherty. 2009. Clinical Evaluations and Collaborative Design: Developing New Technologies for Mental Healthcare Interventions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’09). ACM, New York, NY, USA, 2051–2060. https://doi.org/10.1145/1518701.1519013
  • Srikant Devaraj, Sushil K Sharma, Dyan J Fausto, Sara Viernes, and Hadi Kharrazi. 2014. Barriers and Facilitators to Clinical Decision Support Systems Adoption: A Systematic Review. Journal of Business Administration Research 3, 2 (2014), p36.
  • Glyn Elwyn, Isabelle Scholl, Caroline Tietbohl, Mala Mann, Adrian GK Edwards, Catharine Clay, France Légaré, Trudy van der Weijden, Carmen L Lewis, Richard M Wexler, et al. 2013. “Many miles to go...”: a systematic review of the implementation of patient decision support interventions into routine clinical practice. BMC Medical Informatics and Decision Making 13, Suppl 2 (2013), S14.
  • Hidden for Anonymity During Review. 201Hidden for Anonymity During Review.
  • Karine Gravel, France Légaré, and Ian D Graham. 2006. Barriers and facilitators to implementing shared decision-making in clinical practice: a systematic review of health professionals’ perceptions. Implementation Science 1, 1 (2006), 16.
  • Monique WM Jaspers, Marian Smeulers, Hester Vermeulen, and Linda W Peute. 2011. Effects of clinical decision-support systems on practitioner performance and patient outcomes: a synthesis of high-quality systematic review findings. Journal of the American Medical Informatics Association 18, 3 (2011), 327–334.
  • Kensaku Kawamoto, Caitlin A Houlihan, E Andrew Balas, and David F Lobach. 2005. Improving clinical practice using clinical decision support systems: a systematic review of trials to identify features critical to success. BMJ 330, 7494 (2005), 765.
  • Leah Kulp and Aleksandra Sarcevic. 2018. Design In The “Medical” Wild: Challenges Of Technology Deployment. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA ’18). ACM, New York, NY, USA, Article LBW040, 6 pages. https://doi.org/10.1145/3170427.3188571
  • Bill Moggridge. 2007. Designing Interactions. Vol. 14.
  • Mark A Musen, Blackford Middleton, and Robert A Greenes. 2014.
  • Annette M O’Connor, John E Wennberg, France Legare, Hilary A Llewellyn-Thomas, Benjamin W Moulton, Karen R Sepucha, Andrea G Sodano, and Jaime S King. 2007. Toward the ‘tipping point’: decision aids and informed patient choice. Health Affairs 26, 3 (2007), 716–725.
  • Brindha Pillay, Addie C Wootten, Helen Crowe, Niall Corcoran, Ben Tran, Patrick Bowden, Jane Crowe, and Anthony J Costello. 2016. The impact of multidisciplinary team meetings on patient assessment, management and outcomes in oncology settings: a systematic review of the literature. Cancer Treatment Reviews 42 (2016), 56–72.
  • Kate Sellen, Dominic Furniss, Yunan Chen, Svetlena Taneva, Aisling Ann O’Kane, and Ann Blandford. 2014. Workshop Abstract: HCI Research in Healthcare: Using Theory from Evidence to Practice. In CHI ’14 Extended Abstracts on Human Factors in Computing Systems (CHI EA ’14). ACM, New York, NY, USA, 87–90. https://doi.org/10.1145/2559206.2559240
  • Dean F Sittig, Adam Wright, Jerome A Osheroff, Blackford Middleton, Jonathan M Teich, Joan S Ash, Emily Campbell, and David W Bates. 2008. Grand challenges in clinical decision support. Journal of Biomedical Informatics 41 (2008), 387–392.
  • Mark S. Slaughter, Francis D. Pagani, Joseph G. Rogers, Leslie W. Miller, Benjamin Sun, Stuart D. Russell, Randall C. Starling, Leway Chen, Andrew J. Boyle, Suzanne Chillcott, Robert M. Adamson, Margaret S. Blood, Margarita T. Camacho, Katherine A. Idrissi, Michael Petty, Michael Sobieski, Susan Wright, Timothy J. Myers, and David J. Farrar. 2010. Clinical management of continuous-flow left ventricular assist devices in advanced heart failure. The Journal of Heart and Lung Transplantation 29, 4, Supplement (2010), S1–S39. https://doi.org/10.1016/j.healun.2010.01.011
  • Nicole Sultanum, Michael Brudno, Daniel Wigdor, and Fanny Chevalier. 2018. More Text Please! Understanding and Supporting the Use of Visualization for Clinical Text Overview. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (CHI ’18). ACM, New York, NY, USA, Article 422, 13 pages. https://doi.org/10.1145/3173574.3173996
  • Alan R Tait, Terri Voepel-Lewis, Brian J Zikmund-Fisher, and Angela Fagerlin. 2010. The effect of format on parents’ understanding of the risks and benefits of clinical research: a comparison between text, tables, and graphics. Journal of Health Communication 15, 5 (2010), 487–501.
  • Svetlena Taneva, Waxberg Sara, Goss Julian, Rossos Peter, Nicholas Emily, and Cafazzo Joseph. 2014. The Meaning of Design in Healthcare: Industry, Academia, Visual Design, Clinician, Patient and HF Consultant Perspectives. In Proceedings of the Extended Abstracts of the 32nd Annual ACM Conference on Human Factors in Computing Systems (CHI EA ’14). ACM, New York, NY, USA, 1099–1104. https://doi.org/10.1145/2559206.2579407
  • Danielle Timmermans, Bert Molewijk, Anne Stiggelbout, and Job Kievit. 2004. Different formats for communicating surgical risks to patients and the effect on choice of treatment. Patient Education and Counseling 54, 3 (2004), 255–263.
  • Peter Tolmie, James Pycock, Tim Diggins, Allan MacLean, and Alain Karsenty. 2002. Unremarkable Computing. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI ’02). ACM, New York, NY, USA, 399–406. https://doi.org/10.1145/503376.503448
  • Robert L Wears and Marc Berg. 2005. Computer technology and clinical work: still waiting for Godot. JAMA 293, 10 (2005), 1261–1263.
  • Jeremy C Wyatt and Douglas G Altman. 1995. Commentary: Prognostic models: clinically useful or quickly forgotten? BMJ 311, 7019 (1995), 1539–1541.
  • Qian Yang, John Zimmerman, and Aaron Steinfeld. 2015. Review of Medical Decision Support Tools: Emerging Opportunity for Interaction Design. In IASDR 2015 Interplay Proceedings.
  • Qian Yang, John Zimmerman, Aaron Steinfeld, Lisa Carey, and James F. Antaki. 2016. Investigating the Heart Pump Implant Decision Process: Opportunities for Decision Support Tools to Help. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ’16). ACM, New York, NY, USA, 4477–4488. https://doi.org/10.1145/2858036.2858373
Best Paper of CHI 2019