Understanding Public Evaluation: Quantifying Experimenter Intervention

CHI, pp. 3414-3425, 2017.

Keywords:
covert observer, real world, In the Wild Methods, overt observer, steward observer

Abstract:

Public evaluations are popular because some research questions can only be answered by turning "to the wild." Different approaches place experimenters in different roles during deployment, which has implications for the kinds of data that can be collected and the potential bias introduced by the experimenter. This paper expands our unde...

Introduction
  • The butterfly effect refers to small changes - metaphorically, the fluttering of a butterfly's wings - having a large knock-on effect on the weather.
  • The authors restrict the scope to “in the wild” studies of stationary technology in public and semi-public places, for example public displays and interactive installations.
  • This involves intervening within a real-world place.
  • The authors refer to studies of these interventions as public evaluations throughout this paper.
Highlights
  • In meteorology, the butterfly effect refers to small changes - metaphorically, the fluttering of a butterfly's wings - having a large knock-on effect on the weather
  • This paper provides an empirical basis to consider the effect of experimenter roles in different evaluation approaches
  • The order of the condition blocks was randomised, with the display installed during daytime hours between 10:00 and 18:00 (see the randomisation sketch after this list)
  • Our evaluation cast the experimenter as a covert observer, overt observer, and steward observer
  • We propose that systematic control of experimenter roles in public evaluations, and the use of high-density, high-quality measurements like pedestrian tracking are essential in quantifying the observer effect in the fragile and unstable domain of public evaluations
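The randomised ordering of condition blocks mentioned above can be illustrated with a short sketch. This is an assumed procedure written in Python, not the authors' actual scheduling script; the day labels are placeholders, and only the three observer-role conditions come from the paper.

```python
import random

# Assumed procedure for randomising the order of condition blocks across
# study days; illustrative only, not the authors' actual scheduling code.
CONDITIONS = ["covert observer", "overt observer", "steward observer"]

def randomised_schedule(days, seed=None):
    """Return a shuffled ordering of the three condition blocks for each day.

    Blocks are assumed to run while the display is installed between
    10:00 and 18:00.
    """
    rng = random.Random(seed)
    return {day: rng.sample(CONDITIONS, len(CONDITIONS)) for day in days}

if __name__ == "__main__":
    for day, order in randomised_schedule(["Mon", "Tue", "Wed", "Thu"], seed=1).items():
        print(day, "->", " | ".join(order))
```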
Results
  • Data was collected in two separate weeks to capture “high” and “low” pedestrian traffic levels, as shown in Figure 4.
  • The order of the condition blocks was randomised, with the display installed during daytime hours between 10:00 and 18:00.
  • These results are based on a total of twelve hours of interaction data and twenty hours of baseline pedestrian motion data gathered in the walkway.
  • Figure 4 shows the footfall per hour during the baseline datasets and Figure 6 shows visualisations of the pedestrian traffic on the baseline days (see the footfall-counting sketch after this list).
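Footfall per hour can be derived directly from the pedestrian-tracking data. The sketch below assumes each tracked silhouette comes with a timestamp for when it first entered the walkway; the record format, field names, and values are hypothetical, not the paper's data schema.

```python
from collections import Counter
from datetime import datetime

# Hypothetical tracking records: one entry per silhouette tracked by the
# depth camera, with the time it first appeared in the walkway.
# Field names and values are illustrative, not the paper's data.
tracks = [
    {"track_id": 1, "first_seen": datetime(2017, 5, 1, 10, 12)},
    {"track_id": 2, "first_seen": datetime(2017, 5, 1, 10, 47)},
    {"track_id": 3, "first_seen": datetime(2017, 5, 1, 11, 5)},
]

def footfall_per_hour(tracks):
    """Count distinct tracked pedestrians whose first appearance falls in each hour."""
    hours = (t["first_seen"].replace(minute=0, second=0, microsecond=0) for t in tracks)
    return dict(sorted(Counter(hours).items()))

for hour, count in footfall_per_hour(tracks).items():
    print(f"{hour:%H}:00  footfall = {count}")
```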
Conclusion
  • Perhaps the most important result is the significantly lowered conversion rate when an overt observer is present (see the conversion-rate sketch after this list).
  • The authors propose that systematic control of experimenter roles in public evaluations, and the use of high-density, high-quality measurements like pedestrian tracking are essential in quantifying the observer effect in the fragile and unstable domain of public evaluations.
  • This protocol gives qualitative researchers a way to bracket the authenticity of their results with quantitative, replicable metrics.
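The drop in conversion rate under an overt observer can be quantified from counts like those in Table 1. The sketch below uses made-up counts (not the paper's data) and a chi-square test of independence as one plausible way to test for a difference across conditions; the paper's exact statistical procedure is not reproduced here.

```python
from scipy.stats import chi2_contingency

# Made-up counts (NOT the paper's data): passers-by and users per condition.
counts = {
    "covert observer":  {"passers_by": 1200, "users": 96},
    "overt observer":   {"passers_by": 1150, "users": 52},
    "steward observer": {"passers_by": 1180, "users": 110},
}

# Conversion rate: the share of passers-by who went on to use the display.
for condition, c in counts.items():
    print(f"{condition:>17}: {c['users'] / c['passers_by']:.1%} conversion")

# 3x2 contingency table of (users, non-users) per condition; a chi-square
# test of independence asks whether conversion differs across observer roles.
table = [[c["users"], c["passers_by"] - c["users"]] for c in counts.values()]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.4f}")
```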
Tables
  • Table 1: Total numbers of passers-by and users observed during each condition. Users are counted as all silhouettes captured by the depth camera. Users are broken down into more refined categories in Figure 10.
Best Paper of CHI, 2017