Affinity Lens - Data-Assisted Affinity Diagramming with Augmented Reality

Hariharan Subramonyam

CHI 2019, Paper No. 398.

Keywords: affinity diagramming, augmented reality, visual analytics
With Affinity Lens, we have demonstrated how data-assisted affinity diagrams can be implemented with low-cost mobile devices while maintaining the lightweight benefits of existing affinity diagramming practice

Abstract:

Despite the availability of software to support Affinity Diagramming (AD), practitioners still largely favor physical sticky notes. Physical notes are easy to set up, can be moved around in space, and offer flexibility when clustering unstructured data. However, when working with mixed data sources such as surveys, designers often trade o…

Introduction
  • Affinity Diagrams (AD) and related approaches are the method of choice for many designers and UX researchers.
  • AD supports analysis and synthesis of interview notes, brainstorming, creating user personas, and evaluating interactive prototypes [24].
  • Notes can be placed on walls or surfaces in a way that leverages spatial cognition, offers flexibility in grouping and clustering, and physically persists
  • Both individuals and groups can participate on large shared surfaces.
  • AD users work to derive structure from inherently fuzzy and seemingly unstructured input.
Highlights
  • Affinity Diagrams (AD) and related approaches are the method of choice for many designers and UX researchers
  • We used the same task and study protocol as in section 3, but instead of having the data directly printed on the notes, we added an ArUco marker to bind the note to a data row
  • To encourage discussion between participants, we provided only a single Android mobile device (5.5 inches, 1440 × 2560 pixels) with Affinity Lens running in the Chrome browser
  • As designers are increasingly working with sources of information that consist of both qualitative and quantitative data, they often desire analytical power beyond physical sticky notes
  • With Affinity Lens, we have demonstrated how data-assisted affinity diagrams can be implemented with low-cost mobile devices while maintaining the lightweight benefits of existing AD practice
  • We have only lightly explored the space of lenses, but already, users of the current system were enthusiastic about using Affinity Lens in their current AD-related work tasks
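The system's core idea is binding each physical sticky note to a data row via a printed ArUco marker: once the camera detects a marker's ID, the app can look up that note's underlying data and render an AR overlay. A minimal sketch of that binding, with hypothetical field names loosely modeled on the food-choices survey data used in the paper (the `summary_lens` helper and `NOTE_DATA` table are our illustration, not the authors' implementation; real marker IDs would come from camera-based detection, e.g. OpenCV's aruco module or js-aruco in the browser):

```python
from statistics import mean

# Hypothetical survey rows keyed by the ArUco marker ID printed on each note.
NOTE_DATA = {
    0: {"text": "I skip breakfast on busy days",   "gpa": 3.2, "eats_out_per_week": 4},
    1: {"text": "I cook most meals at home",       "gpa": 3.8, "eats_out_per_week": 1},
    2: {"text": "Coffee counts as a meal, right?", "gpa": 2.9, "eats_out_per_week": 6},
}

def summary_lens(visible_marker_ids, field):
    """Given marker IDs detected in the camera frame, compute an overlay
    summary statistic (here, the mean) of a numeric field for those notes."""
    rows = [NOTE_DATA[m] for m in visible_marker_ids if m in NOTE_DATA]
    if not rows:
        return None
    return round(mean(r[field] for r in rows), 2)

# Summarize the notes currently in view, e.g. one cluster on the wall.
print(summary_lens([0, 1, 2], "eats_out_per_week"))  # prints 3.67
```

Because the note-to-row binding is just a lookup, different lenses (counts, means, filters, highlights) can share the same detection step and differ only in how they summarize the retrieved rows.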
Methods
  • The probe sessions allowed the authors to identify key tasks for data assistance
  • These were used to drive many of Affinity Lens features.
  • Two of the five sessions that began clustering using data were less successful in completing tasks.
  • These participants took much longer to analyze the text within each cluster and to interpret how the text and data fit together as a whole.
  • Though it would be relatively easy to implement, Affinity Lens does not, for example, suggest initial clusters
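The paper notes that suggesting initial clusters would be relatively easy to add. One lightweight way to do this (our sketch, not a feature of Affinity Lens) is greedy grouping of note texts by word-set Jaccard similarity, with the similarity threshold as a tunable assumption:

```python
def jaccard(a, b):
    """Word-set Jaccard similarity between two note texts."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def suggest_clusters(notes, threshold=0.2):
    """Greedy single-pass clustering: each note joins the first existing
    cluster whose seed (first) note is similar enough, else starts a new one."""
    clusters = []
    for note in notes:
        for cluster in clusters:
            if jaccard(note, cluster[0]) >= threshold:
                cluster.append(note)
                break
        else:
            clusters.append([note])
    return clusters

notes = [
    "I usually skip breakfast",
    "I skip breakfast when classes start early",
    "Cooking at home saves money",
    "I save money by cooking at home",
]
for cluster in suggest_clusters(notes):
    print(cluster)
```

Such suggestions would only seed the diagram; consistent with the paper's design principle of not disrupting the primary diagramming practice, users would still rearrange notes physically.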
Results
  • To evaluate Affinity Lens, the authors conducted two different in-lab AD studies.
  • The first was a controlled study in which the authors determined whether end-users could effectively generate data insights using Affinity Lens.
  • In the second study, which was open-ended, the authors aimed to evaluate Affinity Lens in a realistic AD workflow.
  • The authors conducted three 90-minute sessions with four HCI design students (P1–P4) and two UX professionals (P5–P6).
  • To encourage discussion between participants, the authors provided only a single Android mobile device (5.5 inches, 1440 × 2560 pixels) with Affinity Lens running in the Chrome browser
Conclusion
  • Discussion and future work: There is clearly a need for integrated sensemaking from qualitative and quantitative data when conducting mixed-methods research.
  • Through Affinity Lens’s AR overlays, the authors demonstrated how data-assisted affinity diagramming (DAAD) can enrich the analysis experience of survey data, a typical use case within HCI research.
  • HCI work uses interaction logs, sensor streams, and multimedia content to understand end-user behavior and inform system design.
  • One can augment the text from think-aloud transcripts with interaction logs showing mouse-click data, or overlay raw video footage of actual task execution for multiple participants in parallel. Affinity diagrams are used throughout academic and business communities as part of the design process.
  • The authors have only lightly explored the space of lenses, but already, users of the current system were enthusiastic about using Affinity Lens in their current AD-related work tasks
Related work
  • Affinity diagramming (also known as the KJ Method) has been used extensively for over 50 years [42]. AD supports organizing and making sense of unstructured qualitative data through a bottom-up process. A schema is developed by individuals, or groups, who arrange and cluster paper notes based on similarity of content, i.e., affinity. Because of its wide use, several projects have worked to address the shortcomings of basic ‘pen-and-paper’ use. These have centered around several areas including remote collaboration, cluster-creation assistance, explicit and implicit search mechanisms, general visual analytics systems, and systems to bridge digital and paper documents. We briefly touch upon each area to set the context for the Affinity Lens project.
Contributions
  • Proposes Affinity Lens, a mobile-based augmented reality application for Data-Assisted Affinity Diagramming
  • Develops design principles for data-assisted AD and an initial collection of lenses
  • Finds that Affinity Lens supports easy switching between qualitative and quantitative ‘views’ of data, without surrendering the lightweight benefits of existing AD practice
  • Finds that in many cases analysis involved survey data, sensor data, and interaction logs
  • Identifies three main concerns: the affordances of physical notes should be maintained, additional data and insights should be easy to retrieve, and data should be available just-in-time, without disrupting the primary diagramming practice
References
  • Bilal Alsallakh, Luana Micallef, Wolfgang Aigner, Helwig Hauser, Silvia Miksch, and Peter Rodgers. 2016. The State-of-the-Art of Set Visualization. In Computer Graphics Forum, Vol. 35. Wiley Online Library, 234–260.
  • Christopher Andrews, Alex Endert, Beth Yost, and Chris North. 2011. Information visualization on large, high-resolution displays: Issues, challenges, and opportunities. Information Visualization 10, 4 (2011), 341–355.
  • Sumit Basu, Danyel Fisher, Steven M Drucker, and Hao Lu. 2010. Assisting Users with Clustering Tasks by Combining Metric Learning and Classification. In AAAI.
  • Patrick Baudisch and Ruth Rosenholtz. 2003. Halo: a technique for visualizing off-screen objects. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 481–488.
  • Eric A Bier, Maureen C Stone, Ken Pier, William Buxton, and Tony D DeRose. 1993. Toolglass and magic lenses: the see-through interface. In Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques. ACM, 73–80.
  • Erin Brady, Meredith Ringel Morris, Yu Zhong, Samuel White, and Jeffrey P Bigham. 2013. Visual challenges in the everyday lives of blind people. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2117–2126.
  • Senthil Chandrasegaran, Sriram Karthik Badam, Lorraine Kisselburgh, Karthik Ramani, and Niklas Elmqvist. 2017. Integrating visual analytics support for grounded theory practice in qualitative text analysis. In Computer Graphics Forum, Vol. 36. Wiley Online Library, 201–212.
  • Mangoslab Co. Nemonic Mini Printer. http://www.mangoslab.com/n/nemonic/?lang=en
  • Intel Corporation. 2018. OpenCV Library. https://docs.opencv.org/3.4.1/index.html
  • Yanqing Cui, Jari Kangas, Jukka Holm, and Guido Grassel. 2013. Front-camera video recordings as emotion responses to mobile photos shared within close-knit groups. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 981–990.
  • Douglass R Cutting, David R Karger, Jan O Pedersen, and John W Tukey. 2017. Scatter/gather: A cluster-based approach to browsing large document collections. In ACM SIGIR Forum, Vol. 51. ACM, 148–159.
  • David Dearman and Khai N Truong. 2010. Why users of yahoo!: answers do not answer questions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 329–332.
  • Marie Desjardins, James MacGlashan, and Julia Ferraioli. 2007. Interactive visual clustering. In Proceedings of the 12th International Conference on Intelligent User Interfaces. ACM, 361–364.
  • Steven P. Dow, Alana Glassco, Jonathan Kass, Melissa Schwarz, Daniel L. Schwartz, and Scott R. Klemmer. 2012. Parallel Prototyping Leads to Better Design Results, More Divergence, and Increased Self-efficacy. Springer Berlin Heidelberg, Berlin, Heidelberg, 127–153. https://doi.org/10.1007/978-3-642-21643-5_8
  • Steven M Drucker, Danyel Fisher, and Sumit Basu. 2011. Helping users sort faster with adaptive machine learning recommendations. In IFIP Conference on Human-Computer Interaction. Springer, 187–203.
  • Susan Dumais, Edward Cutrell, Raman Sarin, and Eric Horvitz. 2004. Implicit Queries (IQ) for Contextualized Search. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR ’04). ACM, New York, NY, USA, 594–594. https://doi.org/10.1145/1008992.1009137
  • Johannes Fuchs, Roman Rädle, Dominik Sacha, Fabian Fischer, and Andreas Stoffel. 2013. Collaborative data analysis with smart tangible devices. In IS&T/SPIE Electronic Imaging. International Society for Optics and Photonics, 90170C–90170C.
  • Sergio Garrido-Jurado, Rafael Muñoz-Salinas, Francisco José Madrid-Cuevas, and Manuel Jesús Marín-Jiménez. 2014. Automatic generation and detection of highly reliable fiducial markers under occlusion. Pattern Recognition 47, 6 (2014), 2280–2292.
  • Florian Geyer, Ulrike Pfeil, Jochen Budzinski, Anita Höchtl, and Harald Reiterer. 2011. AffinityTable - a hybrid surface for supporting affinity diagramming. In IFIP Conference on Human-Computer Interaction. Springer, 477–484.
  • Gunnar Harboe and Elaine M Huang. 2015. Real-world affinity diagramming practices: Bridging the paper-digital gap. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems. ACM, 95–104.
  • Gunnar Harboe, Crysta J Metcalf, Frank Bentley, Joe Tullio, Noel Massey, and Guy Romano. 2008. Ambient social tv: drawing people into a shared experience. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 1–10.
  • Gunnar Harboe, Jonas Minke, Ioana Ilea, and Elaine M. Huang. 2012. Computer Support for Collaborative Data Analysis: Augmenting Paper Affinity Diagrams. In Proceedings of the ACM 2012 Conference on Computer Supported Cooperative Work (CSCW ’12). ACM, New York, NY, USA, 1179–1182. https://doi.org/10.1145/2145204.2145379
  • Chris Harrison, John Horstman, Gary Hsieh, and Scott Hudson. 2012. Unlocking the expressivity of point lights. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 1683–1692.
  • Rex Hartson and Pardha S Pyla. 2012. The UX Book: Process and guidelines for ensuring a quality user experience. Elsevier.
  • Elaine M Huang, Gunnar Harboe, Joe Tullio, Ashley Novak, Noel Massey, Crysta J Metcalf, and Guy Romano. 2009. Of social television comes home: a field study of communication choices and practices in tv-based text and voice chat. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 585–594.
  • Elaine M Huang and Khai N Truong. 2008. Breaking the disposable technology paradigm: opportunities for sustainable interaction design for mobile phones. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 323–332.
  • Petra Isenberg and Danyel Fisher. 2009. Collaborative Brushing and Linking for Co-located Visual Analytics of Document Collections. In Computer Graphics Forum, Vol. 28. Wiley Online Library, 1031–1038.
  • Hiroshi Ishii and Brygg Ullmer. 1997. Tangible bits: towards seamless interfaces between people, bits and atoms. In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems. ACM, 234–241.
  • Robert JK Jacob, Hiroshi Ishii, Gian Pangaro, and James Patten. 2002. A tangible interface for organizing information using a grid. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 339–346.
  • Seokhee Jeon, Jane Hwang, Gerard J Kim, and Mark Billinghurst. 2006. Interaction techniques in large display environments using hand-held devices. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology. ACM, 100–103.
  • Tero Jokela and Andrés Lucero. 2013. A comparative evaluation of touch-based methods to bind mobile devices for collaborative interactions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 3355–3364.
  • William P Jones and Susan T Dumais. 1986. The spatial metaphor for user interfaces: experimental tests of reference by location versus name. ACM Transactions on Information Systems (TOIS) 4, 1 (1986), 42–63.
  • Scott Klemmer, Mark W Newman, and Raecine Sapien. 2000. The designer’s outpost: a task-centered tangible interface for web site information design. In CHI’00 Extended Abstracts on Human Factors in Computing Systems. ACM, 333–334.
  • Beth M Lange, Mark A Jones, and James L Meyers. 1998. Insight lab: an immersive team environment linking paper, displays, and data. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM Press/Addison-Wesley Publishing Co., 550–557.
  • Hanseung Lee, Jaeyeon Kihm, Jaegul Choo, John Stasko, and Haesun Park. 2012. iVisClustering: An interactive visual document clustering via topic modeling. In Computer Graphics Forum, Vol. 31. Wiley Online Library, 1155–1164.
  • Zhicheng Liu, Bernard Kerr, Mira Dontcheva, Justin Grover, Matthew Hoffman, and Alan Wilson. 2017. CoreFlow: Extracting and Visualizing Branching Patterns from Event Sequences. In Computer Graphics Forum, Vol. 36. Wiley Online Library, 527–538.
  • Thomas W. Malone. 1983. How Do People Organize Their Desks?: Implications for the Design of Office Information Systems. ACM Trans. Inf. Syst. 1, 1 (Jan. 1983), 99–112. https://doi.org/10.1145/357423.357430
  • Juan Mellado. 2018. ArUco JavaScript. https://github.com/jcmellado/js-aruco
  • Thomas P Moran, Eric Saund, William Van Melle, Anuj U Gujar, Kenneth P Fishkin, and Beverly L Harrison. 1999. Design and technology for Collaborage: collaborative collages of information on physical walls. In Proceedings of the 12th Annual ACM Symposium on User Interface Software and Technology. ACM, 197–206.
  • Bora Pajo. 2017. Food choices: College students’ food and cooking preferences. https://www.kaggle.com/borapajo/food-choices.
  • Peter Pirolli and Stuart Card. 2005. The sensemaking process and leverage points for analyst technology as identified through cognitive task analysis. In Proceedings of International Conference on Intelligence Analysis, Vol. 5. 2–4.
  • Raymond Scupin. 1997. The KJ method: A technique for analyzing data derived from Japanese ethnology. Human Organization 56, 2 (1997), 233–237.
  • John Stasko, Carsten Görg, and Zhicheng Liu. 2008. Jigsaw: supporting investigative analysis through interactive visualization. Information Visualization 7, 2 (2008), 118–132.
  • Drew Steedly, Chris Pal, and Richard Szeliski. 2005. Efficiently Registering Video into Panoramic Mosaics. In Proceedings of the Tenth IEEE International Conference on Computer Vision - Volume 2 (ICCV ’05). IEEE Computer Society, Washington, DC, USA, 1300–1307. https://doi.org/10.1109/ICCV.2005.86
  • Edward Tse, Saul Greenberg, Chia Shen, Clifton Forlines, and Ryo Kodama. 2008. Exploring true multi-user multimodal interaction over a digital table. In Proceedings of the 7th ACM Conference on Designing Interactive Systems. ACM, 109–118.
  • William Widjaja, Keito Yoshii, Kiyokazu Haga, and Makoto Takahashi. 2013. Discusys: Multiple user real-time digital sticky-note affinity-diagram brainstorming system. Procedia Computer Science 22 (2013), 113–122.
  • William Wright, David Schroh, Pascale Proulx, Alex Skaburskis, and Brian Cort. 2006. The Sandbox for analysis: concepts and methods. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 801–810.
  • Jun Xiao and Jian Fan. 2009. PrintMarmoset: redesigning the print button for sustainability. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 109–112.
Best Paper of CHI, 2019