Pinpointing: Precise Head- and Eye-Based Target Selection for Augmented Reality

CHI, 2018.

Cited by: 70
Keywords:
virtual reality, gaze interaction, eye gaze, head movement, current state

Abstract:

Head and eye movement can be leveraged to improve the user's interaction repertoire for wearable displays. Head movements are deliberate and accurate, and provide the current state-of-the-art pointing technique. Eye gaze can potentially be faster and more ergonomic, but suffers from low accuracy due to calibration errors and drift of ...

Introduction
  • Recently available head-worn Augmented Reality (AR) devices will become useful for mobile workers in many practical applications, such as controlling networks of smart objects [15], situated analytics of sensor data [14], or in-situ editing of CAD or architectural models [36].
  • For users to be mobile and productive, it is important to design interaction techniques that allow precise selection and manipulation of virtual objects, without bulky input devices.
  • This paper explores Pinpointing: multimodal head and eye gaze pointing techniques for wearable AR (Figure 1).
  • The authors build on prior work by adapting multimodal pointing refinement techniques for wearable AR, combining gaze with hand gestures, handheld devices and head movement (a minimal sketch of this select-then-refine pattern follows this list).
  • The authors further discuss the implications of these results for interface designers, and potential applications of Pinpointing techniques.
  • The authors demonstrate two example implementations for precise menu selection and online improvement of gaze calibration
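The bullets above outline Pinpointing at a high level. As a concrete illustration, the sketch below shows the shared select-then-refine pattern the techniques build on: coarse pointing with eye gaze or the head ray, an explicit trigger, then small scaled corrections from a secondary modality before the selection is committed. This is a minimal sketch, not the authors' implementation; the class name, the gain value, and the argument conventions are assumptions made only for illustration.

```python
# Minimal sketch of the two-stage Pinpointing pattern (illustrative only).
# Cursor coordinates are treated as degrees of visual angle in 2D.
from dataclasses import dataclass
from enum import Enum, auto


class Phase(Enum):
    COARSE = auto()   # cursor follows the primary pointer (eye gaze or head ray)
    REFINE = auto()   # cursor is nudged by the secondary refinement modality


@dataclass
class PinpointingCursor:
    gain: float = 0.3          # assumed refinement scaling, not a value from the paper
    phase: Phase = Phase.COARSE
    x: float = 0.0
    y: float = 0.0

    def update(self, primary_xy, refine_dxdy, trigger_pressed, trigger_released):
        """Advance one frame; returns the selected point when a selection is committed.

        primary_xy:   raw gaze or head-pointing estimate (x, y)
        refine_dxdy:  per-frame delta from the refinement modality
                      (handheld device, hand gesture, or scaled head motion)
        """
        if self.phase is Phase.COARSE:
            self.x, self.y = primary_xy          # coarse pointing: follow gaze/head
            if trigger_pressed:                  # explicit trigger freezes the estimate
                self.phase = Phase.REFINE
            return None
        # Refinement phase: small, scaled corrections from the secondary modality.
        self.x += self.gain * refine_dxdy[0]
        self.y += self.gain * refine_dxdy[1]
        if trigger_released:                     # releasing the trigger commits the selection
            self.phase = Phase.COARSE
            return (self.x, self.y)
        return None
```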
Highlights
  • Recently available head-worn Augmented Reality (AR) devices will become useful for mobile workers in many practical applications, such as controlling networks of smart objects [15], situated analytics of sensor data [14], or in-situ editing of CAD or architectural models [36]
  • Although we studied the refinement techniques with one particular Augmented Reality platform, the interaction techniques can be ported to other wearable Augmented Reality and virtual reality systems
  • In summary, this work has taken a close look at a variety of multimodal techniques for precision target selection in Augmented Reality
  • We investigated both eye gaze and head pointing combined with refinement provided by a handheld device, hand gesture input and scaled head motion
  • Confirming previous work, eye gaze input alone is faster than head pointing, but head pointing allows greater targeting accuracy
  • We further demonstrated two applications for Pinpointing, compact menu selection and online correction of eye gaze calibration
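One of the two demonstrated applications, online correction of eye gaze calibration, lends itself to a short sketch: every confirmed, refined selection reveals the offset between the raw gaze estimate and the point the user actually meant, and that offset can feed a running correction. The exponential smoothing below and its factor are assumptions for illustration, not the authors' exact method.

```python
# Illustrative sketch of online gaze-calibration correction (assumed approach).
class GazeDriftCorrector:
    def __init__(self, smoothing: float = 0.2):
        self.smoothing = smoothing      # assumed smoothing factor
        self.offset = (0.0, 0.0)        # current correction, degrees

    def observe_selection(self, raw_gaze, refined_target):
        """Fold one confirmed, refined selection into the drift estimate."""
        ex = refined_target[0] - raw_gaze[0]
        ey = refined_target[1] - raw_gaze[1]
        ox, oy = self.offset
        # Exponential smoothing keeps single noisy samples from dominating.
        self.offset = (ox + self.smoothing * (ex - ox),
                       oy + self.smoothing * (ey - oy))

    def correct(self, raw_gaze):
        """Apply the current correction to a raw gaze sample."""
        return (raw_gaze[0] + self.offset[0], raw_gaze[1] + self.offset[1])
```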
Methods
  • Implicit selections are made without any conscious effort by the user, whereas explicit selections require a deliberate user action.
  • While mechanisms that require no additional input mode have been studied (R3), such as head tilt [57] and dwell [28], the authors instead use simple yet reliable methods that provide fast and deliberate interaction (R5).
  • The authors' implementations use two simple triggers, a button click on a small device and a finger gesture, both of which integrate cleanly with the refinement input modes.
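To illustrate how two rather different triggers can integrate cleanly with the same refinement loop, the sketch below hides a handheld button and a pinch-style finger gesture behind one tiny interface, so the selection logic only ever polls is_down(). The class names, the hypothetical device and hand-tracker handles, and the pinch threshold are assumptions, not APIs from the paper or from any particular SDK.

```python
# Illustrative trigger abstraction (hypothetical handles, not a real SDK).
from typing import Optional, Protocol


class Trigger(Protocol):
    def is_down(self) -> bool: ...


class DeviceButtonTrigger:
    """Physical button on a small handheld device."""
    def __init__(self, device):
        self.device = device                    # hypothetical device handle
    def is_down(self) -> bool:
        return self.device.button_pressed()


class PinchGestureTrigger:
    """Finger pinch recognized by the headset's hand tracker."""
    def __init__(self, hand_tracker, threshold_m: float = 0.02):
        self.hand_tracker = hand_tracker        # hypothetical hand-tracker handle
        self.threshold_m = threshold_m          # assumed pinch-distance threshold (metres)
    def is_down(self) -> bool:
        d: Optional[float] = self.hand_tracker.thumb_index_distance()
        return d is not None and d < self.threshold_m
```

Because both triggers satisfy the same Trigger protocol, either one can be paired with any of the refinement modes without changing the selection loop.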
Results
  • The primary results are summarized in Figure 4 and Figure 5.
  • The authors found that Head+Device and Head+Head refinements did not increase the task load compared to Head only.
  • Head+Device and Head+Head were subjectively preferred over Head only.
  • The device-gyro refinement technique introduced by the authors (Eye+Device) performed well, providing slightly better accuracy than Eye+Head and slightly faster selection than Eye+Gesture.
  • It was subjectively preferred over Eye+Head and Eye+Gesture and required lower perceived task load.
Conclusion
  • The accuracy of current state-of-the-art head pointing is not yet at the level required for precise selections.
  • The authors showed in this study that refinement techniques can improve head pointing accuracy by a factor of three, in particular Head+Head, which provided the greatest precision (see the sketch after this list)
  • This technique allows interaction with very small objects such as small menu items or data points. In summary, this work has taken a close look at a variety of multimodal techniques for precision target selection in AR.
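Head+Head refines the frozen coarse head-pointing estimate with further head motion that is scaled down before it moves the cursor, which is what makes very small menu items or data points reachable. The mapping below is a minimal sketch under an assumed gain of 0.25; the paper reports roughly a threefold accuracy improvement, but that figure does not pin down this particular gain value.

```python
# Illustrative scaled head-motion (Head+Head) refinement mapping (assumed gain).
def refine_with_scaled_head_motion(cursor_xy, prev_head_dir, head_dir, gain=0.25):
    """Nudge the cursor by the change in head yaw/pitch (degrees), scaled by `gain`.

    cursor_xy:     current (x, y) cursor position, degrees of visual angle
    prev_head_dir: (yaw, pitch) of the head at the previous frame, degrees
    head_dir:      (yaw, pitch) of the head at the current frame, degrees
    gain:          assumed scale factor; smaller values give finer control
    """
    dyaw = head_dir[0] - prev_head_dir[0]
    dpitch = head_dir[1] - prev_head_dir[1]
    return (cursor_xy[0] + gain * dyaw, cursor_xy[1] + gain * dpitch)


# Example: a 2-degree head turn moves the cursor only 0.5 degrees.
print(refine_with_scaled_head_motion((10.0, 5.0), (0.0, 0.0), (2.0, 0.0)))  # (10.5, 5.0)
```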
Tables
  • Table 1: Refinement technique mechanisms and feedback
  • Table 2: Target sizes for each technique
Related work
  • Related Work on Gaze-Based Interaction

    Our user study investigates head- and eye gaze-based interaction techniques coupled with different refinement techniques. We review the related work below.

    Head- and Eye-Based Target Selection. Our study explores eye gaze as an input method, as well as head pointing, which can provide a proxy for gaze, but has become a separate method in its own right.

    Head-pointing. Together with hand-based interaction techniques, head-based interaction has been actively investigated in the fields of 3D user interfaces, virtual reality (VR) [6,11], desktop GUIs [5,29], assistive interfaces [37], and wearable computing [7]. One of the earliest works on interaction techniques for virtual environments [40] included head-directed navigation and object selection. Recently, head-direction-based pointing has been widely adopted as a standard way of pointing at virtual objects without using hands or hand-held pointing devices (e.g., Oculus Rift [44] and Microsoft HoloLens [39]). Atienza et al. [1] further explored head-based interaction techniques in a VR environment. With wearable eye-tracking devices becoming affordable to use in combination with head-worn displays (e.g., Pupil Labs [32,51], FOVE [19]), researchers are increasingly exploring wearable eye gaze input [50,55].
Funding
  • The first author was supported by a Jorma Ollila grant from the Nokia Foundation and a grant from the Academy of Finland (grant number 311090)
References
  • Rowel Atienza, Ryan Blonna, Maria Isabel Saludares, Joel Casimiro, and Vivencio Fuentes. 2016. Interaction techniques using head gaze for virtual reality. In Proceedings - 2016 IEEE Region 10 Symposium, TENSYMP 2016, 110–114. https://doi.org/10.1109/TENCONSpring.2016.7519387
  • Mihai Bace, Teemu Leppänen, David Gil De Gomez, and Argenis Ramirez Gomez. 2016. ubiGaze: Ubiquitous Augmented Reality Messaging Using Gaze Gestures. In SIGGRAPH ASIA, Article no. 11.
  • Richard Bates and Howell Istance. 2003. Why are eye mice unpopular? A detailed comparison of head and eye controlled assistive technology pointing devices. Universal Access in the Information Society 2, 3: 280–290. https://doi.org/10.1007/s10209-003-0053-y
  • Ana M. Bernardos, David Gómez, and José R. Casar. 2016. A Comparison of Head Pose and Deictic Pointing Interaction Methods for Smart Environments. International Journal of Human-Computer Interaction 32, 4: 325–351. https://doi.org/10.1080/10447318.2016.1142054
  • Martin Bichsel and Alex Pentland. 1993. Automatic interpretation of human head movements.
  • Doug Bowman, Ernst Kruijff, Joseph J. LaViola, and Ivan Poupyrev. 2004. 3D User Interfaces: Theory and Practice. Addison Wesley Longman Publishing Co., Redwood City.
  • Stephen Brewster, Joanna Lumsden, Marek Bell, Malcolm Hall, and Stuart Tasker. 2003. Multimodal “eyes-free” interaction techniques for wearable devices. Proceedings of the conference on Human factors in computing systems - CHI ’03, 5: 473. https://doi.org/10.1145/642611.642694
  • Benedetta Cesqui, Rolf van de Langenberg, Francesco Lacquaniti, and Andrea D’Avella. 2013. A novel method for measuring gaze orientation in space in unrestrained head conditions. Journal of Vision 13, 8: 28:1-22. https://doi.org/10.1167/13.8.28
  • Ishan Chatterjee, Robert Xiao, and Chris Harrison. 2015. Gaze + Gesture: Expressive, Precise and Targeted Free-Space Interactions. In Proceedings of the 2015 ACM on International Conference on Multimodal Interaction, 131–138. https://doi.org/10.1145/2818346.2820752
  • Ngip Khean Chuan and Ashok Sivaji. 2012. Combining eye gaze and hand tracking for pointer control in HCI: Developing a more robust and accurate interaction system for pointer positioning and clicking. In CHUSER 2012 - 2012 IEEE Colloquium on Humanities, Science and Engineering Research, 172–176. https://doi.org/10.1109/CHUSER.2012.6504305
  • Rory M.S. Clifford, Nikita Mae B. Tuanquin, and Robert W. Lindeman. 2017. Jedi ForceExtension: Telekinesis as a Virtual Reality interaction metaphor. In 2017 IEEE Symposium on 3D User Interfaces, 3DUI 2017 - Proceedings, 239–240. https://doi.org/10.1109/3DUI.2017.7893360
  • Nathan Cournia, John D. Smith, and Andrew T. Duchowski. 2003. Gaze- vs. Hand-based Pointing in Virtual Environments. In Proc. CHI ’03 Extended Abstracts on Human Factors in Computer Systems (CHI ’03), 772–773. https://doi.org/10.1145/765978.765982
  • Heiko Drewes and Albrecht Schmidt. 2007. Interacting with the computer using gaze gestures. In Proc. of the International Conference on Human-Computer Interaction ’07, 475–488. https://doi.org/10.1007/978-3-540-74800-7_43
  • Neven A M ElSayed, Bruce H. Thomas, Kim Marriott, Julia Piantadosi, and Ross T. Smith. 2016. Situated Analytics: Demonstrating immersive analytical tools with Augmented Reality. Journal of Visual Languages and Computing 36, C: 13–23. https://doi.org/10.1016/j.jvlc.2016.07.006
  • Barrett Ens, Fraser Anderson, Tovi Grossman, Michelle Annett, Pourang Irani, and George Fitzmaurice. 2017. Ivy: Exploring Spatially Situated Visual Programming for Authoring and Understanding Intelligent Environments. Proceedings of the 43rd Graphics Interface Conference: 156–162. https://doi.org/10.20380/gi2017.20
  • Barrett M. Ens, Rory Finnegan, and Pourang P. Irani. 2014. The personal cockpit: a spatial interface for effective task switching on head-worn displays. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 3171–3180. https://doi.org/10.1145/2556288.2557058
  • Augusto Esteves, David Verweij, Liza Suraiya, Rasal Islam, Youryang Lee, and Ian Oakley. 2017. SmoothMoves: Smooth Pursuits Head Movements for Augmented Reality. In Proceedings of the 30th Annual ACM Symposium on User Interface Software & Technology, 167–178. https://doi.org/10.1145/3126594.3126616
  • Anna Maria Feit, Shane Williams, Arturo Toledo, Ann Paradiso, Harish Kulkarni, Shaun Kane, and Meredith Ringel Morris. 2017. Toward Everyday Gaze Input: Accuracy and Precision of Eye Tracking and Implications for Design. CHI ’17 Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems: 1118–1130. https://doi.org/10.1145/3025453.3025599
  • FOVE. FOVE Eye Tracking Virtual Reality Headset. Retrieved September 19, 2017 from https://www.getfove.com/
  • Sven-Thomas Graupner and Sebastian Pannasch. 2014. Continuous Gaze Cursor Feedback in Various Tasks: Influence on Eye Movement Behavior, Task Performance and Subjective Distraction. Springer, Cham, 323–329. https://doi.org/10.1007/978-3-319-07857-1_57
  • Gyration. Gyration Air Mouse Input Devices. Retrieved September 18, 2017 from https://www.gyration.com/
  • Jeremy Hales, David Rozado, and Diako Mardanbegi. 2013. Interacting with Objects in the Environment by Gaze and Hand Gestures. In Proceedings of the 3rd International Workshop on Pervasive Eye Tracking and Mobile Eye-Based Interaction, 1–9.
  • Sandra G. Hart and Lowell E. Staveland. 1988. Development of NASA-TLX (Task Load Index): Results of empirical and theoretical research. Advances in Psychology 1: 139–183.
  • Valentin Heun, James Hobin, and Pattie Maes. 2013. Reality editor: programming smarter objects. Proceedings of the 2013 ACM conference on Pervasive and Ubiquitous Computing adjunct publication (UbiComp ’13 Adjunct): 307–310. https://doi.org/10.1145/2494091.2494185
  • Aulikki Hyrskykari, Howell Istance, and Stephen Vickers. 2012. Gaze gestures or dwell-based interaction? In Proceedings of the Symposium on Eye Tracking Research and Applications - ETRA ’12, 229–232. https://doi.org/10.1145/2168556.2168602
  • Howell Istance, Aulikki Hyrskykari, Lauri Immonen, Santtu Mansikkamaa, and Stephen Vickers. 2010. Designing gaze gestures for gaming: an Investigation of Performance. Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications - ETRA ’10 1, 212: 323. https://doi.org/10.1145/1743666.1743740
  • Rob Jacob and Sophie Stellmach. 2016. Interaction technologies: What you look at is what you get: Gaze-based user interfaces. Interactions 23, 5: 62–65. https://doi.org/10.1145/2978577
  • Robert J. K. Jacob. 1990. What you look at is what you get: eye movement-based interaction techniques. Proceedings of the SIGCHI conference on Human factors in computing systems Empowering people CHI ’90: 11–18. https://doi.org/10.1145/97243.97246
  • Richard J. Jagacinski and Donald L. Monk. 1985. Fitts’ Law in two dimensions with hand and head movements. Journal of motor behavior 17, 1: 77–95. https://doi.org/10.1080/00222895.1985.10735338
  • Shahram Jalaliniya, Diako Mardanbegi, and Thomas Pederson. 2015. MAGIC Pointing for Eyewear Computers. Proceedings of the 2015 ACM International Symposium on Wearable Computers ISWC ’15: 155–158. https://doi.org/10.1145/2802083.2802094
  • Shahram Jalaliniya, Diako Mardanbeigi, Thomas Pederson, and Dan Witzner Hansen. 2014. Head and eye movement as pointing modalities for eyewear computers. Proceedings - 11th International Conference on Wearable and Implantable Body Sensor Networks Workshops, BSN Workshops 2014: 50–53. https://doi.org/10.1109/BSN.Workshops.2014.14
  • Moritz Kassner, William Patera, and Andreas Bulling. 2014. Pupil: An Open Source Platform for Pervasive Eye Tracking and Mobile Gaze-based Interaction. In Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing Adjunct Publication - UbiComp ’14 Adjunct, 1151–1160. https://doi.org/10.1145/2638728.2641695
  • Mohamed Khamis, Axel Hoesl, Alexander Klimczak, Martin Reiss, Florian Alt, and Andreas Bulling. 2017. EyeScout: Active Eye Tracking for Position and Movement Independent Gaze Interaction with Large Public Displays. In UIST ’17 Proceedings of the 30th Annual ACM Symposium on User Interface Software and Technology, 155–166. https://doi.org/10.1145/3126594.3126630
  • Linnéa Larsson, Andrea Schwaller, Marcus Nyström, and Martin Stridh. 2016. Head movement compensation and multi-modal event detection in eye-tracking data for unconstrained head movements. Journal of Neuroscience Methods 274: 13–26.
  • Chiuhsiang Joe Lin, Sui Hua Ho, and Yan Jyun Chen. 2015. An investigation of pointing postures in a 3D stereoscopic environment. Applied Ergonomics 48: 154–163. https://doi.org/10.1016/j.apergo.2014.12.001
  • Alfredo Liverani, Giancarlo Amati, and Gianni Caligiana. 2016. A CAD-augmented Reality Integrated Environment for Assembly Sequence Check and Interactive Validation. Concurrent Engineering 12, 1: 67–77. https://doi.org/10.1177/1063293X04042469
  • Rainer Malkewitz. 1998. Head pointing and speech control as a hands-free interface to desktop computing. In Proceedings of the third international ACM conference on Assistive technologies (Proceeding Assets ’98), 182–188.
  • Diako Mardanbegi, Dan Witzner Hansen, and Thomas Pederson. 2012. Eye-based head gestures. In Proceedings of the Symposium on Eye Tracking Research and Applications, 139–146.
  • 40. Mark R Mine. 1995. Virtual environment interaction techniques. https://doi.org/10.1.1.38.1750
  • 41. Richard A. Monty and John W. Senders. 1976. Eye Movements and Psychological Processes. Lawrence Erlbaum Associates, Hillsdale, NJ.
  • 42. Mathieu Nancel, Olivier Chapuis, Emmanuel Pietriga, Xing-Dong Yang, Pourang P. Irani, and Michel Beaudouin-Lafon. 2013. High-precision pointing on large wall displays using small handheld devices. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI ’13, 831. https://doi.org/10.1145/2470654.2470773
  • 43. Tomi Nukarinen, Jari Kangas, Oleg Špakov, Poika Isokoski, Deepak Akkil, Jussi Rantala, and Roope Raisamo. 2016. Evaluation of HeadTurn - An Interaction Technique Using the Gaze and Head Turns. Proceedings of the 9th Nordic Conference on Human-Computer Interaction - NordiCHI ’16: 1–8. https://doi.org/10.1145/2971485.2971490
  • 44. Oculus. Oculus Rift. Retrieved September 19, 2017 from https://www.oculus.com/
  • 45. Hyung Min Park, Seok Han Lee, and Jong Soo Choi. 2008. Wearable augmented reality system using gaze interaction. In Proceedings - 7th IEEE International Symposium on Mixed and Augmented Reality 2008, ISMAR 2008, 175–176. https://doi.org/10.1109/ISMAR.2008.4637353
  • 46. Ken Pfeuffer, Jason Alexander, Ming Ki Chong, and Hans Gellersen. 2014. Gaze-touch: Combining Gaze with Multi-touch for Interaction on the Same Surface. In Proceedings of the 27th annual ACM symposium on User interface software and technology - UIST ’14, 509–518. https://doi.org/10.1145/2642918.2647397
  • 47. Ken Pfeuffer, Jason Alexander, Ming Ki Chong, Yanxia Zhang, and Hans Gellersen. 2015. Gaze-Shifting: Direct-Indirect Input with Pen and Touch Modulated by Gaze. Proceedings of the 28th annual ACM symposium on User interface software and technology - UIST ’15: 373–383. https://doi.org/10.1145/2807442.2807460
  • 48. Ken Pfeuffer, Jason Alexander, and Hans Gellersen. 2015. Gaze+touch vs. touch: what’s the trade-off when using gaze to extend touch to remote displays? In Human-Computer Interaction – INTERACT 2015, 349–367.
  • 49. Ken Pfeuffer and Hans Gellersen. 2016. Gaze and Touch Interaction on Tablets. Proceedings of the 29th Annual Symposium on User Interface Software and Technology - UIST ’16: 301–311. https://doi.org/10.1145/2984511.2984514
  • 50. Thammathip Piumsomboon, Gun Lee, Robert W. Lindeman, and Mark Billinghurst. 2017. Exploring natural eye-gaze-based interaction for immersive virtual reality. 2017 IEEE Symposium on 3D User Interfaces, 3DUI 2017 - Proceedings: 36–39. https://doi.org/10.1109/3DUI.2017.7893315
  • 51. Pupil Labs. Pupil Labs. Retrieved September 19, 2017 from https://pupil-labs.com/
  • 52. Yuanyuan Qian and Robert J Teather. 2017. The eyes don’t have It: An empirical comparison of head-based and eye-based selection in virtual reality. In Proceedings of the ACM Symposium on Spatial User Interaction, 91–98.
  • 53. Marcos Serrano, Barrett Ens, Xing-Dong Yang, and Pourang Irani. 2015. Gluey: Developing a Head-Worn Display Interface to Unify the Interaction Experience in Distributed Display Environments. In Proceedings of the 17th International Conference on Human-Computer Interaction with Mobile Devices and Services - MobileHCI ’15, 161–171. https://doi.org/10.1145/2785830.2785838
  • 54. Linda E. Sibert and Robert J. K. Jacob. 2000. Evaluation of eye gaze interaction. Proceedings of the SIGCHI conference on Human factors in computing systems - CHI ’00: 281–288. https://doi.org/10.1145/332040.332445
  • 55. Nikolaos Sidorakis, George Alex Koulieris, and Katerina Mania. 2015. Binocular eye-tracking for the control of a 3D immersive multimedia user interface. In 2015 IEEE 1st Workshop on Everyday Virtual Reality, WEVR 2015, 15–18. https://doi.org/10.1109/WEVR.2015.7151689
  • 56. Oleg Špakov, Poika Isokoski, and Päivi Majaranta. 2014. Look and Lean: Accurate Head-Assisted Eye Pointing. Proceedings of the ETRA conference 1, 212: 35–42. https://doi.org/10.1145/2578153.2578157
  • 57. Oleg Špakov and Päivi Majaranta. 2012. Enhanced Gaze Interaction Using Simple Head Gestures. In Proceedings of the 2012 ACM Conference on Ubiquitous Computing - UbiComp ’12, 705–710. https://doi.org/10.1145/2370216.2370369
  • 58. Sophie Stellmach and Raimund Dachselt. 2012. Look & Touch: Gaze-supported Target Acquisition. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, 2981–2990. https://doi.org/10.1145/2207676.2208709
  • 59. Sophie Stellmach and Raimund Dachselt. 2013. Still Looking: Investigating Seamless Gaze-supported Selection, Positioning, and Manipulation of Distant Targets. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems - CHI ’13, 285–294. https://doi.org/10.1145/2470654.2470695
  • 60. Yusuke Sugano and Andreas Bulling. 2015. Self-Calibrating Head-Mounted Eye Trackers Using Egocentric Visual Saliency. In Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (UIST ’15), 363–372. https://doi.org/10.1145/2807442.2807445
  • 61. Vildan Tanriverdi and Robert J. K. Jacob. 2000. Interacting with eye movements in virtual environments. Proceedings of the SIGCHI conference on Human factors in computing systems - CHI ’00 2, 1: 265–272. https://doi.org/10.1145/332040.332443
  • 62. Boris Velichkovsky, Andreas Sprenger, and Pieter Unema. 1997. Towards Gaze-Mediated Interaction: Collecting Solutions of the “Midas touch problem.” Proceedings of the International Conference on Human-Computer Interaction (INTERACT’97): 509–516. https://doi.org/10.1007/978-0-387-35175-9_77
  • 63. Eduardo Velloso, Markus Wirth, Christian Weichel, Augusto Esteves, and Hans Gellersen. 2016. AmbiGaze: Direct Control of Ambient Devices by Gaze. Proceedings of the 2016 ACM Conference on Designing Interactive Systems - DIS ’16: 812–817. https://doi.org/10.1145/2901790.2901867
  • 64. Mélodie Vidal, Andreas Bulling, and Hans Gellersen. 2013. Pursuits: Spontaneous interaction with displays based on smooth pursuit eye movement and moving targets. In Proceedings of the 2013 ACM International Joint Conference on Pervasive and Ubiquitous Computing, 439–448. https://doi.org/10.1145/2493432.2493477
  • 65. X. Glass. Retrieved September 19, 2017 from https://x.company/glass/
  • 66. Shumin Zhai, Carlos Morimoto, and Steven Ihde. 1999. Manual and gaze input cascaded (MAGIC) pointing. In Proceedings of the SIGCHI conference on Human factors in computing systems - CHI ’99, 246–253. https://doi.org/10.1145/302979.303053
Best Paper of CHI, 2018