Geppetto - Enabling Semantic Design of Expressive Robot Behaviors

CHI 2019, Paper 369.

Keywords:
expressive robots, robots, semantic design, semantic editing

Abstract:

Expressive robots are useful in many contexts, from industrial to entertainment applications. However, designing expressive robot behaviors requires editing a large number of unintuitive control parameters. We present an interactive, data-driven system that allows editing of these complex parameters in a semantic space. Our system combines…

Introduction
  • As robots become more prevalent in human environments, from factory floors to personal homes, enabling robots to express themselves can enhance and enrich our experiences and interactions with them.
  • A robotic arm that collaborates with human workers on a factory floor could communicate its confusion about a task, or alert human workers when needed, by moving in a specific manner.
  • Creating such expressive behaviors for robots is highly challenging [8].
  • Apart from the inherent task complexity and domain knowledge requirements, robot behavior design suffers from a lack of suitable design tools.
  • Existing animation tools such as Blender [7] and Maya [3] enable design with absolute human control but offer limited options for integration with physical hardware.
  • The authors' goal is to facilitate easy and intuitive design of expressive movements for robotic systems over a wide variety of applications, ranging from art to social interactions.
Highlights
  • As robots become more prevalent in human environments, from factory floors to personal homes, enabling robots to express themselves can enhance and enrich our experiences and interactions with them.
  • A robotic arm that collaborates with human workers on a factory floor could communicate its confusion about a task, or alert human workers when needed, by moving in a specific manner.
  • The perceptual quality of emotional expression in the user-created motion designs is evaluated using crowdsourcing, with the top and bottom 5 synthesized designs for each category included in the tournament (a minimal ranking sketch follows this list).
  • These top- and bottom-most synthesized designs were chosen based on their prior crowdsourcing scores.
  • Towards increasing the accessibility of robot behavior design, we presented a simulation-driven and crowd-powered system that enables semantic design of robot motions.
  • We hope that our work will lead to the development of additional tools that allow both novices and experts to create desirable robots and, more broadly, open the door to future investigations of mixed-initiative interfaces across all domains of design.
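The tournament above relies on pairwise crowd judgments of designs; the reference list points to TrueSkill [37] and Swiss-system ranking [12]. The sketch below is a hypothetical illustration (not the authors' implementation) of turning pairwise votes into a per-design ranking with the `trueskill` package; the design ids and votes are made up.

```python
# Hypothetical sketch: rank motion designs from pairwise crowd judgments.
# Assumes the `trueskill` package (http://trueskill.org/, reference [37]);
# design ids and votes are made up for illustration.
import trueskill

def rank_designs(design_ids, pairwise_votes):
    """pairwise_votes: iterable of (winner_id, loser_id) tuples from crowd workers."""
    ratings = {d: trueskill.Rating() for d in design_ids}  # default mu=25, sigma~8.33
    for winner, loser in pairwise_votes:
        # Update both skill estimates after each head-to-head comparison.
        ratings[winner], ratings[loser] = trueskill.rate_1vs1(ratings[winner], ratings[loser])
    # Rank by a conservative estimate: mean minus three standard deviations.
    return sorted(design_ids, key=lambda d: ratings[d].mu - 3 * ratings[d].sigma, reverse=True)

designs = ["happy_01", "happy_02", "sad_01"]  # made-up design ids
votes = [("happy_01", "sad_01"), ("happy_01", "happy_02"), ("happy_02", "sad_01")]
print(rank_designs(designs, votes))
```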
Methods
  • 12 participants (9 males, 20-35 years of age) were recruited. Participants were reimbursed $25 USD for their time.
Results
  • The perceptual quality of emotional expression in the user-created motion designs is evaluated using crowdsourcing, with the top and bottom 5 synthesized designs for each category included in the tournament.
  • The authors analyze the corresponding crowdsourcing scores using confidence intervals and effect sizes instead of null hypothesis significance testing [14] (a worked sketch follows this list).
  • This choice is motivated by growing concerns over such hypothesis testing for experimental results in various research fields [13, 17, 48].
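As an illustration of this estimation-based analysis style (not the authors' actual data or numbers), the sketch below computes a 95% confidence interval for a group mean and a Cohen's d effect size between two groups of crowd scores; the sample scores are made up.

```python
# Hypothetical sketch of estimation-based analysis (confidence intervals + effect size)
# used in place of null hypothesis significance testing. Scores below are made up.
import numpy as np
from scipy import stats

def mean_ci(samples, confidence=0.95):
    """Confidence interval for the mean using the t distribution."""
    samples = np.asarray(samples, dtype=float)
    mean = samples.mean()
    sem = stats.sem(samples)  # standard error of the mean
    half_width = sem * stats.t.ppf((1 + confidence) / 2, len(samples) - 1)
    return mean, (mean - half_width, mean + half_width)

def cohens_d(a, b):
    """Standardized mean difference with pooled standard deviation."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    pooled_var = ((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1)) / (len(a) + len(b) - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

user_designed = [0.72, 0.65, 0.81, 0.70, 0.77]  # made-up crowd scores
synthesized = [0.60, 0.55, 0.68, 0.63, 0.59]

print(mean_ci(user_designed))
print("Cohen's d:", cohens_d(user_designed, synthesized))
```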
Conclusion
  • Geppetto allows design space exploration and editing given a single high-level semantic goal (a minimal sketch of such goal-directed editing follows this list).
  • On-demand sampling at design time may enable Geppetto to provide guidance based on user preferences. Towards increasing the accessibility of robot behavior design, the authors presented a simulation-driven and crowd-powered system that enables semantic design of robot motions.
  • Despite the subjectivity of the task, the system enables an intuitive design experience with the help of data-driven guidance and design space exploration, as demonstrated by the user study.
  • The authors hope that the work will lead to the development of additional tools that allow both novices and experts to create desirable robots and, more broadly, open the door to future investigations of mixed-initiative interfaces across all domains of design.
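The paper's actual optimization procedure is not reproduced in this summary; the following is a hypothetical sketch of how a single semantic goal (e.g., "maximize perceived happiness") could drive a search over control parameters, assuming some learned predictor that maps a parameter vector to an attribute score. The function `toy_predictor` is a made-up stand-in for such a predictor.

```python
# Hypothetical sketch: random-search editing of control parameters toward a
# single semantic goal, e.g. maximizing a predicted "happiness" score.
# `predict_attribute` stands in for a learned parameter-to-attribute mapping.
import numpy as np

def semantic_edit(params, predict_attribute, n_iters=200, step=0.05, rng=None):
    """Locally perturb the current design and keep changes that raise the score."""
    if rng is None:
        rng = np.random.default_rng(0)
    best = np.asarray(params, float)
    best_score = predict_attribute(best)
    for _ in range(n_iters):
        candidate = np.clip(best + rng.normal(0.0, step, size=best.shape), 0.0, 1.0)
        score = predict_attribute(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

# Toy stand-in predictor: prefers parameters near an arbitrary target vector.
target = np.array([0.8, 0.2, 0.6, 0.4])
toy_predictor = lambda p: -float(np.linalg.norm(np.asarray(p) - target))

print(semantic_edit([0.5, 0.5, 0.5, 0.5], toy_predictor))
```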
Related work
  • This work builds upon prior work on semantic editing, crowd-powered editing, and robot motion design.

    Semantic Editing and Design Space Exploration

    Editing using semantic or context-specific attributes has been explored for many complex design domains such as 3D models [9, 63], images [28, 32, 44], and fonts [41]. Each of these approaches extracts relevant, human-understandable attributes for its design domain and learns a mapping between the design parameters and these attributes. With this mapping, they enable intuitive, attribute-based editing at design time. We wish to extend this methodology to the domain of robotics. Unlike the domains of 3D models and images, there is no existing large dataset of expressive robot motions. We therefore parameterize and synthesize a wide variety of such motions using a physics-based simulation, as sketched below.
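The specific learning model is not described in this summary, so the following is only a minimal, hypothetical sketch of the general recipe named above: sample control parameters, simulate motions, gather crowd attribute ratings, and fit a regressor that maps parameters to a semantic attribute. `simulate_motion` and `crowd_rating` are made-up stand-ins for the simulation and crowdsourcing steps.

```python
# Hypothetical sketch of the data-driven recipe: sample parameters -> simulate ->
# crowd-rate -> learn a parameters-to-attribute mapping usable for semantic editing.
# The "simulation" and "crowd rating" below are made-up stand-ins, not the paper's.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def simulate_motion(params):
    # Stand-in for the physics-based simulation: returns simple motion features.
    params = np.asarray(params, float)
    return np.array([params.mean(), params.std(), params.max() - params.min()])

def crowd_rating(motion_features):
    # Stand-in for a crowdsourced perceptual score in [0, 1].
    return float(np.clip(0.5 + motion_features[1] - motion_features[2] / 2, 0.0, 1.0))

def build_attribute_model(n_samples=500, n_params=8, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(0.0, 1.0, size=(n_samples, n_params))        # sampled control parameters
    y = np.array([crowd_rating(simulate_motion(p)) for p in X])  # perceptual labels
    return RandomForestRegressor(n_estimators=100, random_state=seed).fit(X, y)

model = build_attribute_model()
print(model.predict(np.full((1, 8), 0.5)))  # predicted attribute score for one design
```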
Contributions
  • Presents an interactive, data-driven system that allows editing of complex robot control parameters in a semantic space.
  • Demonstrates the system in the context of designing emotionally expressive behaviors.
  • Presents Geppetto, a simulation-driven robot motion design system that enables the design of expressive behaviors using high-level, semantic descriptions of behavior properties.
Reference
  • [1] Kaat Alaerts, Evelien Nackaerts, Pieter Meyns, Stephan P Swinnen, and Nicole Wenderoth. 2011. Action and emotion recognition from point light displays: an investigation of gender differences. PloS One 6, 6 (2011), e20989.
  • [2] Deepali Aneja, Alex Colburn, Gary Faigin, Linda Shapiro, and Barbara Mones. 2016. Modeling Stylized Character Expressions via Deep Learning. In Asian Conference on Computer Vision. Springer, 136–153.
  • [3] Autodesk. 2018. Autodesk Maya. https://www.autodesk.com/products/maya/overview.
  • [4] Connelly Barnes, David E Jacobs, Jason Sanders, Dan B Goldman, Szymon Rusinkiewicz, Adam Finkelstein, and Maneesh Agrawala. 2008. Video puppetry: a performative interface for cutout animation. In ACM Transactions on Graphics (TOG), Vol. 27. ACM, 124.
  • [5] Lyn Bartram and Ai Nakatani. 2010. What makes motion meaningful? Affective properties of abstract motion. In Image and Video Technology (PSIVT), 2010 Fourth Pacific-Rim Symposium on. IEEE, 468–474.
  • [6] Aude Billard, Sylvain Calinon, Ruediger Dillmann, and Stefan Schaal. 2008. Robot programming by demonstration. In Springer Handbook of Robotics. Springer, 1371–1394.
  • [7] Blender Foundation. 2018. Blender. https://www.blender.org/.
  • [8] Cynthia Breazeal, Atsuo Takanishi, and Tetsunori Kobayashi. 2008.
  • [9] Siddhartha Chaudhuri, Evangelos Kalogerakis, Stephen Giguere, and Thomas Funkhouser. 2013. Attribit: content creation with semantic attributes. In Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology. ACM, 193–202.
  • [10] Paul Christiano, Jan Leike, Tom B Brown, Miljan Martic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. arXiv preprint arXiv:1706.03741 (2017).
  • [11] Loïc Ciccone, Martin Guay, Maurizio Nitti, and Robert W Sumner. 2017. Authoring motion cycles. In Proceedings of the ACM SIGGRAPH/Eurographics Symposium on Computer Animation.
  • [12] László Csató. 2013. Ranking by pairwise comparisons for Swiss-system tournaments. Central European Journal of Operations Research 21, 4 (2013), 783–803.
  • [13] Geoff Cumming. 2014. The new statistics: Why and how. Psychological Science 25, 1 (2014), 7–29.
  • [14] Geoff Cumming and Sue Finch. 2005. Inference by eye: confidence intervals and how to read pictures of data. American Psychologist 60, 2 (2005), 170.
  • [15] Sébastien Dalibard, Nadia Magnenat-Thalmann, and Daniel Thalmann. 2012. Anthropomorphism of artificial agents: a comparative survey of expressive design and motion of virtual Characters and Social Robots. In Workshop on Autonomous Social Robots and Virtual Humans at the 25th Annual Conference on Computer Animation and Social Agents (CASA 2012).
  • [16] Disney. 2013. https://disneyparks.disney.go.com/blog/2013/08/
  • [17] Pierre Dragicevic, Fanny Chevalier, and Stephane Huot. 2014. Running an HCI experiment in multiple parallel universes. In CHI '14 Extended Abstracts on Human Factors in Computing Systems.
  • [18] Abhimanyu Dubey, Nikhil Naik, Devi Parikh, Ramesh Raskar, and César A Hidalgo. 2016. Deep learning the city: Quantifying urban perception at a global scale. In European Conference on Computer Vision.
  • [19] Magda Dubois, Josep-Arnau Claret, Luis Basañez, and Gentiane Venture. 2016. Influence of Emotional Motions in Human-Robot Interactions. In International Symposium on Experimental Robotics. Springer, 799–808.
  • [20] Paul Ekman and Wallace V Friesen. 1971. Constants across cultures in the face and emotion. Journal of Personality and Social Psychology 17, 2 (1971), 124.
  • [21] Victor Emeli. 2012. Robot learning through social media crowdsourcing. In Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on. IEEE.
  • [22] Vittorio Gallese, Christian Keysers, and Giacomo Rizzolatti. 2004. A unifying view of the basis of social cognition. Trends in Cognitive Sciences 8, 9 (2004), 396–403.
  • [23] Madeline Gannon. 2017. Human-Centered Interfaces for Autonomous
  • [24] Oliver Glauser, Wan-Chun Ma, Daniele Panozzo, Alec Jacobson, Otmar Hilliges, and Olga Sorkine-Hornung. 2016. Rig animation with a tangible and modular input device. ACM Transactions on Graphics (TOG) 35, 4 (2016), 144.
  • [25] John Harris and Ehud Sharlin. 2011. Exploring the affect of abstract motion in social human-robot interaction. In RO-MAN, 2011 IEEE. IEEE, 441–448.
  • [26] Malte F Jung. 2017. Affective Grounding in Human-Robot Interaction. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction. ACM, 263–273.
  • [27] Heather Knight and Reid Simmons. 2016. Laban head-motions convey robot state: A call for robot body language. In Robotics and Automation (ICRA), 2016 IEEE International Conference on. IEEE, 2881–2888.
  • [28] Adriana Kovashka, Devi Parikh, and Kristen Grauman. 2012. WhittleSearch: Image search with relative attribute feedback. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on. IEEE, 2973–2980.
  • [29] Yuki Koyama and Masataka Goto. 2018. OptiMo: Optimization-Guided Motion Editing for Keyframe Character Animation. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 161.
  • [30] Yuki Koyama, Daisuke Sakamoto, and Takeo Igarashi. 2014. Crowd-powered parameter analysis for visual design exploration. In Proceedings of the 27th Annual ACM Symposium on User Interface Software and Technology. ACM, 65–74.
  • [31] KUKA. 2017. Industrial robots. https://www.kuka.com/en-us/products/robotics-systems/industrial-robots.
  • [32] Pierre-Yves Laffont, Zhile Ren, Xiaofeng Tao, Chao Qian, and James Hays. 2014. Transient attributes for high-level understanding and editing of outdoor scenes. ACM Transactions on Graphics (TOG) 33, 4 (2014), 149.
  • [33] Brian Lee, Savil Srivastava, Ranjitha Kumar, Ronen Brafman, and Scott R Klemmer. 2010. Designing with interactive example galleries. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. ACM, 2257–2266.
  • [34] Yueh-Hung Lin, Chia-Yang Liu, Hung-Wei Lee, Shwu-Lih Huang, and Tsai-Yen Li. 2009. Evaluating emotive character animations created with procedural animation. In Intelligent Virtual Agents. Springer, 308–315.
  • [35] Justin Matejka, Michael Glueck, Erin Bradner, Ali Hashemi, Tovi Grossman, and George Fitzmaurice. 2018. Dream Lens: Exploration and Visualization of Large-Scale Generative Design Datasets. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. ACM, 369.
  • [36] Vittorio Megaro, Bernhard Thomaszewski, Maurizio Nitti, Otmar Hilliges, Markus Gross, and Stelian Coros. 2015. Interactive design of 3D-printable robotic creatures. ACM Transactions on Graphics (TOG) 34, 6 (2015), 216.
  • [37] Microsoft Research. 2017. TrueSkill. http://trueskill.org/.
  • [38] Brian K Mok, Stephen Yang, David Sirkin, and Wendy Ju. 2014. Empathy: interactions with emotive robotic drawers. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction. ACM, 250–251.
  • [39] Nikhil Naik, Jade Philipoom, Ramesh Raskar, and César Hidalgo. 2014. Streetscore: predicting the perceived safety of one million streetscapes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops. 779–785.
  • [40] Jorge Nocedal and Stephen J Wright. 2006. Numerical Optimization. (2006).
  • [41] Peter O'Donovan, Jānis Lībeks, Aseem Agarwala, and Aaron Hertzmann. 2014. Exploratory font selection using crowdsourced attributes. ACM Transactions on Graphics (TOG) 33, 4 (2014), 92.
  • [42] Open Source Robotics Foundation. 2018. ROS. http://www.ros.org/.
  • [43] Wei Pan and Lorenzo Torresani. 2009. Unsupervised hierarchical modeling of locomotion styles. In Proceedings of the 26th Annual International Conference on Machine Learning. ACM, 785–792.
  • [44] Devi Parikh and Kristen Grauman. 2011. Relative attributes. In Computer Vision (ICCV), 2011 IEEE International Conference on. IEEE, 503–510.
  • [45] Craig W Reynolds. 1987. Flocks, herds and schools: A distributed behavioral model. ACM SIGGRAPH Computer Graphics 21, 4 (1987), 25–34.
  • [46] Tiago Ribeiro and Ana Paiva. 2012. The illusion of robotic life: principles and practices of animation for robots. In Human-Robot Interaction (HRI), 2012 7th ACM/IEEE International Conference on. IEEE, 383–390.
  • [47] Martin Saerbeck and Christoph Bartneck. 2010. Perception of affect elicited by robot motion. In Human-Robot Interaction (HRI), 2010 5th ACM/IEEE International Conference on. IEEE, 53–60.
  • [48] Frank L Schmidt. 2013. Eight common but false objections to the discontinuation of significance testing in the analysis of research data. In What If There Were No Significance Tests? (2013), 37.
  • [49] Ana Serrano, Diego Gutierrez, Karol Myszkowski, Hans-Peter Seidel, and Belen Masia. 2016. An intuitive control space for material appearance. ACM Transactions on Graphics (TOG) 35, 6 (2016), 186.
  • [50] Ari Shapiro, Yong Cao, and Petros Faloutsos. 2006. Style components. In Proceedings of Graphics Interface 2006. Canadian Information Processing Society, 33–39.
  • [51] Ronit Slyper, Guy Hoffman, and Ariel Shamir. 2015. Mirror Puppeteering: Animating Toy Robots in Front of a Webcam. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction. ACM, 241–248.
  • [52] Sichao Song and Seiji Yamada. 2017. Expressing emotions through color, sound, and vibration with an appearance-constrained social robot. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction. ACM, 2–11.
  • [53] Sony. 2017. Aibo. http://www.sony-aibo.com/.
  • [54] Yuyin Sun and Dieter Fox. 2016. NEOL: Toward never-ending object learning for robots. In Robotics and Automation (ICRA), 2016 IEEE International Conference on. IEEE.
  • [55] Daniel Szafir, Bilge Mutlu, and Terrence Fong. 2014. Communication of intent in assistive free flyers. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction. ACM, 358–365.
  • [56] Leila Takayama, Doug Dooley, and Wendy Ju. 2011. Expressing thought: improving robot readability with animation principles. In Proceedings of the 6th International Conference on Human-Robot Interaction. ACM, 69–76.
  • [57] Haodan Tan, John Tiab, Selma Šabanović, and Kasper Hornbæk. 2016. Happy Moves, Sad Grooves: Using Theories of Biological Motion and Affect to Design Shape-Changing Interfaces. In Proceedings of the 2016 ACM Conference on Designing Interactive Systems. ACM, 1282–1293.
  • [58] Frank Thomas and Ollie Johnston. 1995. The Illusion of Life: Disney Animation. Hyperion, New York.
  • [59] Lauren Vasey, Tovi Grossman, Heather Kerrick, and Danil Nagy. 2016. The hive: a human and robot collaborative building process. In ACM SIGGRAPH 2016 Talks. ACM, 83.
  • [60] Gentiane Venture, Hideki Kadone, Tianxiang Zhang, Julie Grèzes, Alain Berthoz, and Halim Hicheur. 2014. Recognizing emotions conveyed by human gait. International Journal of Social Robotics 6, 4 (2014), 621–632.
  • [61] Jue Wang, Steven M Drucker, Maneesh Agrawala, and Michael F Cohen. 2006. The cartoon animation filter. In ACM Transactions on Graphics (TOG), Vol. 25. ACM, 1169–1173.
  • [62] Man-Ching Yuen, Irwin King, and Kwong-Sak Leung. 2011. A survey of crowdsourcing systems. In Privacy, Security, Risk and Trust (PASSAT) and 2011 IEEE Third International Conference on Social Computing (SocialCom). IEEE, 766–773.
  • [63] Mehmet Ersin Yumer, Siddhartha Chaudhuri, Jessica K Hodgins, and Levent Burak Kara. 2015. Semantic shape editing using deformation handles. ACM Transactions on Graphics (TOG) 34, 4 (2015), 86.
  • [64] Allan Zhou, Dylan Hadfield-Menell, Anusha Nagabandi, and Anca D Dragan. 2017. Expressive robot motion timing. In Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction. ACM, 22–31.
Best Paper of CHI 2019