Adding Gesture, Posture and Facial Displays to the PoliModal Corpus of Political Interviews.

LREC 2020

Abstract
This paper introduces a multimodal corpus in the political domain which, on top of transcribed face-to-face interviews, presents annotations of facial displays, hand gestures and body posture. While the fully annotated corpus consists of 3 interviews for a total of 120 minutes, it is extracted from a larger available corpus of 56 face-to-face interviews (14 hours) that has been manually annotated with metadata (e.g. tools used for the transcription, a link to the interview), pauses (marking a pause either between or within utterances), vocal expressions (marking non-lexical expressions such as burps and semi-lexical expressions such as primary interjections), deletions (false starts, repetitions and truncated words) and overlaps. In this work, we describe the additional annotation level covering the non-verbal elements used by three Italian politicians belonging to three different political parties, all of whom were candidates for the presidency of the Council of Ministers at the time of the talk show. We also present the results of analyses aimed at identifying relations between proxemic phenomena and the linguistic structures in which they occur, in order to capture recurring patterns and differences in communication strategies.
Keywords
multimodal corpora, political communication, multi-layered annotation