Aggregation Techniques in Crowdsourcing: Multiple Choice Questions and Beyond

Conference on Information and Knowledge Management (2021)

Abstract
Crowdsourcing has been leveraged in various tasks and applications, primarily to gather information from human annotators in exchange for a monetary reward. The main challenge associated with crowdsourcing is the low quality of the results, which can stem from multiple causes, including bias, error, and adversarial behavior. Researchers and practitioners can apply quality control methods to prevent and detect low-quality responses. For example, worker selection methods use qualifications and attention-check questions before assigning a task. Similarly, task routing identifies the workers who can provide a more accurate response to a given task type using recommender system techniques. In practice, posterior quality control methods are the most common approach to dealing with noisy labels once they are obtained. Such methods require task repetition, i.e., assigning the task to multiple crowd workers, followed by an aggregation mechanism (aka truth inference) to select the most likely answer or request an additional label. A large number of techniques have been proposed for crowdsourcing aggregation, covering several task types. This tutorial aims to present common and recent label aggregation techniques for multiple-choice questions, multi-class labels, ratings, pairwise comparisons, and image/text annotation. We believe the audience will benefit from the focus on this specific research area to learn about the best techniques to apply in their crowdsourcing projects.
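As a concrete illustration of the posterior approach the abstract describes (task repetition followed by truth inference), the sketch below implements the simplest aggregation baseline: majority voting over repeated labels, with low-agreement tasks flagged for an additional label. This is a minimal sketch, not the tutorial's own implementation; the data layout, function name, and the 0.5 agreement threshold are all illustrative assumptions.

```python
from collections import Counter

def majority_vote(labels_per_task):
    """Aggregate repeated crowd labels: for each task, pick the answer
    given by the most workers (ties broken arbitrarily by Counter)."""
    aggregated = {}
    for task_id, labels in labels_per_task.items():
        counts = Counter(labels)
        answer, votes = counts.most_common(1)[0]
        aggregated[task_id] = answer
        # A low winning share can trigger a request for one more label,
        # as the abstract suggests. The 0.5 cutoff is an assumption.
        if votes / len(labels) < 0.5:
            print(f"task {task_id}: low agreement, consider re-labeling")
    return aggregated

# Example: three workers answered each multiple-choice question.
labels = {
    "q1": ["A", "A", "B"],
    "q2": ["C", "B", "B"],
    "q3": ["A", "B", "C"],  # no majority -> flagged for another label
}
print(majority_vote(labels))
```

More sophisticated aggregation techniques covered in this line of work (e.g., Dawid-Skene-style worker-confusion models) replace the uniform vote above with per-worker reliability estimates, but follow the same repeat-then-aggregate pattern.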
Key words
crowdsourcing, multiple choice questions