Sciweavers

Search results for "Crowdsourcing for search evaluation" (3841 results, page 5 of 769)
AAAI 2012
Quality Expectation-Variance Tradeoffs in Crowdsourcing Contests
We examine designs for crowdsourcing contests, where participants compete for rewards given to superior solutions of a task. We theoretically analyze tradeoffs between the expectation ...
Xi Alice Gao, Yoram Bachrach, Peter Key, Thore Graepel
AAAI 2012
Online Task Assignment in Crowdsourcing Markets
We explore the problem of assigning heterogeneous tasks to workers with different, unknown skill sets in crowdsourcing markets such as Amazon Mechanical Turk. We first formalize ...
Chien-Ju Ho, Jennifer Wortman Vaughan
CHI 2008 (ACM)
Crowdsourcing user studies with Mechanical Turk
User studies are important for many aspects of the design process and involve techniques ranging from informal surveys to rigorous laboratory studies. However, the costs involved ...
Aniket Kittur, Ed H. Chi, Bongwon Suh
IUI 2012 (ACM)
Performance comparisons of phrase sets and presentation styles for text entry evaluations
We empirically compare five publicly available phrase sets in two large-scale (N = 225 and N = 150) crowdsourced text entry experiments. We also investigate the impact ...
Per Ola Kristensson, Keith Vertanen
CLEF 2011 (Springer)
Overview of the 2nd International Competition on Wikipedia Vandalism Detection
The paper overviews the vandalism detection task of the PAN’11 competition. A new corpus is introduced which comprises about 30 000 Wikipedia edits in the languages English, German, and Spanish ...
Martin Potthast, Teresa Holfeld