Sciweavers

WACV 2012 (IEEE)
Tools for richer crowd source image annotations
Crowd-sourcing tools such as Mechanical Turk are popular for annotation of large scale image data sets. Typically, these annotations consist of bounding boxes or coarse outlines o...
Joshua Little, Austin Abrams, Robert Pless

ECIR 2011 (Springer)
A Methodology for Evaluating Aggregated Search Results
Aggregated search is the task of incorporating results from different specialized search services, or verticals, into Web search results. While most prior work focuses on deciding...
Jaime Arguello, Fernando Diaz, Jamie Callan, Ben C...

CIDR 2011
Crowdsourced Databases: Query Processing with People
Amazon’s Mechanical Turk (“MTurk”) service allows users to post short tasks (“HITs”) that other users can receive a small amount of money for completing. Common tasks on...
Adam Marcus 0002, Eugene Wu 0002, Samuel Madden, R...

CHI 2011 (ACM)
Social media ownership: using twitter as a window onto current attitudes and beliefs
Social media, by its very nature, introduces questions about ownership. Ownership comes into play most crucially when we investigate how social media is saved or archived; how it ...
Catherine C. Marshall, Frank M. Shipman III

UIST 2010 (ACM)
TurKit: human computation algorithms on mechanical turk
Mechanical Turk provides an on-demand source of human computation. This provides a tremendous opportunity to explore algorithms which incorporate human computation as a function c...
Greg Little, Lydia B. Chilton, Max Goldman, Robert...

NAACL 2010
Cheap, Fast and Good Enough: Automatic Speech Recognition with Non-Expert Transcription
Deploying an automatic speech recognition system with reasonable performance requires expensive and time-consuming in-domain transcription. Previous work demonstrated that non-pro...
Scott Novotney, Chris Callison-Burch

EMNLP 2008
Cheap and Fast - But is it Good? Evaluating Non-Expert Annotations for Natural Language Tasks
Human linguistic annotation is crucial for many natural language processing tasks but can be expensive and time-consuming. We explore the use of Amazon's Mechanical Turk syst...
Rion Snow, Brendan O'Connor, Daniel Jurafsky, Andr...

CHI 2010 (ACM)
Exploring iterative and parallel human computation processes
Services like Amazon’s Mechanical Turk have opened the door for exploration of processes that outsource computation to humans. These human computation processes hold tremendous ...
Greg Little

KDD 2009 (ACM)
Financial incentives and the "performance of crowds"
The relationship between financial incentives and performance, long of interest to social scientists, has gained new relevance with the advent of web-based “crowd-sourcing” mo...
Winter A. Mason, Duncan J. Watts

CHI 2010 (ACM)
Are your participants gaming the system?: screening mechanical turk workers
In this paper we discuss a screening process used in conjunction with a survey administered via Amazon.com’s Mechanical Turk. We sought an easily implementable method to disqual...
Julie S. Downs, Mandy B. Holbrook, Steve Sheng, Lo...