Sciweavers

Search: Crowdsourcing for search evaluation
3841 search results, page 6 of 769
RECSYS 2010 · ACM
Global budgets for local recommendations
We present the design, implementation and evaluation of a new geotagging service, Gloe, that makes it easy to find, rate and recommend arbitrary on-line content in a mobile settin...
Thomas Sandholm, Hang Ung, Christina Aperjis, Bern...
CSCW 2011 · ACM
Designing incentives for inexpert human raters
The emergence of online labor markets makes it far easier to use individual human raters to evaluate materials for data collection and analysis in the social sciences. In this pap...
Aaron D. Shaw, John J. Horton, Daniel L. Chen
UIST 2010 · ACM
Soylent: a word processor with a crowd inside
This paper introduces architectural and interaction patterns for integrating crowdsourced human contributions directly into user interfaces. We focus on writing and editing, compl...
Michael S. Bernstein, Greg Little, Robert C. Mille...
SIGIR 2012 · ACM
Inferring missing relevance judgments from crowd workers via probabilistic matrix factorization
In crowdsourced relevance judging, each crowd worker typically judges only a small number of examples, yielding a sparse and imbalanced set of judgments in which relatively few wo...
Hyun Joon Jung, Matthew Lease
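
The SIGIR 2012 entry above names probabilistic matrix factorization as the technique for filling in missing crowd judgments. As a rough, hypothetical illustration of that general idea (not the authors' actual model), the Python sketch below factorizes a sparse worker-by-example judgment matrix with stochastic gradient descent and predicts the unobserved entries; all function names, hyperparameters, and toy data are assumptions for illustration only.

import numpy as np

def pmf_judgments(observed, n_workers, n_examples, k=8, lr=0.01, reg=0.1, epochs=200, seed=0):
    """Illustrative probabilistic matrix factorization over sparse crowd judgments."""
    rng = np.random.default_rng(seed)
    W = 0.1 * rng.standard_normal((n_workers, k))    # latent worker factors
    E = 0.1 * rng.standard_normal((n_examples, k))   # latent example factors
    for _ in range(epochs):
        for w, e, r in observed:                     # only observed judgments drive the loss
            err = r - W[w] @ E[e]                    # residual for this (worker, example) pair
            grad_w = err * E[e] - reg * W[w]         # gradient of squared error plus L2 penalty
            grad_e = err * W[w] - reg * E[e]
            W[w] += lr * grad_w
            E[e] += lr * grad_e
    return W, E

# Toy usage: 3 workers each judge a few of 4 examples (1.0 = relevant, 0.0 = not relevant).
observed = [(0, 0, 1.0), (0, 1, 0.0), (1, 1, 0.0), (1, 2, 1.0), (2, 0, 1.0), (2, 3, 0.0)]
W, E = pmf_judgments(observed, n_workers=3, n_examples=4)
print(W @ E.T)   # dense prediction matrix, filling in the unjudged (worker, example) cells
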
MOBISYS 2010 · ACM
CrowdSearch: exploiting crowds for accurate real-time image search on mobile phones
Mobile phones are becoming increasingly sophisticated with a rich set of on-board sensors and ubiquitous wireless connectivity. However, the ability to fully exploit the sensing c...
Tingxin Yan, Vikas Kumar, Deepak Ganesan