Sciweavers

292 search results for "Predicting labels for dyadic data" (page 39 of 59)

MLDM 2007 (Springer)
Transductive Learning from Relational Data
Transduction is an inference mechanism “from particular to particular”. Its application to classification tasks implies the use of both labeled (training) data and unlabeled (...
Michelangelo Ceci, Annalisa Appice, Nicola Barile,...
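
A note on the technique: transduction predicts labels only for the specific unlabeled points available at training time, using labeled and unlabeled data together. The sketch below is not the relational algorithm of Ceci et al.; it is a generic graph-based label-spreading illustration of transductive inference, and every function name and parameter in it is an assumption.

```python
# Generic transductive classification via label spreading on a similarity
# graph. This is NOT the relational method of Ceci et al.; it is a textbook
# illustration, and all names and parameters here are assumptions.
import numpy as np

def label_propagation(X, y, labeled_mask, sigma=1.0, alpha=0.99, iters=200):
    """X: (n, d) features for labeled AND unlabeled points (transduction uses both).
    y: (n,) integer labels, arbitrary for unlabeled points.
    labeled_mask: (n,) bool, True where the label is known."""
    n = X.shape[0]
    classes = np.unique(y[labeled_mask])
    # RBF similarity graph over all points.
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(1) + 1e-12)
    S = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]   # normalized affinity
    # One-hot seed matrix: rows for unlabeled points start at zero.
    Y0 = np.zeros((n, classes.size))
    for k, c in enumerate(classes):
        Y0[(y == c) & labeled_mask, k] = 1.0
    F = Y0.copy()
    for _ in range(iters):                               # spread labels over the graph
        F = alpha * S @ F + (1 - alpha) * Y0
    return classes[F.argmax(1)]                          # labels for every point
```

Because predictions for the unlabeled points come out of the same call that consumed them, the inference is transductive rather than inductive.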

KDD 2008 (ACM)
Learning classifiers from only positive and unlabeled data
The input to an algorithm that learns a binary classifier normally consists of two sets of examples, where one set consists of positive examples of the concept to be learned, and ...
Charles Elkan, Keith Noto
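
The calibration idea commonly associated with this paper: under a "selected completely at random" labeling assumption, a classifier g(x) trained to separate labeled positives from unlabeled examples satisfies P(y=1|x) = g(x)/c with c = P(s=1|y=1), and c can be estimated as the mean score of g on held-out labeled positives. A minimal sketch, assuming scikit-learn's logistic regression as the base classifier; the variable names are illustrative.

```python
# Sketch of the positive-unlabeled calibration associated with Elkan & Noto
# (2008), under the "selected completely at random" (SCAR) assumption. The
# choice of logistic regression and all variable names are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def fit_pu_classifier(X, s):
    """X: (n, d) features; s: (n,) 1 if the example is labeled positive, else 0
    (unlabeled examples may be positive or negative)."""
    X_tr, X_val, s_tr, s_val = train_test_split(
        X, s, test_size=0.2, stratify=s, random_state=0)
    # g(x) ~= P(s=1 | x): a "non-traditional" classifier on labeled vs unlabeled.
    g = LogisticRegression(max_iter=1000).fit(X_tr, s_tr)
    # c = P(s=1 | y=1), estimated as the mean score on held-out labeled positives.
    c = g.predict_proba(X_val[s_val == 1])[:, 1].mean()
    return g, c

def predict_proba_positive(g, c, X):
    # P(y=1 | x) = P(s=1 | x) / c under the SCAR assumption.
    return np.clip(g.predict_proba(X)[:, 1] / c, 0.0, 1.0)
```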

AAAI 2010
G-Optimal Design with Laplacian Regularization
In many real world applications, labeled data are usually expensive to get, while there may be a large amount of unlabeled data. To reduce the labeling cost, active learning attem...
Chun Chen, Zhengguang Chen, Jiajun Bu, Can Wang, L...
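
G-optimal design selects the points whose labels most reduce the worst-case predictive variance of the model, and adding a graph-Laplacian term lets the unlabeled data shape that variance. The greedy sketch below illustrates the general idea only; the kNN graph construction, the regularization weights, and the greedy loop are assumptions rather than the formulation of Chen et al.

```python
# Greedy sketch of G-optimal-style active selection with a graph-Laplacian
# regularized linear model. The exact objective of Chen et al. (AAAI 2010) may
# differ; the kNN graph, the lambdas, and the greedy loop are all assumptions.
import numpy as np

def knn_laplacian(X, k=5):
    """Unnormalized graph Laplacian of a symmetrized kNN graph over rows of X."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros_like(d2)
    nn = np.argsort(d2, axis=1)[:, 1:k + 1]          # nearest neighbors, skipping self
    rows = np.repeat(np.arange(X.shape[0]), k)
    W[rows, nn.ravel()] = 1.0
    W = np.maximum(W, W.T)                           # symmetrize
    return np.diag(W.sum(1)) - W

def greedy_g_optimal(X, n_select=10, lam_l=1.0, lam_r=0.1, k=5):
    """Greedily pick points minimizing the maximum predictive variance
    x^T (X_S^T X_S + lam_l * X^T L X + lam_r * I)^{-1} x over all candidates."""
    n, d = X.shape
    L = knn_laplacian(X, k)
    A = lam_l * X.T @ L @ X + lam_r * np.eye(d)      # Laplacian + ridge term
    selected = []
    for _ in range(n_select):
        best_i, best_score = None, np.inf
        for i in range(n):
            if i in selected:
                continue
            M = np.linalg.inv(A + np.outer(X[i], X[i]))
            # G-optimality criterion: worst-case predictive variance x^T M x.
            score = np.max(np.einsum('nd,dk,nk->n', X, M, X))
            if score < best_score:
                best_i, best_score = i, score
        selected.append(best_i)
        A = A + np.outer(X[best_i], X[best_i])
    return selected
```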

BMCBI 2008
Improved identification of conserved cassette exons using Bayesian networks
Background: Alternative splicing is a major contributor to the diversity of eukaryotic transcriptomes and proteomes. Currently, large scale detection of alternative splicing using...
Rileen Sinha, Michael Hiller, Rainer Pudimat, Ulri...

ICML 2004 (IEEE)
Kernel conditional random fields: representation and clique selection
Kernel conditional random fields (KCRFs) are introduced as a framework for discriminative modeling of graph-structured data. A representer theorem for conditional graphical models...
John D. Lafferty, Xiaojin Zhu, Yan Liu
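
The snippet refers to a representer theorem for conditional graphical models. As a rough sketch only, with notation assumed here rather than quoted from the paper, such a theorem says the regularized-risk-minimizing clique potentials are finite kernel expansions over the training data:

```latex
% Sketch only: notation here is assumed, not quoted from Lafferty, Zhu, and Liu.
% A representer theorem of this kind states that the regularized-risk minimizer
% over an RKHS of clique potentials is a finite kernel expansion over the
% training inputs x_j and candidate clique labelings y'_{c'}:
f^{\star}(x, y_c) \;=\; \sum_{j=1}^{n} \;\sum_{c' \in \mathcal{C}(x_j)} \;\sum_{y'_{c'}}
  \alpha_{j,c'}(y'_{c'}) \, K\!\big((x_j, y'_{c'}), (x, y_c)\big)
```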