Sciweavers

ICANN
2005
Springer
Model Selection Under Covariate Shift
A common assumption in supervised learning is that the training and test input points follow the same probability distribution. However, this assumption is not fulfilled, e.g., in...
Masashi Sugiyama, Klaus-Robert Müller
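The usual remedy in this setting is importance weighting: validation losses are reweighted by the ratio of test to training input densities so that model selection targets the test distribution. A minimal sketch of that general idea in Python (the Gaussian densities, the hold-out split, and the polynomial candidates are illustrative assumptions, not the paper's exact procedure):

    import numpy as np

    rng = np.random.default_rng(0)

    # Training inputs from one distribution; test inputs would follow a shifted one (covariate shift).
    x_tr = rng.normal(0.0, 1.0, 200)
    y_tr = np.sin(x_tr) + 0.1 * rng.normal(size=x_tr.size)

    def gauss(x, mu, sigma=1.0):
        return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2.0 * np.pi))

    # Importance weights: ratio of test to training input densities
    # (both densities are assumed known in this toy example).
    weights = gauss(x_tr, 0.5) / gauss(x_tr, 0.0)

    # Simple hold-out split for model selection.
    fit_idx, val_idx = np.arange(0, 100), np.arange(100, 200)

    def iw_validation_error(degree):
        coefs = np.polyfit(x_tr[fit_idx], y_tr[fit_idx], degree)
        pred = np.polyval(coefs, x_tr[val_idx])
        # Weighted validation error approximates the error under the test input distribution.
        return np.average((pred - y_tr[val_idx]) ** 2, weights=weights[val_idx])

    best_degree = min([1, 3, 5, 7], key=iw_validation_error)
    print("selected polynomial degree:", best_degree)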
COLT
2005
Springer
Analysis of Perceptron-Based Active Learning
We start by showing that in an active learning setting, the Perceptron algorithm needs Ω(1/ε²) labels to learn linear separators within generalization error ε. We then prese...
Sanjoy Dasgupta, Adam Tauman Kalai, Claire Montele...
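The Ω(1/ε²) label-complexity lower bound for the plain Perceptron is what motivates modified selective-sampling rules. A hedged sketch of a generic margin-based query rule, i.e. a perceptron that only requests labels for points near its current boundary (the 0.1 margin threshold and the unit-sphere data are arbitrary choices, not the authors' algorithm):

    import numpy as np

    rng = np.random.default_rng(1)
    d = 10
    w_true = rng.normal(size=d)
    w_true /= np.linalg.norm(w_true)

    w = np.zeros(d)
    labels_used = 0

    for t in range(5000):
        x = rng.normal(size=d)
        x /= np.linalg.norm(x)
        # Query the label only when the point lies near the current decision boundary.
        margin = abs(w @ x)
        if margin <= 0.1:
            labels_used += 1
            y = np.sign(w_true @ x)
            if np.sign(w @ x) != y:      # standard perceptron update on mistakes
                w += y * x

    cos = w @ w_true / (np.linalg.norm(w) + 1e-12)
    angle = np.arccos(np.clip(cos, -1.0, 1.0))
    print("labels queried:", labels_used, "angle to true separator:", round(angle, 3))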
IJCNN
2006
IEEE
Statistical Mechanics of Online Learning for Ensemble Teachers
We analyze the generalization performance of a student in a model composed of linear perceptrons: a true teacher, ensemble teachers, and the student. Calculating the generaliza...
Seiji Miyoshi, Masato Okada
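A rough numerical sketch of the setting, with a linear student updated online against several noisy ensemble teachers and its alignment with the true teacher tracked over time; the learning rate, the teacher noise level, and the cosine-based error measure are simplifying assumptions rather than the paper's order-parameter analysis:

    import numpy as np

    rng = np.random.default_rng(2)
    d, K, eta = 100, 5, 0.1            # input dimension, number of ensemble teachers, learning rate

    B = rng.normal(size=d); B /= np.linalg.norm(B)               # true teacher
    teachers = [B + 0.3 * rng.normal(size=d) for _ in range(K)]  # ensemble teachers scattered around B
    J = np.zeros(d)                                              # student

    def misalignment(J):
        # Simplified proxy for generalization error: 1 - cosine similarity to the true teacher.
        return 1.0 - J @ B / (np.linalg.norm(J) * np.linalg.norm(B) + 1e-12)

    for t in range(20000):
        x = rng.normal(size=d) / np.sqrt(d)
        teacher = teachers[t % K]                 # cycle through the ensemble teachers
        J += eta * (teacher @ x - J @ x) * x      # online gradient step toward that teacher's output

    print("final misalignment with true teacher:", round(misalignment(J), 4))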
IJCNN
2006
IEEE
Particle Swarm Optimization of Fuzzy ARTMAP Parameters
In this paper, a Particle Swarm Optimization (PSO)-based training strategy is introduced for fuzzy ARTMAP that minimizes generalization error while optimizing parameter values. ...
Eric Granger, Philippe Henniges, Luiz S. Oliveira,...
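A generic PSO loop for hyperparameter selection, sketched under the assumption that training a network and measuring its validation error can be wrapped in a black-box objective; the quadratic placeholder objective and the parameter bounds stand in for fuzzy ARTMAP training:

    import numpy as np

    rng = np.random.default_rng(3)

    def validation_error(params):
        # Placeholder objective: in the paper this would be the generalization
        # error of a fuzzy ARTMAP network trained with these parameter values.
        return np.sum((params - np.array([0.7, 0.01, 2.0])) ** 2)

    n_particles, dim, iters = 20, 3, 100
    lo, hi = np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 10.0])

    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_err = np.array([validation_error(p) for p in pos])
    gbest = pbest[np.argmin(pbest_err)]

    for _ in range(iters):
        r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        err = np.array([validation_error(p) for p in pos])
        improved = err < pbest_err
        pbest[improved], pbest_err[improved] = pos[improved], err[improved]
        gbest = pbest[np.argmin(pbest_err)]

    print("best parameters found:", np.round(gbest, 3))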
IDEAL
2007
Springer
Out of Bootstrap Estimation of Generalization Error Curves in Bagging Ensembles
The dependence of the classification error on the size of a bagging ensemble can be modeled within the framework of Monte Carlo theory for ensemble learning. These error curves ar...
Daniel Hernández-Lobato, Gonzalo Martí...
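A hedged illustration of the out-of-bootstrap idea: each point is evaluated only by the ensemble members whose bootstrap sample did not contain it, giving an error estimate as a function of ensemble size. The decision-stump base learner and the toy data are placeholders, not the paper's Monte Carlo treatment:

    import numpy as np

    rng = np.random.default_rng(4)

    # Toy binary classification data (placeholder for a real dataset).
    n = 300
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=n) > 0).astype(int)

    def fit_stump(Xb, yb):
        # Pick the feature/threshold pair with the lowest training error.
        best = None
        for j in range(Xb.shape[1]):
            for thr in np.percentile(Xb[:, j], [25, 50, 75]):
                err = np.mean((Xb[:, j] > thr).astype(int) != yb)
                if best is None or err < best[0]:
                    best = (err, j, thr)
        return best[1], best[2]

    T = 50
    votes = np.zeros((n, T))             # votes[i, t]: prediction of member t for point i
    oob = np.zeros((n, T), dtype=bool)   # True where point i is out of bag for member t

    for t in range(T):
        idx = rng.integers(0, n, n)                  # bootstrap sample
        j, thr = fit_stump(X[idx], y[idx])
        votes[:, t] = (X[:, j] > thr).astype(int)
        oob[:, t] = ~np.isin(np.arange(n), idx)

    # Out-of-bootstrap error curve: for each ensemble size m, each point is
    # classified by majority vote over the members for which it was out of bag.
    for m in (1, 5, 10, 25, 50):
        mask = oob[:, :m]
        usable = mask.sum(axis=1) > 0
        maj = (votes[:, :m] * mask).sum(axis=1) > 0.5 * mask.sum(axis=1)
        err = np.mean(maj[usable].astype(int) != y[usable])
        print(f"ensemble size {m:2d}: out-of-bootstrap error {err:.3f}")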
COLT
2007
Springer
Prediction by Categorical Features: Generalization Properties and Application to Feature Ranking
We describe and analyze a new approach for feature ranking in the presence of categorical features with a large number of possible values. It is shown that popular ranking criteria...
Sivan Sabato, Shai Shalev-Shwartz
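One well-known issue here is that entropy- or gain-based criteria look overly favorable for categorical features with very many values. A small sketch contrasting a raw conditional-entropy score with a Laplace-smoothed variant (the smoothing is a generic correction chosen for illustration, not the authors' estimator):

    import numpy as np
    from collections import defaultdict

    rng = np.random.default_rng(5)
    n = 500
    y = rng.integers(0, 2, n)

    # Feature A: genuinely informative, only two categories.
    feat_a = np.where(rng.random(n) < 0.8, y, rng.integers(0, 2, n))
    # Feature B: pure noise, but with very many categories (an ID-like column).
    feat_b = rng.integers(0, 400, n)

    def conditional_entropy(feature, y, alpha=0.0):
        # Empirical H(Y | feature); alpha > 0 applies Laplace smoothing to the
        # per-category label counts, which tempers the bias toward many-valued features.
        groups = defaultdict(list)
        for f, label in zip(feature, y):
            groups[f].append(label)
        h = 0.0
        for labels in groups.values():
            counts = np.bincount(labels, minlength=2).astype(float) + alpha
            p = counts / counts.sum()
            p = p[p > 0]
            h += (len(labels) / len(y)) * -(p * np.log2(p)).sum()
        return h

    for name, feat in (("informative, 2 values", feat_a), ("noise, ~400 values", feat_b)):
        raw = conditional_entropy(feat, y)
        smoothed = conditional_entropy(feat, y, alpha=1.0)
        print(f"{name}: H(Y|X) raw {raw:.3f}, smoothed {smoothed:.3f}")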
IJCNN
2007
IEEE
Evaluation of Performance Measures for SVR Hyperparameter Selection
To obtain accurate modeling results, it is of primary importance to find optimal values for the hyperparameters in the Support Vector Regression (SVR) model. In general, we sea...
Koen Smets, Brigitte Verdonk, Elsa Jordaan
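A typical baseline for this task is a grid search over (C, epsilon, gamma) scored by cross-validated mean squared error. A sketch using scikit-learn's SVR, with the grid, the sinc toy data, and CV-MSE as the performance measure all being illustrative assumptions (the paper compares several measures):

    import numpy as np
    from itertools import product
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(6)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sinc(X[:, 0]) + 0.1 * rng.normal(size=200)

    grid = {
        "C":       [0.1, 1.0, 10.0, 100.0],
        "epsilon": [0.01, 0.1, 0.5],
        "gamma":   [0.1, 1.0, 10.0],
    }

    best = None
    for C, eps, gamma in product(grid["C"], grid["epsilon"], grid["gamma"]):
        model = SVR(C=C, epsilon=eps, gamma=gamma)
        # 5-fold cross-validated MSE as the hyperparameter selection criterion.
        mse = -cross_val_score(model, X, y, cv=5,
                               scoring="neg_mean_squared_error").mean()
        if best is None or mse < best[0]:
            best = (mse, C, eps, gamma)

    print("best (MSE, C, epsilon, gamma):", best)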
ICML
2001
IEEE
Some Theoretical Aspects of Boosting in the Presence of Noisy Data
This is a survey of some theoretical results on boosting obtained from an analogous treatment of some regression and classification boosting algorithms. Some related papers include...
Wenxin Jiang
ICML
2003
IEEE
The Set Covering Machine with Data-Dependent Half-Spaces
We examine the set covering machine when it uses data-dependent half-spaces for its set of features and bound its generalization error in terms of the number of training errors an...
Mario Marchand, Mohak Shah, John Shawe-Taylor, Mar...
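A rough sketch of the set covering machine idea: build a conjunction greedily, at each step adding the data-dependent half-space that excludes the most still-uncovered negative examples at little cost on the positives. The pair-based half-spaces, the usefulness penalty, and the stopping rule below are simplified assumptions, not the authors' exact construction or bound:

    import numpy as np

    rng = np.random.default_rng(7)

    # Toy data: class 1 inside a region, class 0 outside (placeholder dataset).
    X = rng.normal(size=(200, 2))
    y = (np.linalg.norm(X - np.array([0.5, 0.5]), axis=1) < 1.0).astype(int)

    def halfspace(a, b):
        # Data-dependent half-space: the boundary is the perpendicular bisector of the
        # segment from negative example b to positive example a; predicts 1 on a's side.
        w = a - b
        t = w @ (a + b) / 2.0
        return lambda Z: (Z @ w > t).astype(int)

    # Candidate features from a random subset of (positive, negative) example pairs.
    pos, neg = X[y == 1], X[y == 0]
    candidates = [halfspace(pos[i], neg[j])
                  for i in rng.integers(0, len(pos), 30)
                  for j in rng.integers(0, len(neg), 30)]

    # Greedy set covering: keep adding the half-space that excludes the most
    # still-uncovered negatives while misclassifying few positives.
    chosen, uncovered = [], set(np.where(y == 0)[0])
    while uncovered and len(chosen) < 5:
        def usefulness(h):
            preds = h(X)
            covered = sum(1 for i in uncovered if preds[i] == 0)
            errors_on_pos = np.sum((preds == 0) & (y == 1))
            return covered - 2 * errors_on_pos   # the penalty factor is an arbitrary choice
        best = max(candidates, key=usefulness)
        chosen.append(best)
        preds = best(X)
        uncovered = {i for i in uncovered if preds[i] == 1}

    # Final conjunction: predict 1 only if every chosen half-space predicts 1.
    final = np.all([h(X) for h in chosen], axis=0).astype(int)
    print("half-spaces used:", len(chosen), "training error:", np.mean(final != y))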
ICPR
2000
IEEE
General Bias/Variance Decomposition with Target Independent Variance of Error Functions Derived from the Exponential Family of Distributions
An important theoretical tool in machine learning is the bias/variance decomposition of the generalization error. It was introduced for the mean square error in [3]. The bias/vari...
Jakob Vogdrup Hansen, Tom Heskes
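For squared error the decomposition reads E[(f̂(x) - y)²] = bias² + variance + noise. A small Monte Carlo check of that identity for polynomial regressors of different degrees (data, noise level, and models are illustrative):

    import numpy as np

    rng = np.random.default_rng(8)
    noise_sd = 0.3
    x_grid = np.linspace(-1, 1, 50)
    true_f = np.sin(np.pi * x_grid)

    def sample_fit(degree, n=30):
        # Draw a fresh training set and return this model's predictions on x_grid.
        x = rng.uniform(-1, 1, n)
        y = np.sin(np.pi * x) + noise_sd * rng.normal(size=n)
        return np.polyval(np.polyfit(x, y, degree), x_grid)

    for degree in (1, 3, 7):
        preds = np.array([sample_fit(degree) for _ in range(500)])
        mean_pred = preds.mean(axis=0)
        bias2 = np.mean((mean_pred - true_f) ** 2)
        variance = np.mean(preds.var(axis=0))
        # For squared error: expected error = bias^2 + variance + irreducible noise.
        print(f"degree {degree}: bias^2 {bias2:.3f}  variance {variance:.3f}  "
              f"noise {noise_sd**2:.3f}  total {bias2 + variance + noise_sd**2:.3f}")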