Sciweavers

107 search results - page 4 / 22
» General Loss Bounds for Universal Sequence Prediction
COLT
1999
Springer
Regret Bounds for Prediction Problems
We present a unified framework for reasoning about worst-case regret bounds for learning algorithms. This framework is based on the theory of duality of convex functions. It brin...
Geoffrey J. Gordon
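For context only (not quoted from the paper): the worst-case regret such frameworks bound is conventionally the gap between the algorithm's cumulative loss and that of the best fixed hypothesis in hindsight,

    \mathrm{Regret}_T = \sum_{t=1}^{T} \ell(\hat{y}_t, y_t) - \min_{h \in \mathcal{H}} \sum_{t=1}^{T} \ell(h(x_t), y_t).
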
NIPS
2001
On the Generalization Ability of On-Line Learning Algorithms
In this paper, it is shown how to extract a hypothesis with small risk from the ensemble of hypotheses generated by an arbitrary on-line learning algorithm run on an independent an...
Nicolò Cesa-Bianchi, Alex Conconi, Claudio ...
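For background on this online-to-batch conversion (a generic form, not the paper's exact statement): the risk of a hypothesis selected from the on-line ensemble is typically bounded by the algorithm's average on-line loss plus a confidence term,

    \mathrm{risk}(\hat{h}) \le \frac{M_T}{T} + O\!\left(\sqrt{\tfrac{\ln(1/\delta)}{T}}\right) \quad \text{with probability at least } 1-\delta,

where M_T is the cumulative on-line loss over the T independently drawn examples.
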
NIPS
2008
Tighter Bounds for Structured Estimation
Large-margin structured estimation methods minimize a convex upper bound of loss functions. While they allow for efficient optimization algorithms, these convex formulations are n...
Olivier Chapelle, Chuong B. Do, Quoc V. Le, Alexan...
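For context (standard definition, not taken from this abstract): the convex upper bound in question is typically the structured, margin-rescaled hinge loss, which dominates the task loss \Delta,

    \max_{y'} \big[ \Delta(y, y') + \langle w, \phi(x, y') \rangle - \langle w, \phi(x, y) \rangle \big] \;\ge\; \Delta\big(y, \hat{y}(x; w)\big), \qquad \hat{y}(x; w) = \arg\max_{y'} \langle w, \phi(x, y') \rangle.
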
JCSS
2006
Sequential predictions based on algorithmic complexity
This paper studies sequence prediction based on the monotone Kolmogorov complexity Km = -log m, i.e. based on universal deterministic/one-part MDL. m is extremely close to Solomonof...
Marcus Hutter
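For readers unfamiliar with the notation (standard definitions, stated here for context): Km is the monotone Kolmogorov complexity and m the induced deterministic semimeasure,

    Km(x) = \min\{ \ell(p) : U(p) \text{ outputs a string starting with } x \}, \qquad m(x) = 2^{-Km(x)},

where U is a universal monotone Turing machine and \ell(p) the length of program p, so Km(x) = -\log_2 m(x).
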
COLT
2004
Springer
Convergence of Discrete MDL for Sequential Prediction
We study the properties of the Minimum Description Length principle for sequence prediction, considering a two-part MDL estimator which is chosen from a countable class of models....
Jan Poland, Marcus Hutter
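As background on the setup (generic form, not the paper's exact statement): a two-part MDL estimator over a countable class \mathcal{M} with prefix codelengths K(\nu) picks the model minimizing the combined codelength of model and data,

    \hat{\nu}(x_{1:n}) = \arg\min_{\nu \in \mathcal{M}} \big[ K(\nu) - \log \nu(x_{1:n}) \big],

and the next symbol is then predicted with \hat{\nu}(\,\cdot \mid x_{1:n}).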