Bayesian Learning of Sparse Classifiers

Bayesian approaches to supervised learning use priors on the classifier parameters. However, few priors aim at achieving "sparse" classifiers, in which irrelevant or redundant parameters are automatically set to zero. Two well-known ways of obtaining sparse classifiers are to place a zero-mean Laplacian prior on the parameters or to use the "support vector machine" (SVM). Whether one uses a Laplacian prior or an SVM, one still needs to specify or estimate the parameters that control the degree of sparseness of the resulting classifier. We propose a Bayesian approach to learning sparse classifiers which does not involve any parameters controlling the degree of sparseness. This is achieved by a hierarchical-Bayes interpretation of the Laplacian prior, followed by the adoption of a Jeffreys non-informative hyper-prior. Implementation is carried out by an EM algorithm. Experimental evaluation of the proposed method shows that it performs competitively with (often better than) the ...
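
The following is a minimal sketch (not the authors' code) of how such an EM iteration can look for a linear probit classifier, assuming the hierarchical Gaussian prior w_i | tau_i ~ N(0, tau_i) with the Jeffreys hyper-prior p(tau_i) proportional to 1/tau_i. Under that prior E[1/tau_i | w_i] = 1/w_i^2, so the M-step becomes a ridge-like update with a weight-dependent penalty; the function name fit_sparse_probit, the iteration count, and the pruning tolerance below are illustrative assumptions, not details taken from the paper.

    # Sketch of EM for a sparsity-promoting hierarchical prior:
    # w_i | tau_i ~ N(0, tau_i), Jeffreys hyper-prior p(tau_i) ~ 1/tau_i,
    # combined with a probit classification model. Illustrative only.
    import numpy as np
    from scipy.stats import norm

    def fit_sparse_probit(H, y, n_iter=50, prune_tol=1e-6):
        """H : (n, d) design matrix; y : (n,) labels in {0, 1}."""
        n, d = H.shape
        w = np.linalg.lstsq(H, 2.0 * y - 1.0, rcond=None)[0]  # rough start

        for _ in range(n_iter):
            # E-step (latent probit variables): expectation of the Gaussian
            # latent z_i given the observed label y_i and current weights.
            mu = H @ w
            pdf, cdf = norm.pdf(mu), norm.cdf(mu)
            z = np.where(y == 1,
                         mu + pdf / np.clip(cdf, 1e-12, None),
                         mu - pdf / np.clip(1.0 - cdf, 1e-12, None))

            # E-step (hyper-parameters): with the Jeffreys hyper-prior,
            # E[1/tau_i | w_i] = 1/w_i^2, i.e. a weight-dependent ridge term.
            # M-step in the numerically stable form
            #   w <- U (I + U H'H U)^{-1} U H'z,  U = diag(|w_i|),
            # which keeps already-pruned weights at zero.
            U = np.diag(np.abs(w))
            A = np.eye(d) + U @ (H.T @ H) @ U
            w = U @ np.linalg.solve(A, U @ (H.T @ z))

            w[np.abs(w) < prune_tol] = 0.0  # irrelevant weights collapse to zero

        return w

In this form the user supplies only the design matrix and the labels; no sparseness-controlling hyper-parameter appears anywhere, which is the point of adopting the Jeffreys hyper-prior.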
Type: Conference
Year: 2001
Where: CVPR (IEEE)
Authors: Anil K. Jain, Mário A. T. Figueiredo