Multi-Task Learning for Analyzing and Sorting Large Databases of Sequential Data

A new hierarchical nonparametric Bayesian framework is proposed for the problem of multi-task learning (MTL) with sequential data. The models for multiple tasks, each characterized by sequential data, are learned jointly, and the inter-task relationships are obtained simultaneously. This MTL setting is used to analyze and sort large databases composed of sequential data, such as music clips. Within each data set, we represent the sequential data with an infinite hidden Markov model (iHMM), avoiding the problem of model selection (selecting the number of states). Across the data sets, the multiple iHMMs are learned jointly in an MTL setting, employing a nested Dirichlet process (nDP). The nDP-iHMM MTL method allows simultaneous task-level and data-level clustering, through which the individual iHMMs are enhanced and the between-task similarities are learned. Therefore, in addition to improved learning of each of the models via appropriate data sharing, the learned sharing mechanisms are used...
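The simultaneous task-level and data-level clustering described in the abstract is characteristic of a nested Dirichlet process prior placed over the per-task models. The following is a minimal sketch of that generative structure, not the paper's exact specification; the concentration parameters alpha and beta, the base measure H over iHMM parameters, the stick-breaking weights pi, and the likelihood F are assumed notation for illustration.

% Sketch of a nested DP prior (notation assumed, not taken from the paper).
% Tasks j that draw the same atom G_k^* are clustered at the task level;
% within such a cluster, the shared discrete G_k^* clusters the data-level
% parameters theta, which here would parameterize each task's iHMM.
\begin{align*}
  \boldsymbol{\pi} &\sim \mathrm{GEM}(\alpha), &
  G_k^{*} &\stackrel{\mathrm{iid}}{\sim} \mathrm{DP}(\beta, H), &
  Q &= \sum_{k=1}^{\infty} \pi_k \, \delta_{G_k^{*}}, \\
  G_j &\sim Q \ \ \text{(task-level clustering)}, &
  \theta_{ji} &\sim G_j \ \ \text{(data-level clustering)}, &
  x_{ji} &\sim F(\theta_{ji}).
\end{align*}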
Type: Journal
Year: 2008
Where: TSP
Authors: Kai Ni, John William Paisley, Lawrence Carin, David B. Dunson