The success of tensor-based subspace learning depends heavily on reducing correlations along the column vectors of the mode-k flattened matrix. In this work, we study the problem ...
Shuicheng Yan, Dong Xu, Stephen Lin, Thomas S. Hua...
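The mode-k flattening (unfolding) referred to above can be sketched in a few lines. This is a minimal illustration, not the paper's code; the column ordering follows NumPy's default C order, and conventions vary between references.

```python
import numpy as np

def mode_k_unfold(tensor, k):
    """Flatten a tensor into a matrix whose rows index mode k.

    Moves axis k to the front, then reshapes so that the remaining
    modes are spread across the columns. Column ordering here follows
    NumPy's default (C) order; other conventions differ only by a
    column permutation.
    """
    return np.moveaxis(tensor, k, 0).reshape(tensor.shape[k], -1)

# A 3 x 4 x 5 tensor unfolds along mode 1 into a 4 x 15 matrix.
X = np.arange(3 * 4 * 5).reshape(3, 4, 5)
print(mode_k_unfold(X, 1).shape)  # (4, 15)
```

The column vectors of this matrix are the mode-k fibers of the tensor; the correlations discussed in the snippet are taken along them.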
We show that, given data from a mixture of k well-separated spherical Gaussians in R^d, a simple two-round variant of EM will, with high probability, learn the parameters of the Ga...
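A toy sketch of EM for a mixture of spherical Gaussians, run for the two rounds the snippet describes. The initialization from extreme data points is an assumption made purely for illustration; the actual variant in the paper uses its own careful initialization and pruning, which is not reproduced here.

```python
import numpy as np

def em_spherical(X, k, rounds=2):
    """Toy EM for a mixture of k unit-variance spherical Gaussians.

    Runs a fixed number of E/M rounds. For illustration, centers are
    seeded from points spread along the first coordinate -- an
    assumption; the paper's two-round variant uses a different,
    carefully analyzed initialization.
    """
    order = np.argsort(X[:, 0])
    mu = X[order[np.linspace(0, len(X) - 1, k).astype(int)]]
    w = np.full(k, 1.0 / k)
    for _ in range(rounds):
        # E-step: soft assignments under unit-variance Gaussians.
        d2 = ((X[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
        logp = np.log(w) - 0.5 * d2
        logp -= logp.max(axis=1, keepdims=True)  # numerical stability
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and centers.
        w = r.mean(axis=0)
        mu = (r.T @ X) / r.sum(axis=0)[:, None]
    return w, mu

# Well-separated case: two unit-variance clusters far apart in R^2.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(12, 1, (200, 2))])
w, mu = em_spherical(X, k=2)
```

With separation this large, the responsibilities are nearly hard assignments after the first round, which is why so few rounds suffice.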
PCA-SIFT is an extension to SIFT which aims to reduce SIFT’s high dimensionality (128 dimensions) by applying PCA to the gradient image patches. However, PCA is not a discriminati...
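The projection step underlying PCA-SIFT can be sketched as PCA via eigendecomposition of the covariance matrix. The data below is synthetic random vectors standing in for flattened gradient patches, and the target dimensionality of 36 follows the published PCA-SIFT setup; everything else is an illustrative assumption.

```python
import numpy as np

def pca_project(vectors, n_components):
    """Project row vectors onto their top principal components.

    Plain PCA via eigendecomposition of the sample covariance, as a
    sketch of the projection step in PCA-SIFT (where the inputs are
    flattened gradient patches and n_components is around 36).
    """
    mean = vectors.mean(axis=0)
    centered = vectors - mean
    cov = centered.T @ centered / (len(vectors) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    basis = eigvecs[:, ::-1][:, :n_components]   # top components first
    return centered @ basis

# Synthetic stand-in for flattened gradient patches: 500 samples, 128 dims.
rng = np.random.default_rng(0)
patches = rng.normal(size=(500, 128))
reduced = pca_project(patches, 36)
print(reduced.shape)  # (500, 36)
```

The snippet's criticism applies here directly: this projection maximizes retained variance and uses no class or match labels, i.e. it is not discriminative.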
Abstract. In supervised learning, discretization of the continuous explanatory attributes enhances the accuracy of decision tree induction algorithms and the naive Bayes classifier. M...
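As a concrete reference point for what "discretization" means here, below is a simple unsupervised equal-frequency binning of a continuous attribute. This is only a baseline sketch; the supervised discretizers typically studied in this line of work (e.g. entropy/MDL-based methods) instead choose cut points using the class labels.

```python
import numpy as np

def equal_frequency_bins(values, n_bins):
    """Discretize a continuous attribute into equal-frequency bins.

    Cut points are the interior quantiles of the data, so each bin
    receives roughly the same number of samples. A supervised
    discretizer would place cuts using class labels instead.
    """
    cuts = np.quantile(values, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.searchsorted(cuts, values, side="right")

x = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.2, 0.55, 0.7])
print(equal_frequency_bins(x, 4))  # each of the 4 bins gets 2 samples
```

After this step, a decision tree or naive Bayes model treats the bin index as an ordinary categorical attribute.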
Genetic Programming offers freedom in the definition of the cost function that is unparalleled among supervised learning algorithms. However, this freedom goes largely unexploited...