Sciweavers

Search: Exploring Parallelism in Learning Belief Networks (37 results, page 3 of 8)
JMLR 2010
Parallelizable Sampling of Markov Random Fields
Markov Random Fields (MRFs) are an important class of probabilistic models which are used for density estimation, classification, denoising, and for constructing Deep Belief Netwo...
James Martens, Ilya Sutskever
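The parallel-sampling idea this abstract points at can be illustrated with a standard chromatic (checkerboard) Gibbs scheme for a 2D Ising-grid MRF: sites of the same color are conditionally independent given the other color, so each half-sweep can update them all simultaneously. This is a generic sketch of parallel MRF sampling, not necessarily the paper's exact construction.

```python
import numpy as np

def checkerboard_gibbs_step(spins, beta, rng):
    """One parallel Gibbs sweep over a 2D Ising MRF.

    Same-color checkerboard sites are conditionally independent
    given the other color, so each half-sweep updates them all at
    once (vectorized here; in principle on separate processors).
    """
    H, W = spins.shape
    parity = np.add.outer(np.arange(H), np.arange(W)) % 2
    for color in (0, 1):
        # Sum of the four neighbors with periodic boundaries.
        nbr = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0)
               + np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * nbr))  # P(s=+1 | neighbors)
        draw = np.where(rng.random((H, W)) < p_up, 1, -1)
        spins = np.where(parity == color, draw, spins)
    return spins

rng = np.random.default_rng(0)
spins = rng.choice([-1, 1], size=(16, 16))
spins = checkerboard_gibbs_step(spins, beta=0.4, rng=rng)
```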
CHI 2011 (ACM)
Apolo: making sense of large network data by combining rich user interaction and machine learning
Extracting useful knowledge from large network datasets has become a fundamental challenge in many domains, from scientific literature to social networks and the web. We introduc...
Duen Horng Chau, Aniket Kittur, Jason I. Hong, Chr...
JMLR 2010
Tempered Markov Chain Monte Carlo for training of Restricted Boltzmann Machines
Alternating Gibbs sampling is the most common scheme used for sampling from Restricted Boltzmann Machines (RBM), a crucial component in deep architectures such as Deep Belief Netw...
Guillaume Desjardins, Aaron C. Courville, Yoshua B...
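The alternating Gibbs sampling this abstract refers to can be sketched for a binary RBM: because hidden units are conditionally independent given the visibles (and vice versa), each whole layer is sampled as a block. A minimal illustration, with toy weights rather than a trained model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbm_gibbs_step(v, W, b_vis, b_hid, rng):
    """One alternating-Gibbs step for a binary RBM.

    Samples the hidden layer given the visibles, then the
    visible layer given the sampled hiddens, each as a block.
    """
    h_prob = sigmoid(v @ W + b_hid)            # P(h=1 | v)
    h = (rng.random(h_prob.shape) < h_prob).astype(float)
    v_prob = sigmoid(h @ W.T + b_vis)          # P(v=1 | h)
    v = (rng.random(v_prob.shape) < v_prob).astype(float)
    return v, h

rng = np.random.default_rng(1)
W = rng.normal(0.0, 0.1, size=(6, 4))          # toy weights: 6 visible, 4 hidden
v = rng.integers(0, 2, size=(1, 6)).astype(float)
v, h = rbm_gibbs_step(v, W, np.zeros(6), np.zeros(4), rng)
```

Tempered variants run several such chains at different temperatures (scaled energies) and swap states between them to improve mixing.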
IJCNN 2007 (IEEE)
Parallel Learning of Large Fuzzy Cognitive Maps
Fuzzy Cognitive Maps (FCMs) are a class of discrete-time Artificial Neural Networks that are used to model dynamic systems. A recently introduced supervised learning method, wh...
Wojciech Stach, Lukasz A. Kurgan, Witold Pedrycz
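The FCM dynamics mentioned here follow a simple synchronous update: each concept's next activation is a squashed weighted sum of the current activations, which makes one iteration trivially parallel across concepts. A minimal sketch with illustrative (made-up) causal weights:

```python
import numpy as np

def fcm_step(state, W, lam=1.0):
    """One synchronous update of a Fuzzy Cognitive Map.

    state[i] is concept i's activation in [0, 1]; W[j, i] is the
    causal weight from concept j to concept i. Each concept's sum
    is independent, so the update parallelizes across concepts.
    """
    return 1.0 / (1.0 + np.exp(-lam * (state @ W)))

W = np.array([[ 0.0, 0.6, -0.4],
              [ 0.5, 0.0,  0.7],
              [-0.3, 0.2,  0.0]])   # illustrative causal weights
state = np.array([0.2, 0.8, 0.5])
for _ in range(10):
    state = fcm_step(state, W)      # iterate the map's dynamics
```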
ICML 2010
Deep networks for robust visual recognition
Deep Belief Networks (DBNs) are hierarchical generative models which have been used successfully to model high dimensional visual data. However, they are not robust to common vari...
Yichuan Tang, Chris Eliasmith