Sciweavers

83 search results - page 4 / 17
» Neural Network Learning: Testing Bounds on Sample Complexity
TSMC 2002
Complexity reduction for "large image" processing
We present a method for sampling feature vectors in large (e.g., 2000 × 5000, 16-bit) images that finds subsets of pixel locations which represent "regions" in the image. Sa...
Nikhil R. Pal, James C. Bezdek
BMCBI 2006
Noise-injected neural networks show promise for use on small-sample expression data
Background: Overfitting the data is a salient issue for classifier design in small-sample settings. This is why selecting a classifier from a constrained family of classifiers, on...
Jianping Hua, James Lowey, Zixiang Xiong, Edward R...
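The noise-injection idea teased in this abstract can be sketched in a few lines: replicate each small-sample training vector with Gaussian perturbations so the classifier sees an enlarged, smoothed training set. This is a minimal illustration of the general technique, not the paper's exact procedure; the function name and parameters (`n_copies`, `sigma`) are hypothetical.

```python
import numpy as np

def noise_inject(X, y, n_copies=5, sigma=0.1, seed=0):
    """Augment a small sample by adding Gaussian noise to feature vectors.

    Hypothetical helper: each training vector is replicated n_copies
    times with N(0, sigma^2) perturbations, enlarging the effective
    sample size and discouraging overfitting.
    """
    rng = np.random.default_rng(seed)
    X_aug, y_aug = [X], [y]
    for _ in range(n_copies):
        X_aug.append(X + rng.normal(0.0, sigma, size=X.shape))
        y_aug.append(y)  # labels are unchanged for the noisy copies
    return np.vstack(X_aug), np.concatenate(y_aug)
```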
IJCNN 2008, IEEE
Building meta-learning algorithms basing on search controlled by machine complexity
Abstract— Meta-learning helps us find solutions to computational intelligence (CI) challenges in an automated way. The meta-learning algorithm presented in this paper is universal and m...
Norbert Jankowski, Krzysztof Grabczewski
JMLR 2010
A Surrogate Modeling and Adaptive Sampling Toolbox for Computer Based Design
An exceedingly large number of scientific and engineering fields are confronted with the need for computer simulations to study complex, real world phenomena or solve challenging ...
Dirk Gorissen, Ivo Couckuyt, Piet Demeester, Tom D...
IJON 2007
Convex incremental extreme learning machine
Unlike the conventional neural network theories and implementations, Huang et al. [Universal approximation using incremental constructive feedforward networks with random hidden n...
Guang-Bin Huang, Lei Chen
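The incremental ELM scheme referenced above (Huang et al.'s I-ELM line of work) can be sketched for one-output regression: random hidden nodes are added one at a time, and each new node's output weight is fixed analytically from the current residual rather than retrained. This is a hedged sketch of the general I-ELM recipe, not the convex variant of the paper; the function name and node count are assumptions.

```python
import numpy as np

def i_elm(X, y, n_nodes=100, seed=0):
    """Minimal incremental-ELM sketch for single-output regression.

    Each iteration draws a random hidden node, computes its output
    weight by a 1-D least-squares fit to the current residual, and
    subtracts the node's contribution, so the residual never grows.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    e = y.astype(float).copy()          # current residual error
    weights, biases, betas = [], [], []
    for _ in range(n_nodes):
        w = rng.normal(size=d)          # random input weights
        b = rng.normal()                # random bias
        h = np.tanh(X @ w + b)          # new node's activations
        beta = (e @ h) / (h @ h)        # analytic output weight
        e -= beta * h                   # update residual
        weights.append(w); biases.append(b); betas.append(beta)
    W, bvec, beta_arr = np.array(weights), np.array(biases), np.array(betas)
    return lambda Xn: np.tanh(Xn @ W.T + bvec) @ beta_arr
```

Because each output weight is the exact 1-D least-squares solution against the residual, the training error is non-increasing as nodes are added.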