General dissimilarity-based learning approaches have been proposed for dissimilarity data sets [11, 10]. Such data arise in problems in which objects are compared directly, e.g. by computing pairwise distances between images, spectra, graphs or strings. In this paper, we study under which circumstances such dissimilarity-based techniques can be used for deriving classifiers in feature vector spaces. We will show that such classifiers perform comparably to or better than the nearest neighbor rule based on either the entire or a condensed training set. Moreover, they can be beneficial for highly overlapping classes and for non-normally distributed data sets with categorical, mixed or otherwise difficult features.
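To illustrate the general idea (this is a minimal sketch of a dissimilarity-space classifier, not the authors' exact method; the data, representation-set size and least-squares classifier below are illustrative assumptions), each object is mapped to its vector of dissimilarities to a small representation set of prototypes, and an ordinary linear classifier is trained in that space; its accuracy can then be compared with the 1-nearest-neighbor rule on the full training set:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two overlapping Gaussian classes (synthetic, purely illustrative)
X0 = rng.normal(0.0, 1.0, size=(100, 2))
X1 = rng.normal(1.0, 1.0, size=(100, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 100 + [1] * 100)

# Random train/test split
idx = rng.permutation(len(y))
train, test = idx[:150], idx[150:]

def dissim(A, B):
    """Pairwise Euclidean dissimilarities between rows of A and rows of B."""
    return np.sqrt(((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

# Representation set R: a small subset of training objects (prototypes)
R = X[train[:20]]

# Represent every object by its dissimilarities to R, then fit a
# linear (least-squares) classifier in that dissimilarity space.
D_train = dissim(X[train], R)
D_test = dissim(X[test], R)
A = np.hstack([D_train, np.ones((len(train), 1))])      # add bias column
w, *_ = np.linalg.lstsq(A, 2.0 * y[train] - 1.0, rcond=None)
pred_lin = (np.hstack([D_test, np.ones((len(test), 1))]) @ w > 0).astype(int)

# Baseline: 1-nearest-neighbor rule on the entire training set
D_full = dissim(X[test], X[train])
pred_1nn = y[train][D_full.argmin(axis=1)]

acc_lin = (pred_lin == y[test]).mean()
acc_1nn = (pred_1nn == y[test]).mean()
print(f"linear classifier in dissimilarity space: {acc_lin:.2f}")
print(f"1-NN on full training set:                {acc_1nn:.2f}")
```

Note that the linear classifier uses only 20 prototypes at test time, while 1-NN must retain all 150 training objects; this is one practical motivation for the dissimilarity-space approach.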
Elzbieta Pekalska, Robert P. W. Duin