Abstract. Recently, a new method intended to realize conformal mappings has been published. Called Locally Linear Embedding (LLE), this method can map high-dimensional data lying on a manifold to a lower-dimensional representation that preserves angles. Although LLE is claimed to solve problems usually addressed by neural networks such as Kohonen's Self-Organizing Maps (SOMs), the method reduces to an elegant eigenproblem with desirable properties (no parameter tuning, no local minima, etc.). The purpose of this paper is to compare the capabilities of LLE with those of a newly developed neural method called Isotop, which is based on ideas like neighborhood preservation, the key to the SOMs' success. To illustrate the differences between the algebraic and the neural approaches, LLE and Isotop are first briefly described and then compared on well-known dimensionality reduction problems.