Voronoi condensing reduces the number of training patterns used by nearest neighbor classifiers without changing the classification boundaries. The method is important not only for nearest neighbor classifiers but also for other classifiers such as support vector machines, because the resulting prototype patterns often include the support vectors. However, previous algorithms for Voronoi condensing were computationally inefficient in general pattern recognition tasks, because they construct proximity graphs over the entire set of training patterns, whose computation time grows exponentially with the dimension of the pattern space. To solve this problem, we propose an efficient algorithm for Voronoi condensing, named direct condensing, that does not require the proximity graph of the entire training set. We confirmed that direct condensing efficiently computes Voronoi-condensed prototypes in higher dimensions (from 2 to 20).
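For concreteness, the sketch below illustrates the classical proximity-graph formulation of Voronoi condensing that the abstract refers to: a training pattern is retained exactly when one of its Voronoi (Delaunay) neighbors belongs to a different class, so discarding the remaining patterns leaves the 1-NN decision boundary unchanged. This is a minimal baseline written with scipy's Delaunay triangulation as the proximity graph; it is not the proposed direct condensing algorithm, and the function name and structure are illustrative assumptions. It also exhibits the drawback the abstract mentions, since building the Delaunay graph becomes prohibitively expensive as the dimension grows.

```python
# Baseline Voronoi condensing via the Delaunay (proximity) graph.
# NOT the paper's direct condensing algorithm; names are illustrative.
from itertools import combinations

import numpy as np
from scipy.spatial import Delaunay


def voronoi_condense(X, y):
    """Keep only patterns that share a Voronoi facet with another class.

    X : (n, d) array of training patterns, y : (n,) array of class labels.
    Removing the other patterns leaves the 1-NN boundary unchanged.
    """
    tri = Delaunay(X)  # Delaunay edges = Voronoi adjacency
    neighbors = [set() for _ in range(len(X))]
    for simplex in tri.simplices:  # collect Delaunay edges from each simplex
        for i, j in combinations(simplex, 2):
            neighbors[i].add(j)
            neighbors[j].add(i)
    keep = [i for i, nbrs in enumerate(neighbors)
            if any(y[j] != y[i] for j in nbrs)]  # boundary patterns only
    return X[keep], y[keep]


# Example: two Gaussian blobs in 2-D; interior patterns are discarded.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (200, 2)), rng.normal(4, 1, (200, 2))])
y = np.array([0] * 200 + [1] * 200)
Xc, yc = voronoi_condense(X, y)
print(len(Xc), "of", len(X), "patterns retained")
```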