This paper proposes an initialization scheme for back-propagation (BP) networks applied to pattern classification problems: the weights of the hidden units are initialized so that their hyperplanes pass through the center of the input pattern set, and the weights of the output layer are initialized to zero. Several simulation results confirm that the proposed initialization yields better convergence than the conventional initialization, in which all weights are set to uniform random values with zero mean.
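The described scheme can be sketched as follows. This is a hypothetical illustration, not the paper's exact procedure: the function name `init_bp_weights`, the uniform sampling of hidden-weight directions, and the `scale` parameter are assumptions; the essential ideas taken from the abstract are that each hidden unit's hyperplane passes through the centroid of the input patterns and that the output-layer weights start at zero.

```python
import numpy as np

def init_bp_weights(X, n_hidden, n_out, scale=1.0, rng=None):
    """Sketch of the proposed BP initialization (details assumed).

    Hidden layer: random weight directions, with each bias chosen so
    the hyperplane w.x + b = 0 passes through the centroid of X.
    Output layer: weights and biases initialized to zero.
    """
    rng = np.random.default_rng(rng)
    n_in = X.shape[1]
    center = X.mean(axis=0)                  # centroid of the input pattern set

    # Hidden weights: random directions (uniform sampling is an assumption).
    W_h = rng.uniform(-scale, scale, size=(n_hidden, n_in))
    # Bias so that W_h[i] . center + b_h[i] = 0 for every hidden unit,
    # i.e. each hyperplane passes through the centroid.
    b_h = -W_h @ center

    # Output layer initialized to zero, as stated in the abstract.
    W_o = np.zeros((n_out, n_hidden))
    b_o = np.zeros(n_out)
    return W_h, b_h, W_o, b_o

# Usage on a toy pattern set of 50 three-dimensional inputs.
X = np.random.default_rng(0).normal(size=(50, 3))
W_h, b_h, W_o, b_o = init_bp_weights(X, n_hidden=4, n_out=2, rng=1)
```

By construction, activating each hidden unit at the centroid gives exactly zero net input, so the initial hyperplanes cut through the middle of the data rather than lying far outside it, which is the intuition behind the claimed convergence improvement.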