This paper presents a non-parallel training algorithm for voice conversion based on the feature transform Gaussian mixture model (FT-GMM), a mixture model of the joint density space of the source and target speakers with explicit feature transform modeling. In FT-GMM, the correlations between the two speakers' distributions in each mixture component are not modeled directly but are absorbed into these explicit feature transforms. This makes it possible to extend the model to non-parallel training by simply decomposing it into two sub-models, one per speaker, and optimizing them separately. A frequency warping step is adopted to compensate for the performance degradation caused by the initial spectral distance between the source and target speakers. Cross-gender experimental results show that the proposed method achieves performance comparable to parallel training.
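The joint-density modeling that the abstract builds on can be sketched as follows. This is a minimal illustration on toy data using scikit-learn's `GaussianMixture`, not the paper's FT-GMM (which additionally places explicit feature transforms inside each component); it only shows how a GMM fit on stacked source/target features decomposes into two per-speaker sub-models, since each component's marginal over one speaker's features is Gaussian with the corresponding sub-mean and sub-covariance block. All variable names and data here are hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Toy stand-ins for frame-aligned spectral features of two speakers.
src = rng.normal(size=(500, 4))                       # source speaker features x
tgt = (src @ rng.normal(size=(4, 4))) * 0.5 \
      + rng.normal(size=(500, 4)) * 0.1               # correlated target features y

# Model the joint density space z = [x; y] with a GMM.
joint = np.hstack([src, tgt])
gmm = GaussianMixture(n_components=2, covariance_type="full",
                      random_state=0).fit(joint)

d = src.shape[1]
# Per-speaker sub-models: for each component, the marginal over x (or y)
# is Gaussian with the matching sub-mean and sub-covariance block.
src_means = gmm.means_[:, :d]
src_covs = gmm.covariances_[:, :d, :d]
tgt_means = gmm.means_[:, d:]
tgt_covs = gmm.covariances_[:, d:, d:]
```

In the non-parallel setting described by the abstract, the two sub-models would then be optimized separately on each speaker's own (unaligned) data, with the cross-speaker dependence carried by the explicit feature transforms rather than by the joint covariance blocks.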