Efficient estimation of probability density functions is of primary interest in statistics. A popular approach is the use of finite Gaussian mixture models, whose maximum likelihood parameter estimates can be computed iteratively, in an elegant way, with the expectation-maximization (EM) algorithm. Unfortunately, in some cases the algorithm fails to converge properly because of numerical difficulties. These are of two kinds: they can be associated with outliers or with repeated data samples. In this paper, we trace and discuss their origin and provide theoretical evidence for our analysis. In fact, both can be explained by the concept of isolation, which causes the width of the collapsing mixture component to shrink to zero.
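
As a brief illustration of this degeneracy (a standard argument, not specific to our analysis; assume a univariate mixture with $K \ge 2$ components, weights $\pi_k$, means $\mu_k$, and widths $\sigma_k$): pinning one component mean to a single sample makes the likelihood unbounded as that component's width tends to zero,
\[
L(\theta) \;=\; \prod_{i=1}^{n} \sum_{k=1}^{K} \frac{\pi_k}{\sqrt{2\pi}\,\sigma_k}
\exp\!\left(-\frac{(x_i-\mu_k)^2}{2\sigma_k^2}\right),
\qquad
\mu_1 = x_1 \;\Longrightarrow\;
L(\theta) \;\ge\; \frac{\pi_1}{\sqrt{2\pi}\,\sigma_1}
\prod_{i=2}^{n} \sum_{k=2}^{K} \frac{\pi_k}{\sqrt{2\pi}\,\sigma_k}
\exp\!\left(-\frac{(x_i-\mu_k)^2}{2\sigma_k^2}\right)
\;\xrightarrow{\;\sigma_1 \to 0\;}\; \infty,
\]
since the remaining factors stay bounded away from zero. A component isolated on a single point, be it an outlier or the common value of repeated samples, can thus drive its own width to zero.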