This paper addresses the statistical properties of the likelihood ratio test statistic (LRTS) for mixture-of-experts models. This question is central to estimating the number of experts in the model. Our purpose is to extend the existing results for mixtures (Liu and Shao, 2003) and mixtures of multilayer perceptrons (Olteanu and Rynkiewicz, 2008). We study a simple example that embodies all the difficulties arising in such models. We find that in some cases the LRTS diverges but that, under additional assumptions, its behavior can be fully characterized.