The automated analysis of Feature Models (FMs) relies on different logic paradigms and solvers to implement a range of analysis operations on FMs. Implementing these operations on top of a specific solver is an error-prone and time-consuming task. To improve this situation, we propose designing a generic set of test cases to verify the functionality and correctness of tools for the automated analysis of FMs. These test cases would help improve the reliability of existing tools while reducing the time needed to develop new ones. As a starting point, in this position paper we review some of the classifications of software testing methods reported in the literature and study the suitability of each approach in the context of our proposal.
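To illustrate the kind of generic, solver-independent test case we have in mind, the following sketch shows a possible test for the "number of products" analysis operation. All names are hypothetical and do not correspond to any existing FM analysis tool's API; the brute-force `count_products` function merely stands in for the implementation under test, which in practice would be each solver-based tool run against the same input model and expected output.

```python
# A minimal sketch of one generic test case for the "number of products"
# operation. The brute-force analyzer below is a placeholder (hypothetical,
# not a real tool); a test suite would run the same input/expected-output
# pair against each solver-based implementation under test.
from itertools import product

def count_products(features, constraints):
    """Count feature configurations (True/False assignments)
    that satisfy every constraint of the feature model."""
    count = 0
    for values in product([True, False], repeat=len(features)):
        config = dict(zip(features, values))
        if all(constraint(config) for constraint in constraints):
            count += 1
    return count

def test_number_of_products():
    # Tiny model: a root feature with a mandatory child A and an optional child B.
    features = ["root", "A", "B"]
    constraints = [
        lambda c: c["root"],                  # the root is always selected
        lambda c: c["A"] == c["root"],        # A is mandatory
        lambda c: (not c["B"]) or c["root"],  # B is optional (requires root)
    ]
    # Expected products: {root, A} and {root, A, B} -> 2.
    assert count_products(features, constraints) == 2

if __name__ == "__main__":
    test_number_of_products()
    print("test_number_of_products passed")
```

The key point of such a test case is that the expected result is fixed by the model alone, so the same case can be reused to check any tool regardless of the underlying logic paradigm or solver.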