Identifiability becomes an essential requirement for learning machines when the models contain physically interpretable parameters. This paper presents two approaches to examining the structural identifiability of generalized constraint neural network (GCNN) models, viewing the model from two different perspectives. First, taking the model as a static deterministic function, a functional framework is established that can recognize a deficient model and, at the same time, reparameterize it through a pairwise-mode symbolic examination. Second, viewing the model as the mean function of an isotropic Gaussian conditional distribution, the algebraic approaches [E.A. Catchpole, B.J.T. Morgan, Detecting parameter redundancy, Biometrika 84 (1) (1997) 187
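The Catchpole–Morgan style of algebraic check mentioned above rests on the rank of a symbolic derivative matrix: if the derivatives of the model output with respect to the parameters span fewer dimensions than there are parameters, the parameterization is redundant. The following is a minimal sketch of that idea using sympy, on a hypothetical toy mean function mu(x) = a*b*x + c (chosen here for illustration; it is not a model from the paper), which is deficient because a and b enter only through their product:

```python
import sympy as sp

# Hypothetical deficient model: mu depends on a, b only via a*b,
# so the parameter vector (a, b, c) is not structurally identifiable.
a, b, c, x = sp.symbols('a b c x')
mu = a * b * x + c

# Derivative of the output with respect to each parameter (3 x 1 column).
theta = [a, b, c]
D = sp.Matrix([[sp.diff(mu, p)] for p in theta])

# Stack the derivative columns at generic inputs x1, x2 to form the
# full derivative matrix; its generic symbolic rank decides redundancy.
x1, x2 = sp.symbols('x1 x2')
Dmat = sp.Matrix.hstack(D.subs(x, x1), D.subs(x, x2))

rank = Dmat.rank()
# rank < len(theta) signals parameter redundancy: here the rows for
# a and b are proportional, so the rank falls short of 3.
print(rank, len(theta))
```

A full-rank derivative matrix would instead indicate (local) structural identifiability; reparameterizing the toy model in terms of the product a*b restores full rank.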