Paper [1] aims at providing a unified presentation of neural network architectures. We show in the present comment (i) that the canonical form of recurrent neural networks presented by Nerrand et al. [2] many years ago provides the desired unification, (ii) that what Tsoi and Back call Nerrand's canonical form is not the canonical form presented by Nerrand et al. in [2], and (iii) that, contrary to the claim of Tsoi and Back, all neural network architectures presented in their paper can be transformed into Nerrand's canonical form. We show that the content of Tsoi and Back's paper obscures the issues involved in the choice of a recurrent neural network architecture instead of clarifying them: this choice is actually much simpler than it might seem from their paper.

In [1], Tsoi and Back present a number of different discrete-time recurrent neural network architectures and intend to clarify the links between them. The authors must be commended for trying to perform...
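For the reader's convenience, we recall the canonical form of [2] in a minimal sketch (the notation here is ours and the statement is simplified; see [2] for the full definition): any discrete-time recurrent neural network with $N$ state variables can be rewritten as a feedforward (static) network whose only feedback loops are unit delays acting on the state variables,

\[
\begin{aligned}
\mathbf{x}(n+1) &= \boldsymbol{\varphi}\bigl[\mathbf{x}(n),\, \mathbf{u}(n)\bigr],\\
\mathbf{y}(n) &= \boldsymbol{\psi}\bigl[\mathbf{x}(n),\, \mathbf{u}(n)\bigr],
\end{aligned}
\]

where $\mathbf{u}(n)$ denotes the external inputs at time $n$, $\mathbf{x}(n)$ the minimal set of $N$ state variables, $\mathbf{y}(n)$ the outputs, and $\boldsymbol{\varphi}$, $\boldsymbol{\psi}$ are computed by the feedforward part of the network.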