We present a proposal for an Automatic Speech Recognizer based on a “multigranular” model. The leading hypothesis is that the speech signal carries information distributed over several different time scales. Many recent works from fields ranging from neurobiology to speech technology appear to converge on this assumption. Broadly speaking, human speech recognition seems to be optimal because of a partial parallelization process in which the left-to-right stream of speech is captured in a multilevel grid where several linguistic analyses take place simultaneously. Our investigation aims, in this view, to apply these ideas to the design of more robust and efficient recognizers.
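To make the notion of multigranular analysis concrete, the following minimal sketch extracts a simple frame-level feature from the same signal at several time scales in parallel. It is a hypothetical illustration, not the recognizer proposed here: the window lengths, the function names, and the choice of log-energy as the per-frame feature are all assumptions made for the example.

```python
import numpy as np

def frame_signal(signal: np.ndarray, win: int, hop: int) -> np.ndarray:
    """Slice a 1-D signal into overlapping frames of length `win`.

    Assumes len(signal) >= win.
    """
    n_frames = 1 + (len(signal) - win) // hop
    idx = np.arange(win)[None, :] + hop * np.arange(n_frames)[:, None]
    return signal[idx]

def multigranular_features(signal, sample_rate=16000,
                           window_ms=(25, 100, 250)):
    """Return one feature stream per time scale ("granularity").

    The window lengths are illustrative: ~25 ms for phone-level detail,
    ~100 ms for syllable-sized units, ~250 ms for word-level prosody.
    Each stream here is just the log-energy of every frame.
    """
    streams = {}
    for ms in window_ms:
        win = int(sample_rate * ms / 1000)
        hop = win // 2  # 50% overlap at every scale
        frames = frame_signal(signal, win, hop)
        streams[ms] = np.log(np.sum(frames ** 2, axis=1) + 1e-10)
    return streams

# Example: three parallel analysis streams over one second of noise.
audio = np.random.randn(16000)
for ms, feats in multigranular_features(audio).items():
    print(f"{ms:4d} ms window -> {len(feats)} frames")
```

The point of the sketch is only structural: the same left-to-right signal is analyzed contemporaneously at several granularities, yielding parallel feature streams that a multilevel recognizer could then combine.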