We incorporate auditory-based features into an unconventional pattern classification system consisting of a network of spiking neurones with dynamical and multiplicative synapses. Although the network is autonomous and requires no training, the analysis is dynamic and capable of extracting multiple features and maps. The neural network computes a binary mask that acts as a dynamic switch on a speech vocoder built from an FIR gammatone analysis/synthesis bank of 256 filters. We report experiments on the separation of speech from various intruding sounds (siren, telephone bell, speech, etc.) and compare our approach to other techniques using the Log Spectral Distortion (LSD) metric.

Key words: amplitude modulation, auditory scene analysis, auditory maps, source separation, speech enhancement, spikes, neurones
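
As a rough illustration of the comparison metric mentioned above, the sketch below computes a common frame-wise variant of the Log Spectral Distortion with NumPy: the RMS difference of log power spectra per frame, averaged over frames. The frame length, hop size, windowing, and epsilon floor are illustrative assumptions, not necessarily the settings used in the experiments.

import numpy as np

def log_spectral_distortion(reference, estimate, frame_len=512, hop=256, eps=1e-10):
    """Mean frame-wise LSD (in dB) between two signals of comparable length."""
    n = min(len(reference), len(estimate))
    window = np.hanning(frame_len)
    distortions = []
    for start in range(0, n - frame_len + 1, hop):
        ref_frame = reference[start:start + frame_len] * window
        est_frame = estimate[start:start + frame_len] * window
        # Log power spectra of the two windowed frames
        ref_db = 10.0 * np.log10(np.abs(np.fft.rfft(ref_frame)) ** 2 + eps)
        est_db = 10.0 * np.log10(np.abs(np.fft.rfft(est_frame)) ** 2 + eps)
        # RMS log-spectral difference for this frame
        distortions.append(np.sqrt(np.mean((ref_db - est_db) ** 2)))
    return float(np.mean(distortions))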