This paper presents a new acoustic-to-articulatory inversion method based on an episodic memory, a model that is attractive for two reasons. First, it makes no assumptions about the form of the mapping function; instead, it relies on real synchronized acoustic and articulatory data streams. Second, the memory structurally embeds the naturalness of the articulatory dynamics. In addition, we introduce the concept of a generative episodic memory, which enables the production of unseen articulatory trajectories according to the acoustic signals to be inverted. The proposed memory is evaluated on the MOCHA corpus. The results are encouraging: they are comparable to those of recently proposed methods.
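To make the exemplar-based idea concrete, the sketch below illustrates one simple reading of an episodic memory for inversion: synchronized acoustic and articulatory frames are stored as episodes, and inversion retrieves, for each query frame, the articulatory frames paired with the nearest stored acoustic frames. Everything here is an assumption for illustration only: the class and method names (`EpisodicMemory`, `invert`), the frame-level k-NN retrieval with inverse-distance weighting, and the moving-average smoothing used as a crude stand-in for the dynamics constraint are not the paper's actual memory organization or its generative mechanism.

```python
import numpy as np


class EpisodicMemory:
    """Illustrative sketch: stores synchronized acoustic/articulatory frame pairs."""

    def __init__(self):
        self.acoustic = []      # list of (T_i, D_a) acoustic feature matrices
        self.articulatory = []  # list of (T_i, D_x) articulatory trajectories

    def add_episode(self, acoustic_frames, articulatory_frames):
        """Store one episode of synchronized acoustic/articulatory frames."""
        acoustic_frames = np.asarray(acoustic_frames, dtype=float)
        articulatory_frames = np.asarray(articulatory_frames, dtype=float)
        assert len(acoustic_frames) == len(articulatory_frames)
        self.acoustic.append(acoustic_frames)
        self.articulatory.append(articulatory_frames)

    def invert(self, query_frames, k=5):
        """Map acoustic frames to articulatory frames by k-NN retrieval."""
        bank_a = np.vstack(self.acoustic)       # all stored acoustic frames
        bank_x = np.vstack(self.articulatory)   # time-aligned articulatory frames
        query = np.asarray(query_frames, dtype=float)
        out = np.empty((len(query), bank_x.shape[1]))
        for t, q in enumerate(query):
            d = np.linalg.norm(bank_a - q, axis=1)   # distances to memory
            nn = np.argsort(d)[:k]                   # indices of k closest frames
            w = 1.0 / (d[nn] + 1e-8)                 # inverse-distance weights
            out[t] = (w[:, None] * bank_x[nn]).sum(axis=0) / w.sum()
        # Crude moving-average smoothing, standing in for the naturalness of
        # the articulatory dynamics that the real memory embeds structurally.
        kernel = np.ones(3) / 3.0
        for j in range(out.shape[1]):
            out[:, j] = np.convolve(out[:, j], kernel, mode="same")
        return out


if __name__ == "__main__":
    # Toy usage with random data in place of real MOCHA features.
    rng = np.random.default_rng(0)
    mem = EpisodicMemory()
    mem.add_episode(rng.normal(size=(200, 12)), rng.normal(size=(200, 4)))
    inverted = mem.invert(rng.normal(size=(50, 12)), k=5)
    print(inverted.shape)  # (50, 4)
```

The sketch also hints at why no mapping-function assumption is needed: the memory answers queries directly from stored data, so any regularity in the acoustic-to-articulatory relation is captured by the episodes themselves rather than by a parametric model.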