Embedded speaker recognition on mobile devices involves ergonomic constraints and a limited amount of computing resources. Although they have proved efficient in more classical contexts, GMM/UBM-based systems show their limits in such situations: good accuracy demands a relatively large quantity of speech data, while the linguistic content of that data is barely exploited. The proposed approach addresses these limitations by bringing the linguistic nature of the speech material into the GMM/UBM framework through client-customised utterances. The acoustic structure is then further reinforced with video information. Experiments on the MyIdea database are performed both when impostors know the client utterance and when they do not, highlighting the potential of this new approach. A relative gain of up to 47% in terms of EER is achieved when impostors do not know the client utterance, while performance is equivalent to the GMM/UBM baseline system in other conditions.
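As a minimal sketch of the GMM/UBM scoring the abstract refers to: a Universal Background Model is trained on pooled background speech, a client model is derived from the client's enrolment data, and a test utterance is scored by the average log-likelihood ratio between the two. The code below uses synthetic stand-in features and scikit-learn's `GaussianMixture`; a real system would use MFCC features and MAP-adapt the UBM rather than retrain the client model from scratch.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic stand-ins for MFCC-like feature frames (rows = frames).
background = rng.normal(0.0, 1.0, size=(2000, 12))  # pooled world data for the UBM
client = rng.normal(0.5, 1.0, size=(300, 12))       # client enrolment frames
test = rng.normal(0.5, 1.0, size=(100, 12))         # test utterance frames

# Universal Background Model trained on background speech.
ubm = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(background)

# Client model: simply retrained here for brevity; real GMM/UBM systems
# MAP-adapt the UBM components towards the client data instead.
client_gmm = GaussianMixture(n_components=8, covariance_type="diag",
                             random_state=0).fit(client)

# GMM/UBM score: average per-frame log-likelihood ratio over the test data.
llr = client_gmm.score(test) - ubm.score(test)
print(f"LLR score: {llr:.3f}")  # positive favours the client hypothesis
```

The decision threshold applied to this score is what the EER reported in the abstract is tuned against: the operating point where false acceptances and false rejections are equally frequent.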