The paper presents a new concept of a computer user interface dedicated to disabled people. The concept is based on the recognition of ultrasound images of a selected region of the tongue surface. Several ultrasound images of the tongue surface have been stored and analyzed with respect to their usability for steering. Representative regions have been selected and the boundaries of tongue surface movement have been determined. A standard medical ultrasound sensor has been used as the input device. The digital sequence of image frames has been stored in the computer. The position of the tongue surface has been determined from the distance between the main echo and the ultrasound sensor. The tongue position signals will be used for speech synthesis control.
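The core measurement step, locating the main echo and converting its delay into a sensor-to-tongue distance, can be sketched as follows. This is only an illustrative reconstruction: the paper does not specify the echo-detection method or the assumed sound speed, so a simple peak search and a typical soft-tissue sound speed of 1540 m/s are assumed here, and all function and variable names are hypothetical.

```python
import numpy as np

SPEED_OF_SOUND = 1540.0  # m/s, typical value for soft tissue (assumption)

def echo_distance(a_scan, sample_rate_hz, speed=SPEED_OF_SOUND):
    """Estimate the distance from the sensor to the main reflecting surface.

    a_scan: 1-D array of echo amplitudes sampled at sample_rate_hz.
    The main echo is taken as the sample with the largest absolute
    amplitude; distance = (time of flight * speed) / 2, since the
    pulse travels to the surface and back.
    """
    idx = int(np.argmax(np.abs(a_scan)))
    time_of_flight = idx / sample_rate_hz
    return time_of_flight * speed / 2.0

# Synthetic example: one echo arriving 40 microseconds after emission,
# sampled at 50 MHz (sample index 2000).
fs = 50e6
signal = np.zeros(5000)
signal[2000] = 1.0
d = echo_distance(signal, fs)  # about 0.0308 m, i.e. roughly 3.1 cm
```

Tracking this distance frame by frame across the stored image sequence would yield the tongue position signal that the abstract proposes to feed into the speech synthesis controller.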