The article compares two approaches to the description of ultrasound vocal tract images for application in a “silent speech interface”: one based on tongue contour modeling, and a second, global coding approach in which images are projected onto a feature space of Eigentongues. A curvature-based lip profile feature extraction method is also presented. The extracted visual features are input to a neural network that learns the relation between vocal tract configuration and line spectrum frequencies (LSFs) in a one-hour speech corpus. An examination of the quality of the LSFs derived from the two approaches demonstrates that the Eigentongue approach is more efficient to implement and provides superior results under a normalized mean squared error criterion.
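To make the two key operations concrete, the following is a minimal sketch of an Eigentongue-style encoding (PCA over vectorized ultrasound frames, with new frames projected onto the leading eigenvectors) together with one common normalized mean squared error definition. The function names, the choice of 30 components, and the normalization by target variance are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def eigentongue_basis(frames: np.ndarray, n_components: int = 30):
    """Compute a PCA basis ("Eigentongues") from ultrasound frames.

    frames: (n_frames, n_pixels) array, one flattened image per row.
    Returns the mean image and the top n_components principal directions.
    (n_components = 30 is an illustrative choice, not from the paper.)
    """
    mean = frames.mean(axis=0)
    centered = frames - mean
    # Rows of vt are the principal directions, i.e. the Eigentongues.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_components]

def project(frame: np.ndarray, mean: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Encode one flattened frame as its Eigentongue coefficients."""
    return basis @ (frame - mean)

def normalized_mse(predicted: np.ndarray, target: np.ndarray) -> float:
    """MSE normalized by the target's variance; one common NMSE variant
    (the paper's exact normalization may differ)."""
    return float(np.mean((predicted - target) ** 2) / np.var(target))
```

In this global coding view, the coefficient vector returned by `project` would serve as the visual feature input to the neural network, in place of explicit tongue contour parameters.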