Most machine learning algorithms share the following drawback: they output only bare predictions but not the confidence in those predictions. In the 1960s, algorithmic information theory supplied universal measures of confidence, but these are, unfortunately, non-computable. In this paper we combine the ideas of algorithmic information theory with the theory of Support Vector machines to obtain practicable approximations to universal measures of confidence. We show that our approximations work well in some standard problems of pattern recognition.
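The abstract does not spell out the construction, but one plausible reading (an assumption on our part, not necessarily the paper's exact algorithm) is a transductive procedure: for each candidate label of a new example, refit an SVM on the training set extended with that labelled example, treat a margin-based score as a "strangeness" measure, and report the fraction of examples at least as strange as the new one as an approximate p-value. The helper `p_values` below is a hypothetical minimal sketch of this idea, using scikit-learn's SVC.

```python
import numpy as np
from sklearn.svm import SVC

def p_values(X_train, y_train, x_test, labels):
    """Hypothetical sketch: approximate p-value for each candidate label
    of x_test, using an SVM margin as the strangeness score."""
    ps = {}
    for lab in labels:
        # Extend the training set with the test example under this label.
        X = np.vstack([X_train, x_test])
        y = np.append(y_train, lab)
        clf = SVC(kernel="linear").fit(X, y)
        # Signed margin toward each example's assigned label; negating it
        # makes larger values mean "stranger" (misclassified or near boundary).
        margins = clf.decision_function(X)
        signs = np.where(y == clf.classes_[1], 1.0, -1.0)
        scores = -signs * margins
        # p-value: fraction of examples at least as strange as the test one.
        ps[lab] = np.mean(scores >= scores[-1])
    return ps

# Toy usage on two Gaussian blobs (illustrative data, not from the paper).
rng = np.random.default_rng(0)
X_train = np.vstack([rng.normal(-1, 1, (20, 2)), rng.normal(1, 1, (20, 2))])
y_train = np.array([0] * 20 + [1] * 20)
print(p_values(X_train, y_train, np.array([1.0, 1.0]), labels=(0, 1)))
```

On this reading, a prediction's confidence could be taken as one minus the second-largest p-value (how strongly all alternative labels are rejected), and the largest p-value indicates how credible the predicted label itself is; these definitions are our assumption about the intended measures, not a quotation from the paper.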