We present a method based on statistical properties of local image pixels for focusing attention on regions of text in arbitrary scenes, where the text plane is not necessarily fronto-parallel to the camera. This is particularly useful for Desktop or Wearable Computing applications. The statistical measures are chosen to reveal characteristic properties of text. We combine a number of localised measures using a neural network to classify each pixel as text or non-text. We demonstrate our results on typical images.
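The sketch below illustrates the general pipeline the abstract describes: compute localised statistical measures per pixel and combine them with a small neural network into a text/non-text decision. The particular measures (local mean, variance, edge density), window size, network architecture, and the synthetic training data are assumptions for illustration, not the paper's actual choices.

```python
import numpy as np
from scipy import ndimage
from sklearn.neural_network import MLPClassifier

def local_text_features(gray, size=7):
    """Per-pixel local statistics (illustrative choices, not the paper's exact measures)."""
    gray = gray.astype(np.float64)
    mean = ndimage.uniform_filter(gray, size=size)
    sq_mean = ndimage.uniform_filter(gray ** 2, size=size)
    variance = np.maximum(sq_mean - mean ** 2, 0.0)        # local contrast
    gx = ndimage.sobel(gray, axis=1)
    gy = ndimage.sobel(gray, axis=0)
    edge_density = ndimage.uniform_filter(np.hypot(gx, gy), size=size)
    # One feature row per pixel: shape (H*W, 3).
    return np.stack([mean, variance, edge_density], axis=-1).reshape(-1, 3)

# Hypothetical training data: a grayscale image and a binary text/non-text pixel mask.
rng = np.random.default_rng(0)
train_img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
train_mask = (rng.random((64, 64)) > 0.5).astype(int)

# Small feed-forward network combining the localised measures.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=300, random_state=0)
clf.fit(local_text_features(train_img), train_mask.ravel())

# Per-pixel text/non-text classification of a new image.
test_img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
text_map = clf.predict(local_text_features(test_img)).reshape(64, 64)
```

In practice the classifier would be trained on pixels labelled from real scene images rather than the synthetic arrays used here, and the resulting per-pixel map would be used to focus subsequent processing on candidate text regions.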