This paper introduces a new, efficient method for computing affine invariant features from gray-scale images. The method is based on a novel image transform that produces infinitely many distinct invariants and can be applied directly to isolated image patches without further segmentation. Among methods in this class, only the affine invariant moments match its low complexity, but they are known to have considerable weaknesses, including sensitivity to noise and occlusion. Our experiments show that the proposed method is more robust against these nonaffine distortions, which commonly arise during image acquisition, while in practice its computation time is comparable to that of the affine invariant moments. We also observe that a small subset of the new features already suffices for successful classification.
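For context on the baseline mentioned above, the affine invariant moments are built from central moments of the image intensity; the classical first invariant of Flusser and Suk is I1 = (mu20*mu02 - mu11^2) / mu00^4. The sketch below is an illustration of that baseline only, not of the transform proposed in this paper (whose definition is not given here); function names are chosen for illustration.

```python
import numpy as np

def central_moment(img, p, q):
    """Central moment mu_pq of a gray-scale image (2-D array)."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    m00 = img.sum()
    # Intensity centroid
    xc = (x * img).sum() / m00
    yc = (y * img).sum() / m00
    return ((x - xc) ** p * (y - yc) ** q * img).sum()

def affine_invariant_I1(img):
    """First Flusser-Suk affine moment invariant:
    I1 = (mu20*mu02 - mu11^2) / mu00^4."""
    mu00 = central_moment(img, 0, 0)
    mu20 = central_moment(img, 2, 0)
    mu02 = central_moment(img, 0, 2)
    mu11 = central_moment(img, 1, 1)
    return (mu20 * mu02 - mu11 ** 2) / mu00 ** 4
```

Because I1 is invariant under affine coordinate changes, transposing or flipping the image leaves its value unchanged, which gives a quick sanity check; the noise sensitivity criticized above stems from the high-order polynomial weighting of pixel coordinates in these moments.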