A new method for scale-aware saliency detection is introduced in this work. Scale determination is realized through a fast scale-space algorithm operating on color and texture. Scale information is fed back to a Discriminant Saliency engine by automatically tuning its center-surround parameters. Excellent fixation-prediction results are demonstrated on a public database of measured human fixations. Further evidence of the proposed algorithm’s performance is provided through an application to Frame Rate Up-Conversion (FRUC). The ability of the algorithm to detect salient objects at multiple scales enables class-leading performance both objectively, in terms of PSNR and SSIM, and subjectively. Finally, the inclusion of scale information dramatically reduces the need for operator tuning of saliency parameters. The proposed method is well-suited to any application requiring automatic saliency determination for images or video.
Natan Jacobson, Truong Q. Nguyen