We propose a clustering algorithm that exploits feature order preferences, i.e., statements of the form "feature s is more important than feature t." Our formulation incorporates such preferences into prototype-based clustering: the derived algorithm automatically learns a distortion measure, parameterized by feature weights, that respects the given preferences as far as possible. The method accommodates a broad family of distortion measures, including Bregman divergences, and even when generalized entropy is used as the regularization term, the subproblem of learning the feature weights remains a convex programming problem. Empirical results on several datasets demonstrate the effectiveness and potential of our method.
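
To make the setting concrete, here is a rough illustrative sketch (not the paper's actual formulation) of prototype-based clustering with learned feature weights: it alternates assignment, prototype, and weight updates under a weighted squared-Euclidean distortion with an entropy regularizer, and adds a hinge penalty (the `mu` term) that nudges the weights toward satisfying each preference pair `(s, t)`. The function name, the preference encoding, and the specific penalty are all assumptions made for illustration.

```python
import numpy as np

def cluster_with_preferences(X, k, prefs, lam=1.0, mu=5.0, iters=20):
    """Toy prototype-based clustering with learned feature weights.

    prefs is a list of pairs (s, t) meaning "feature s should be at
    least as important as feature t" (an illustrative encoding, not
    the paper's). Distortion: weighted squared Euclidean distance.
    The weight subproblem adds an entropy regularizer and a hinge
    penalty on violated preferences, solved by gradient steps on a
    softmax parameterization of the weights.
    """
    n, d = X.shape
    w = np.full(d, 1.0 / d)
    # Deterministic farthest-point initialization of the k prototypes.
    centers = X[[0]].astype(float)
    for _ in range(k - 1):
        dist = ((X[:, None, :] - centers[None]) ** 2 * w).sum(-1).min(1)
        centers = np.vstack([centers, X[dist.argmax()]])
    for _ in range(iters):
        # 1) Assign each point to its nearest prototype.
        d2 = ((X[:, None, :] - centers[None]) ** 2 * w).sum(-1)
        labels = d2.argmin(1)
        # 2) Update each prototype as its cluster mean.
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(0)
        # 3) Update weights: per-feature distortion D_j, minimizing
        #    sum_j w_j*D_j + lam*w_j*log(w_j) + mu*hinge(prefs).
        D = ((X - centers[labels]) ** 2).sum(0)
        theta = np.log(w)
        for _ in range(100):
            w = np.exp(theta - theta.max())
            w /= w.sum()
            grad = D + lam * (np.log(w) + 1.0)
            for s, t in prefs:
                if w[t] > w[s]:  # preference violated: push w[s] up
                    grad[t] += mu
                    grad[s] -= mu
            # Chain rule through the softmax parameterization.
            theta -= 0.1 * w * (grad - w @ grad)
        w = np.exp(theta - theta.max())
        w /= w.sum()
    return labels, centers, w

# Example: feature 0 carries the cluster structure, feature 1 is noise,
# and the preference (0, 1) says feature 0 should outweigh feature 1.
rng = np.random.default_rng(1)
X = np.column_stack([
    np.concatenate([rng.normal(-5.0, 0.3, 30), rng.normal(5.0, 0.3, 30)]),
    rng.normal(0.0, 0.5, 60),
])
labels, centers, w = cluster_with_preferences(X, k=2, prefs=[(0, 1)])
print(w)  # the informative feature 0 receives the larger weight
```

Because the entropy term keeps the weight subproblem convex, the inner weight update can equally be handed to an off-the-shelf convex solver; the gradient loop above is only the simplest self-contained choice.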