The typical difficulty of various NP-hard problems varies with simple parameters describing their structure. This behavior is largely independent of the search algorithm, but depends on the choice of problem ensemble. A given problem instance belongs to many different ensembles, so applying these observations to individual problems requires identifying which ensemble is most appropriate for predicting its search behavior, e.g., cost or solubility. To address this issue, we introduce a readily computable measure of randomness for search problems called "approximate entropy". This new measure is better suited to search than other approaches, such as algorithmic complexity and information entropy. Experiments with graph coloring and 3
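
To make the notion concrete, the sketch below computes the standard Pincus-style approximate entropy ApEn(m, r) of a numeric sequence; this is a minimal illustration only, and the paper's exact formulation for search problems (and its choice of the window length m and tolerance r) may differ. Lower values indicate a more regular, predictable sequence; higher values indicate more apparent randomness.

```python
import numpy as np

def approx_entropy(u, m=2, r=0.2):
    """Pincus-style approximate entropy ApEn(m, r) of a 1-D sequence u.

    ApEn = Phi_m(r) - Phi_{m+1}(r), where Phi_m averages the log of the
    fraction of length-m windows lying within tolerance r of each window.
    The defaults m=2, r=0.2 are illustrative assumptions, not the paper's.
    """
    u = np.asarray(u, dtype=float)
    N = len(u)

    def phi(m):
        # All overlapping windows of length m, shape (N - m + 1, m).
        x = np.array([u[i:i + m] for i in range(N - m + 1)])
        # Chebyshev (max-coordinate) distance between every pair of windows.
        d = np.max(np.abs(x[:, None, :] - x[None, :, :]), axis=2)
        # Fraction of windows within tolerance r of each window (self included).
        c = np.mean(d <= r, axis=1)
        return np.mean(np.log(c))

    return phi(m) - phi(m + 1)

# A periodic sequence scores near zero (regular); a random one scores higher.
rng = np.random.default_rng(0)
print(approx_entropy(np.tile([0, 1], 50)))       # ~0
print(approx_entropy(rng.integers(0, 2, 100)))   # ~log 2
```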