We consider the supervised learning of a binary classifier from noisy observations. We use smooth boosting to linearly combine abstaining hypotheses, each of which maps a subcube of the attribute space to one of the two classes. We introduce a new branch-and-bound weak learner that maximizes the agreement rate of each hypothesis. Dobkin et al. give an algorithm for maximizing agreement with real-valued attributes [9]. Our algorithm improves on the time complexity of Dobkin et al.'s as long as the data can be binarized so that the number of binary attributes is o(d log m), where m is the number of observations and d is the number of real-valued attributes. Furthermore, we fine-tune our branch-and-bound algorithm with a queuing discipline and an optimality gap to make it fast in practice. Finally, since logical patterns in Hammer et al.'s Logical Analysis of Data (LAD) framework [8, 6] are equivalent to abstaining monomial hypotheses, any boosting algorithm can be combined with our proposed weak learner...
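
To make the search concrete, the following is a minimal Python sketch of a branch-and-bound weak learner of this kind: it grows monomials over binarized attributes, explores partial monomials in best-first order (the queuing discipline), prunes with an upper bound on achievable weighted agreement, and stops early once the incumbent is within a user-set optimality gap. The function name best_monomial, the particular bound, and the best-first ordering are illustrative assumptions, not the paper's exact procedure.

    import heapq
    import itertools

    def best_monomial(X, y, w, gap=0.0):
        """Best-first branch-and-bound over monomials (subcubes of the binary
        attribute space). The returned hypothesis predicts one class on the
        subcube it covers and abstains elsewhere.

        X: sequences of 0/1 attribute values, y: labels in {-1, +1},
        w: non-negative example weights (e.g. supplied by the booster),
        gap: stop once the incumbent is within `gap` of the best bound.
        Returns (agreement, literals, predicted class), where literals is a
        list of (attribute index, required value) pairs defining the subcube.
        """
        n_attr = len(X[0])

        def score_and_bound(covered):
            wp = sum(w[i] for i in covered if y[i] == +1)
            wn = sum(w[i] for i in covered if y[i] == -1)
            # Predicting the heavier class on this subcube gives weighted
            # agreement |wp - wn|. Restricting the subcube further can only
            # remove points, so max(wp, wn) upper-bounds every descendant.
            return abs(wp - wn), max(wp, wn), (+1 if wp >= wn else -1)

        root = list(range(len(X)))
        s, b, c = score_and_bound(root)
        best = (s, [], c)
        tie = itertools.count()                # unique tie-breaker for the heap
        heap = [(-b, next(tie), [], root)]     # max-heap on the upper bound
        while heap:
            neg_b, _, lits, covered = heapq.heappop(heap)
            if -neg_b - best[0] <= gap:        # optimality gap reached: stop
                break
            start = lits[-1][0] + 1 if lits else 0
            for j in range(start, n_attr):     # extend by one fresh literal
                for v in (0, 1):
                    sub = [i for i in covered if X[i][j] == v]
                    if not sub:
                        continue
                    s, b, c = score_and_bound(sub)
                    if s > best[0]:
                        best = (s, lits + [(j, v)], c)
                    if b - best[0] > gap:      # prune dominated branches
                        heapq.heappush(heap, (-b, next(tie), lits + [(j, v)], sub))
        return best

Called with the booster's current example weights, the returned monomial defines an abstaining hypothesis over its subcube; setting a positive gap trades optimality of the agreement rate for speed.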