Two recently developed methods for extracting crisp logical rules from neural networks trained with the backpropagation algorithm are compared. Both methods impose constraints on the structure of the network by adding regularization terms to the error function. Networks with a minimal number of connections are created, leading to a small number of crisp logical rules. The two methods are compared on the Iris and mushroom classification problems, generating the simplest logical description of these data published so far.
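A minimal sketch of the kind of regularized error function the abstract refers to: a standard weight-decay term prunes small connections toward zero, while a second term pushes the surviving weights toward the discrete values {-1, 0, +1} so the trained network can be read as crisp logical rules. The specific penalty form and the coefficients `lam1`, `lam2` below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def regularized_error(error, weights, lam1=1e-4, lam2=1e-4):
    """Augment a base error with two assumed regularization terms:
    - lam1 * sum(w^2): weight decay, prunes small weights toward 0
    - lam2 * sum(w^2 (w-1)^2 (w+1)^2): pushes remaining weights to +/-1
    """
    w = np.asarray(weights, dtype=float)
    decay = lam1 * np.sum(w ** 2)
    discretize = lam2 * np.sum(w ** 2 * (w - 1.0) ** 2 * (w + 1.0) ** 2)
    return error + decay + discretize

# Weights already in {-1, 0, +1} incur no discretization penalty,
# so with decay switched off the error is unchanged.
print(regularized_error(0.5, [-1.0, 0.0, 1.0], lam1=0.0))  # -> 0.5
```

Minimizing such a combined objective during backpropagation drives most connections to zero (removable) and the rest to binary values, which is what allows a small set of crisp rules to be read off the trained network.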