Laubenbacher and Stigler recently developed a new algorithm for reverse engineering biochemical networks. It is based on methods from computational algebra and finds the most parsimonious models for a given data set. We derive mathematically rigorous estimates of the expected amount of data this algorithm needs to find the correct model. In particular, we demonstrate that for one type of input parameter (graded term orders) the expected data requirements scale polynomially with the number n of chemicals in the network, while for another type of input parameter (randomly chosen lex orders) they scale exponentially in n. We also show that for a modification of the algorithm, the expected data requirements scale as the logarithm of n.
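Since the notion of a most parsimonious model is central to these estimates, the following toy sketch may help fix ideas. It is our illustration, not the Laubenbacher-Stigler Gröbner-basis method: for n = 2 variables over F_2 it brute-forces the smallest set of monomials consistent with the observed data, showing how the recovered model depends on how much data is available. All names and the search strategy are assumptions of this sketch.

```python
from itertools import combinations

# Toy illustration (NOT the Laubenbacher-Stigler algorithm): over F_2,
# every function F_2^n -> F_2 is a polynomial in square-free monomials.
# "Most parsimonious" here means a model with the fewest monomials
# that reproduces all observed input-output pairs.

def monomials(n):
    # All square-free monomials, each encoded as a tuple of variable indices;
    # the empty tuple encodes the constant term 1.
    return [m for k in range(n + 1) for m in combinations(range(n), k)]

def evaluate(model, point):
    # Evaluate a sum of monomials at a 0/1 point, with arithmetic mod 2.
    return sum(all(point[i] for i in mono) for mono in model) % 2

def most_parsimonious(data, n):
    # data: list of (input tuple, output bit) pairs.
    # Search models in order of increasing number of monomials.
    monos = monomials(n)
    for size in range(len(monos) + 1):
        for model in combinations(monos, size):
            if all(evaluate(model, s) == t for s, t in data):
                return model
    return None

# Observations consistent with f(x1, x2) = x1*x2, but incomplete:
data = [((0, 0), 0), ((1, 0), 0), ((1, 1), 1)]
print(most_parsimonious(data, 2))  # -> ((1,),), i.e. the simpler model x2

# Adding the fourth input state pins down the correct model:
data.append(((0, 1), 0))
print(most_parsimonious(data, 2))  # -> ((0, 1),), i.e. x1*x2
```

With only three of the four possible input states observed, the smallest consistent model is x2; only after the fourth observation does the search recover x1*x2. How much data is needed, in expectation, before the correct model is found is exactly the kind of question the estimates above address.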