This paper explores the internal representation power of product units [1] acting as the functional nodes in the hidden layer of a multi-layer feedforward network. Interesting properties that arise with binary inputs provide insight into the superior computational power of the product unit. Using the binary computation problems of symmetry and parity as illustrative examples, we show that learning arbitrarily complex internal representations is more readily achieved with product units than with traditional summing units.

Key words: product unit, internal representations, recurrent neural networks, perceptrons, backpropagation training.
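For context, a product unit [1] computes its output as a product of its inputs raised to trainable powers rather than as a weighted sum. The minimal sketch below states this standard definition; the parity illustration that follows is an assumption added here for clarity, not a claim quoted from the abstract.

% Product unit (Durbin & Rumelhart [1]): inputs are raised to real-valued,
% trainable exponents and multiplied, in contrast to the weighted sum of a
% conventional summing unit.
\[
  y \;=\; \prod_{i=1}^{N} x_i^{\,w_i}
    \;=\; \exp\!\Big(\sum_{i=1}^{N} w_i \ln x_i\Big)
\]
% Hedged illustration: for binary inputs x_i in {-1,+1} with all weights
% w_i = 1, the unit reduces to y = \prod_i x_i, i.e. the parity of the input
% vector, a function that no single summing (threshold) unit can compute.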