Ideally, computer pattern recognition systems should be insensitive to scaling, translation, distortion, and rotation, and many neural network models have been proposed toward this goal. The Neocognitron is a multi-layered neural network model for pattern recognition introduced by Fukushima in the early 1980s. After supervised learning, it can recognise input patterns without being affected by distortion, change in size, or shift in position; however, it was not designed to handle rotated patterns. This paper examines the layers of the Neocognitron to emphasise their role in Fukushima's theory, and proposes an additional filter layer to deal with rotated patterns.

Key Words: Pattern recognition, character recognition, Neocognitron, neural network.