One of the primary advantages of artificial neural networks is their inherent ability to perform massively parallel, nonlinear signal processing. However, the asynchronous dynamics governing the evolution of such networks can lead to the emergence of computational chaos, which impedes the efficient retrieval of the information stored in the system's attractors. In this paper, we discuss the implications of chaos for concurrent asynchronous computation and present a methodology that prevents its emergence. Our results are illustrated on a widely used neural network model.
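To make the notion of attractor retrieval under asynchronous dynamics concrete, the following is a minimal sketch, assuming the "widely used" model is a Hopfield-style network with Hebbian storage and single-unit asynchronous updates; this is an illustrative assumption, not the paper's method.

```python
# Minimal sketch (assumed Hopfield-style network, not the paper's method):
# asynchronous retrieval of a stored pattern from a corrupted initial state.
import numpy as np

rng = np.random.default_rng(0)

# Store one bipolar pattern with the Hebbian rule (zero diagonal).
N = 64
pattern = rng.choice([-1, 1], size=N)
W = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(W, 0.0)

# Start from a corrupted copy of the stored pattern.
state = pattern.copy()
flip = rng.choice(N, size=N // 4, replace=False)
state[flip] *= -1

# Asynchronous dynamics: update one randomly chosen unit at a time.
for _ in range(20 * N):
    i = rng.integers(N)
    state[i] = 1 if W[i] @ state >= 0 else -1

overlap = (state @ pattern) / N  # 1.0 means the stored attractor was retrieved
print(f"overlap with stored pattern: {overlap:.2f}")
```

With symmetric weights and one unit updated at a time, this serial scheme settles into a fixed point; the chaotic behavior addressed in the paper arises in concurrent asynchronous computation, where multiple units update simultaneously on the basis of possibly stale states.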