We propose a new neural network architecture, called Simple Recurrent Temporal-Difference Networks (SR-TDNs), that learns to predict future observations in partially observable environments. SR-TDNs incorporate the structure of simple recurrent neural networks (SRNs) into temporal-difference (TD) networks, yielding a proto-predictive representation of states. Although they deviate from the principle of predictive representations, namely grounding state representations in observations, they follow the same learning strategy as TD networks, i.e., applying TD learning to general predictions. Simulation experiments showed that SR-TDNs can correctly represent states even with an incomplete set of core tests (question networks), and consequently that SR-TDNs have better on-line learning performance than TD networks in various environments.
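As a rough illustration of the idea summarized above, the following minimal Python/NumPy sketch combines an SRN-style recurrent state with TD-style training of predictions. Everything here is an assumption made for illustration, not taken from the paper: the toy ring-world observation stream, the network sizes, the trivial one-step question network (each prediction targets the next observation), and the choice to update only the output weights; the paper's question networks and training procedure are more general.

    import numpy as np

    rng = np.random.default_rng(0)

    n_obs, n_hidden, n_pred = 4, 8, 4   # illustrative sizes only
    W_in = rng.normal(0.0, 0.1, (n_hidden, n_obs + n_hidden))   # SRN recurrence
    W_out = rng.normal(0.0, 0.1, (n_pred, n_hidden))            # prediction layer
    alpha = 0.1                                                 # learning rate

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def observation_stream(steps=1000):
        # Toy stand-in environment: a deterministic 4-state ring
        # emitting one-hot observations.
        s = 0
        for _ in range(steps):
            yield np.eye(n_obs)[s]
            s = (s + 1) % n_obs

    h = np.zeros(n_hidden)   # SRN context layer: the proto-predictive state
    prev = None              # (state, predictions) carried from the last step

    for obs in observation_stream():
        # SRN update: new state from current observation plus old state.
        h = sigmoid(W_in @ np.concatenate([obs, h]))
        # Answers to the (here trivial, one-step) question network.
        y = sigmoid(W_out @ h)
        if prev is not None:
            h_old, y_old = prev
            target = obs                          # each prediction targets the next observation
            delta = target - y_old                # TD-style prediction error
            grad = delta * y_old * (1.0 - y_old)  # sigmoid derivative
            # On-line semi-gradient update; for brevity only the output
            # weights are trained here, whereas the full architecture
            # would also adapt the recurrent weights W_in.
            W_out += alpha * np.outer(grad, h_old)
        prev = (h, y)

The key difference from a plain TD network is visible in the state update: the next state is computed from the previous hidden state rather than from the previous predictions, which is what lets the representation remain correct even when the set of core tests is incomplete.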