Bayesian estimation in nonlinear stochastic dynamical systems has long been studied. Among other solutions, Particle Filtering (PF) algorithms propagate in time a Monte Carlo (MC) approximation of the a posteriori filtering measure. However, a drawback of classical PF algorithms is that the optimal conditional importance distribution (CID) is often difficult (or even impossible) to compute and to sample from. As a consequence, suboptimal sampling strategies have been proposed in the literature. In this paper we bypass this difficulty by considering instead the prediction sequential importance sampling (SIS) problem; the filtering MC approximation is then obtained as a byproduct. The advantage of this prediction-PF method is that it combines optimality and simplicity: for the prediction problem, the optimal CID happens to be the prior transition kernel of the underlying Markov chain, from which it is often easy to sample.
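To make the mechanism concrete, below is a minimal sketch (not the paper's implementation) of a prediction-based particle filter in Python. The function and argument names (prediction_pf, x0_sample, trans_sample, lik_pdf) are ours for illustration, and the multinomial resampling step is an assumption; the paper may use a different scheme. The sketch only illustrates the two ideas stated above: particles are propagated with the prior transition kernel (the optimal CID for the prediction problem), and the filtering approximation is recovered as a byproduct by reweighting the predictive particles with the likelihood.

```python
import numpy as np

def prediction_pf(y, x0_sample, trans_sample, lik_pdf, N=1000, rng=None):
    """Sketch of a prediction-based SIS particle filter.

    y            : observations y_0, ..., y_{T-1}
    x0_sample    : (N, rng) -> N samples from the prior p(x_0)
    trans_sample : (x, rng) -> one-step samples from the transition p(x_{n+1} | x_n)
    lik_pdf      : (y_n, x) -> likelihood p(y_n | x), evaluated pointwise
    Returns arrays of filtering and one-step-ahead predictive posterior means.
    """
    rng = np.random.default_rng() if rng is None else rng
    T = len(y)
    # Weighted particles targeting the predictive distribution p(x_n | y_{0:n-1});
    # at n = 0 this is just the prior p(x_0).
    x = x0_sample(N, rng)
    w = np.full(N, 1.0 / N)
    filt_means, pred_means = [], []

    for n in range(T):
        # Filtering approximation as a byproduct: reweight the predictive
        # particles by the likelihood of the new observation (no extra sampling).
        wf = w * lik_pdf(y[n], x)
        wf /= wf.sum()
        filt_means.append(np.sum(wf * x))

        # Optional resampling to mitigate weight degeneracy (multinomial here).
        idx = rng.choice(N, size=N, p=wf)
        x = x[idx]
        w = np.full(N, 1.0 / N)

        # Prediction step: the optimal CID for the prediction problem is the
        # prior transition kernel, which is typically easy to sample from.
        x = trans_sample(x, rng)
        pred_means.append(np.sum(w * x))

    return np.array(filt_means), np.array(pred_means)
```

The design point illustrated here is that no problem-specific importance distribution has to be designed or approximated: sampling reduces to drawing from the model's own transition kernel, while the observation enters only through a weight update.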