High-level controllers that operate robots in dynamic, uncertain domains face at least two reasoning tasks that deal with the effects of noisy sensors and effectors: they must (a) project the effects of a candidate plan and (b) update their beliefs during the online execution of a plan. In this paper, we show how the pGOLOG framework, which in its original form only accounted for the projection of high-level plans, can be extended to reason about how the robot’s beliefs evolve during the online execution of a plan. pGOLOG, an extension of the high-level programming language GOLOG, allows the specification of probabilistic beliefs about the state of the world and the representation of sensors and effectors with uncertain, probabilistic outcomes. As an application of belief update, we introduce belief-based programs, GOLOG-style programs whose tests appeal to the agent’s beliefs at execution time.
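As an informal illustration of a belief-based program (the concrete syntax and all names below, such as graspCup, senseGrasp, and the 0.95 threshold, are assumptions for exposition and not taken from the paper), such a program interleaves noisy acting and sensing with tests on the agent's degree of belief at execution time:

    proc(graspCup,
         [ while(bel(holding(cup)) < 0.95,      % test the run-time belief, not the true world state
                 [ grasp(cup),                  % noisy effector: the grasp may fail
                   senseGrasp ]),               % noisy sensor: its reading updates the belief
           if(bel(holding(cup)) >= 0.95,        % branch on the belief reached so far
              deliver(cup),
              reportFailure) ]).

The point of the sketch is only that the conditions appeal to bel(·), the agent's probabilistic belief as it evolves during online execution, rather than to the actual state of the world.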