In this work we extend the approach of Dean, Kaelbling, Kirman, and Nicholson to planning under time constraints in stochastic domains to handle more complicated scheduling problems. In scheduling problems the sources of complexity stem not only from large state spaces but also from large action spaces. For these problems it is no longer tractable to compute optimal policies for restricted state spaces via policy iteration. Instead, we borrow from operations research, applying bottleneck-centered scheduling heuristics to improve initial policies, and we use Monte Carlo simulation to selectively construct partial policies in large state spaces. Additionally, we employ a variant of Drummond's situated control rules to constrain the space of possible actions.
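To make the role of Monte Carlo simulation concrete, the following is a minimal sketch (not the paper's actual algorithm) of rollout-based policy improvement: the value of each candidate action is estimated by simulating the stochastic domain forward under a base (initial) policy, and the action with the best sampled return is chosen. All names (`mc_rollout_value`, `greedy_action`, the toy chain domain) are hypothetical illustrations.

```python
import random

def mc_rollout_value(state, policy, step, horizon, trials, rng):
    # Estimate the value of `state` by simulating `trials` trajectories
    # of length `horizon`, choosing actions with the base `policy`.
    total = 0.0
    for _ in range(trials):
        s, ret = state, 0.0
        for _ in range(horizon):
            s, r = step(s, policy(s), rng)
            ret += r
        total += ret
    return total / trials

def greedy_action(state, actions, policy, step, horizon, trials, rng):
    # One-step lookahead: try each action once per trial, then follow
    # the base policy; return the action with the best sampled return.
    best_a, best_v = None, float("-inf")
    for a in actions:
        v = 0.0
        for _ in range(trials):
            s2, r = step(state, a, rng)
            v += r + mc_rollout_value(s2, policy, step, horizon - 1, 1, rng)
        v /= trials
        if v > best_v:
            best_a, best_v = a, v
    return best_a

# Toy stochastic chain: states 0..3, reward 1 for reaching state 3;
# each move succeeds with probability 0.8, otherwise the state is unchanged.
def step(s, a, rng):
    ns = min(3, max(0, s + a)) if rng.random() < 0.8 else s
    return ns, (1.0 if ns == 3 else 0.0)

rng = random.Random(0)
base_policy = lambda s: -1          # deliberately poor initial policy
chosen = greedy_action(2, [-1, +1], base_policy, step,
                       horizon=3, trials=200, rng=rng)
```

Even with a poor base policy, one-step Monte Carlo lookahead selects the action that moves toward the rewarding state; in large action spaces the same scheme is only tractable once heuristics prune the set of candidate actions, which is the role the scheduling heuristics and situated control rules play above.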