Dynamic Power Management (DPM) is a design methodology that reduces the power consumption of electronic systems by selectively shutting down idle system resources. The effectiveness of a power management scheme depends critically on accurate modeling of service requests and on the computation of the control policy. In this work, we present an online adaptive DPM scheme for systems that can be modeled as finite-state Markov chains. Online adaptation is required to cope with initially unknown or nonstationary workloads, which are very common in real-life systems. Our approach starts from exact policy optimization techniques for a known and stationary stochastic environment and extends optimum stationary control policies to handle the unknown and nonstationary stochastic environments encountered in practical applications. We introduce two workload learning techniques based on sliding windows and study their properties. Furthermore, a two-dimensional interpolation technique is introduc...
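
As an informal illustration only (not the exact formulation used in this work), the following Python sketch shows how a sliding-window estimator might track request-arrival statistics online and drive a simple shutdown decision for an idle resource. The class name WindowEstimator, the binary request model, the window length, and the threshold value are assumptions introduced solely for this example.

    # Illustrative sketch: sliding-window estimation of request statistics
    # for an adaptive DPM controller (names and parameters are assumptions).
    from collections import deque


    class WindowEstimator:
        """Estimate the probability that a service request arrives in the
        next time slot from the most recent `window` observations."""

        def __init__(self, window: int = 64):
            self.history = deque(maxlen=window)  # 1 = request arrived, 0 = idle slot

        def observe(self, request_arrived: bool) -> None:
            self.history.append(1 if request_arrived else 0)

        def arrival_probability(self) -> float:
            # Until the window has at least one sample, fall back to a neutral prior.
            if not self.history:
                return 0.5
            return sum(self.history) / len(self.history)


    def choose_action(p_arrival: float, shutdown_threshold: float = 0.2) -> str:
        """Toy decision rule: shut the resource down only when the estimated
        arrival probability is low (threshold chosen arbitrarily here)."""
        return "shutdown" if p_arrival < shutdown_threshold else "stay_on"


    if __name__ == "__main__":
        est = WindowEstimator(window=16)
        # Simulated trace: a burst of requests followed by a long idle stretch.
        trace = [1] * 8 + [0] * 24
        for slot, arrived in enumerate(trace):
            est.observe(bool(arrived))
            p = est.arrival_probability()
            print(slot, round(p, 3), choose_action(p))

In this toy setting, the window length controls the trade-off between adaptation speed and estimation noise: a short window tracks nonstationary workloads quickly but yields noisy estimates, while a long window gives smoother estimates that lag behind workload changes.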