In this paper we study the online learning problem involving rested and restless multiarmed bandits with multiple plays. The system consists of a single player/user and a set of K finite-state discrete-time Markov chains (arms) with unknown state spaces and statistics. At each time step the player can play M (M ≤ K) arms. The objective of the user is to decide, at each step, which M of the K arms to play over a sequence of trials so as to maximize its long-term reward. The restless multiarmed bandit is particularly relevant to the application of opportunistic spectrum access (OSA), where a (secondary) user has access to a set of K channels, each with a time-varying condition as a result of random fading and/or certain primary users' activities. We first show that a logarithmic regret algorithm exists for the rested multiarmed bandit problem. We then construct an algorithm for the restless bandit problem that utilizes regenerative cycles of a Markov chain and computes a sample mean b...