—We consider opportunistic communications over multiple channels where the states ("good" or "bad") of the channels evolve as independent and identically distributed Markov processes. In each time slot, a user with limited sensing and access capability chooses one channel to sense and subsequently access based on the sensed channel state. A reward is obtained when the user senses and accesses a "good" channel. The objective is to design the optimal channel selection policy that maximizes the expected reward accrued over time. This problem can be formulated as a Partially Observable Markov Decision Process (POMDP) or a restless multi-armed bandit process, for which optimal solutions are generally intractable. We show in this paper that the myopic policy, which has a simple and robust structure, achieves optimality under certain conditions. This result finds applications in opportunistic communications in fading environments, cognitive radio networks for spectrum overlay...
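To make the setup concrete, below is a minimal simulation sketch of the myopic policy under the two-state (Gilbert-Elliott) channel model the abstract describes: each channel transitions from "bad" to "good" with probability p01 and stays "good" with probability p11, the user tracks a belief (probability of being "good") for each channel, and myopically senses the channel with the highest current belief. The symbol names, parameter values, and function interface are illustrative assumptions, not taken from the paper.

```python
import random

def simulate_myopic(num_channels=5, p01=0.3, p11=0.7, horizon=10_000, seed=0):
    """Sketch: average per-slot reward of myopic sensing over i.i.d.
    two-state Markov channels (hypothetical parameters, for illustration)."""
    rng = random.Random(seed)
    # True channel states (hidden from the user): True = "good".
    states = [rng.random() < 0.5 for _ in range(num_channels)]
    # Initial belief: stationary probability of the "good" state.
    beliefs = [p01 / (p01 + 1 - p11)] * num_channels
    reward = 0
    for _ in range(horizon):
        # Myopic policy: sense the channel most likely to be "good" now.
        k = max(range(num_channels), key=lambda i: beliefs[i])
        if states[k]:
            reward += 1  # sensed and accessed a "good" channel
        # Belief update for the next slot: unobserved channels propagate
        # one step through the Markov chain; the sensed channel's state
        # is now known, so its belief is just a one-step transition.
        beliefs = [p11 * b + p01 * (1 - b) for b in beliefs]
        beliefs[k] = p11 if states[k] else p01
        # Channels evolve independently between slots.
        states = [rng.random() < (p11 if s else p01) for s in states]
    return reward / horizon

print(simulate_myopic())
```

The defaults satisfy p11 > p01 (positively correlated channel states), the regime in which a simple "stay on a good channel, switch off a bad one" structure emerges; they are placeholders chosen only to make the sketch runnable.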