We consider a class of restless multi-armed bandit problems that arises in multi-channel opportunistic communications, where channels are modeled as independent and stochastically identical Gilbert-Elliot channels and channel state observations are subject to errors. We show that the myopic channel selection policy has a semi-universal structure that obviates the need to know the Markovian transition probabilities of the channel states. Based on this semi-universal structure, we establish closed-form lower and upper bounds on the maximum throughput (i.e., average reward) achieved by the myopic policy. Furthermore, we characterize the approximation factor of the myopic policy by considering a genie-aided system.
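To make the semi-universal structure concrete, the following is a minimal simulation sketch, not taken from the paper: assuming the positively correlated case ($p_{11} \ge p_{01}$) and a simple miss-detection observation-error model, the myopic policy reduces to maintaining an ordered list of channels, sensing the channel at the head, and moving a channel to the tail after an unsuccessful observation. The transition probabilities appear only in the simulated channel dynamics, never in the policy itself, which is what "semi-universal" refers to. All parameter names and the error model here are illustrative.

```python
import random
from collections import deque

def myopic_policy_throughput(num_channels=4, horizon=10_000,
                             p11=0.8, p01=0.2, p_miss=0.1, seed=0):
    """Sketch of the semi-universal myopic policy for independent,
    stochastically identical Gilbert-Elliot channels (assuming the
    positively correlated case p11 >= p01): keep channels in an
    ordered list, sense the head, and demote a channel to the tail
    after an unsuccessful observation. The policy never consults
    p11 or p01; they drive only the simulated channel dynamics."""
    rng = random.Random(seed)
    # True channel states (True = good, False = bad), random start.
    states = [rng.random() < 0.5 for _ in range(num_channels)]
    order = deque(range(num_channels))  # head = channel sensed next
    reward = 0

    for _ in range(horizon):
        # Each channel evolves as an independent two-state Markov chain.
        states = [rng.random() < (p11 if s else p01) for s in states]

        ch = order[0]  # myopic choice: head of the ordered list
        # Imperfect observation: a good channel may be missed with
        # probability p_miss (illustrative; the paper's model may differ).
        observed_good = states[ch] and rng.random() >= p_miss
        if observed_good:
            reward += 1       # successful transmission earns unit reward
        else:
            order.rotate(-1)  # move the head channel to the tail

    return reward / horizon   # empirical average reward (throughput)

print(myopic_policy_throughput())
```

Running the sketch gives an empirical average reward that can be compared against closed-form bounds of the kind the paper derives; the queue-based update is the only state the policy maintains.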