Current trends in processor design are pointing to deeper and wider pipelines and superscalar architectures. The efficient use of these resources requires speculative execution, a technique whereby the processor continues executing the predicted path of a branch before the branch condition is resolved. In this paper, we investigate the implications of speculative execution for instruction cache performance. We explore policies for managing instruction cache misses, ranging from aggressive policies (always fetch on the speculative path) to conservative ones (wait until branches are resolved). We test these policies and their interaction with next-line prefetching by simulating their effects on instruction caches with varying architectural parameters. Our results suggest that an aggressive policy combined with next-line prefetching is best for small latencies, while more conservative policies are preferable for large latencies.
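To make the policy spectrum concrete, the following is a minimal sketch of the miss-handling decision and of next-line prefetching; the names `Policy`, `handle_icache_miss`, and `next_line_prefetch` are illustrative assumptions, not taken from the paper's simulator.

```python
from enum import Enum

class Policy(Enum):
    AGGRESSIVE = 1    # always fetch on the speculative path
    CONSERVATIVE = 2  # stall until the branch is resolved

def handle_icache_miss(policy, on_speculative_path, branch_resolved,
                       fetch_line, stall):
    """Decide how to handle an instruction-cache miss that occurs while
    fetching down a (possibly speculative) path."""
    if not on_speculative_path or branch_resolved:
        fetch_line()          # non-speculative miss: always service it
    elif policy is Policy.AGGRESSIVE:
        fetch_line()          # fetch even though the path may be squashed
    else:
        stall()               # conservative: wait for branch resolution

def next_line_prefetch(cache, line_addr, line_size, fetch_line):
    """Optionally combined with either policy: prefetch the sequential
    successor of the line just fetched if it is not already cached."""
    nxt = line_addr + line_size
    if nxt not in cache:
        fetch_line(nxt)
```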