Recent studies of program execution behavior show that a large fraction of execution time is spent in small, frequently executed regions of code. Whereas adaptive cache management systems allocate cache resources based on observed access patterns, this paper presents a method that uses compiler analysis to manage critical processor resources. With the addition of new architectural mechanisms for directing the use of instruction and data cache resources, the compiler can protect the most active regions of program execution from cache contention. As a result, overall program performance can be improved either by selectively granting each dynamic region a priority level for using cache and memory resources or by providing active regions with dedicated cache structures.
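As a rough illustration of the idea, the sketch below uses hypothetical compiler-inserted cache-priority hints; the intrinsic name, priority levels, and code are invented for illustration and are not part of any real ISA or of the specific mechanism proposed here. It shows how a hot loop identified by the compiler might be granted elevated cache priority while cold initialization code is marked for early eviction.

    /* Hypothetical illustration only: the intrinsic below stands in for a
     * compiler-inserted cache-control annotation; it is not a real API. */

    #define CACHE_PRIORITY_LOW   0   /* evict-first: cold, rarely executed region */
    #define CACHE_PRIORITY_HIGH  3   /* protect from eviction: hot region */

    /* Stand-in for the architectural hint the compiler would emit around a region. */
    static inline void __region_cache_priority(int level) { (void)level; }

    void process(float *a, const float *b, int n)
    {
        /* Cold, one-time initialization: low priority, so its lines do not
         * displace cache lines belonging to the hot loop below. */
        __region_cache_priority(CACHE_PRIORITY_LOW);
        for (int i = 0; i < n; i++)
            a[i] = 0.0f;

        /* Hot region identified by static analysis or profiling: give its
         * instructions and data preferential treatment in the cache. */
        __region_cache_priority(CACHE_PRIORITY_HIGH);
        for (int iter = 0; iter < 1000; iter++)
            for (int i = 0; i < n; i++)
                a[i] += b[i] * b[i];

        /* Restore default behavior after the protected region. */
        __region_cache_priority(CACHE_PRIORITY_LOW);
    }

In such a scheme the annotations would be generated automatically from the compiler's region analysis rather than written by the programmer; the alternative described above would instead steer each region's accesses to a dedicated cache structure.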