Caches play an important role in embedded systems by bridging the performance gap between high-speed processors and slow memory. At the same time, caches introduce imprecision into Worst-Case Execution Time (WCET) estimation due to unpredictable access latencies. Modern embedded processors often include cache locking mechanisms for better timing predictability. As the cache contents are statically known, memory access latencies become predictable, leading to precise WCET estimates. Moreover, by carefully selecting the memory blocks to be locked, the WCET estimate can be reduced compared to cache modeling without locking. Existing static instruction cache locking techniques strive to lock the entire cache to minimize the WCET. We observe that such aggressive locking may have a negative impact on the overall WCET, as some memory blocks with predictable access behavior get excluded from the cache. We introduce a partial cache locking mechanism that has the flexibility to lock only a fraction of the cache ...