: This paper presents a Least Popularly Used (LPU) buffer cache algorithm that exploits both the temporal locality and the content locality of I/O requests. Popular data blocks, which are not only accessed frequently but also identical or similar in content to other blocks being accessed, are selected as reference blocks. Fast delta compression and decompression are used to satisfy as many I/O requests as possible from the popular reference blocks together with small deltas kept inside the buffer cache. The popularity of a reference block is calculated based on a statistical analysis of data content and access frequency. A prototype of LPU has been implemented as a new cache layer for the Kernel Virtual Machine (KVM) on Linux. Experimental results show that LPU is effective for a variety of workloads, with a maximum speedup of over 300% compared with LRU.
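
To make the popularity notion concrete, below is a minimal C sketch, not the paper's implementation, of how a reference block's popularity could combine its own access count with the accesses served indirectly through blocks stored as small deltas against it. The structure, field names, and the simple additive weighting are illustrative assumptions only.

```c
#include <stdio.h>
#include <stddef.h>

#define MAX_ASSOCIATED 8

/* A cached reference block that other, similar blocks reference via deltas. */
struct ref_block {
    unsigned long self_accesses;                  /* accesses to this block itself      */
    size_t n_assoc;                               /* blocks stored as deltas against it */
    unsigned long assoc_accesses[MAX_ASSOCIATED]; /* accesses to those delta blocks     */
};

/*
 * Popularity = accesses served directly by the block plus accesses served
 * indirectly through its deltas; an LPU-style cache would evict the block
 * with the lowest score (eviction loop omitted in this sketch).
 */
static unsigned long popularity(const struct ref_block *b)
{
    unsigned long score = b->self_accesses;
    for (size_t i = 0; i < b->n_assoc; i++)
        score += b->assoc_accesses[i];
    return score;
}

int main(void)
{
    /* A block accessed 10 times that also backs three content-similar blocks. */
    struct ref_block b = { 10, 3, { 5, 2, 7 } };
    printf("popularity = %lu\n", popularity(&b)); /* 10 + 5 + 2 + 7 = 24 */
    return 0;
}
```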