Simultaneous Multithreaded (SMT) processors improve instruction throughput by fetching and issuing instructions from several threads in a single cycle. As the number of running threads increases, single-thread performance degrades steadily. One main cause is contention among the threads for the limited cache resources, which dramatically increases each thread's cache miss count. In this paper, we propose an effective overlapping cache storage structure for SMT processors. The key idea is to encode the value of a datum with a few bits: if the value is a frequent value, it can be read or written through the encoding scheme and a table of frequent values. All frequent values are encoded in a fixed, predetermined way. The bytes of a cache line that hold frequent values can then be reused to store other values at the same positions of a different line that maps to the same physical hardware. We find that this ...
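To make the encoding idea concrete, below is a minimal sketch in C of how a frequent value table (FVT) lookup might work. The table contents, the table size, and the 3-bit code width are illustrative assumptions, not the parameters of the structure proposed in the paper. A value found in the table is represented by its short code, which is what frees the corresponding bytes of a cache line to hold data from another line mapped to the same physical storage; a value not in the table falls back to ordinary storage.

```c
/* Minimal sketch of frequent-value encoding with a small frequent value
 * table (FVT). All names, sizes, and table contents are illustrative
 * assumptions, not the structure described in the paper.
 */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define FVT_SIZE     7             /* 7 frequent values -> 3-bit codes;   */
#define NOT_FREQUENT 7             /* code 7 reserved for "not frequent"  */

/* Hypothetical table of frequent 32-bit values (e.g., 0, 1, -1, ...). */
static const uint32_t fvt[FVT_SIZE] = { 0, 1, 0xFFFFFFFFu, 2, 4, 8, 0x80000000u };

/* Encode: return a 3-bit code if the value is in the FVT,
 * or NOT_FREQUENT if the full value must be stored normally. */
static uint8_t fv_encode(uint32_t value) {
    for (uint8_t i = 0; i < FVT_SIZE; i++)
        if (fvt[i] == value)
            return i;
    return NOT_FREQUENT;
}

/* Decode: recover the original value from a 3-bit code. */
static bool fv_decode(uint8_t code, uint32_t *value) {
    if (code >= FVT_SIZE)
        return false;              /* not a frequent value */
    *value = fvt[code];
    return true;
}

int main(void) {
    uint32_t v = 1;
    uint8_t code = fv_encode(v);
    uint32_t out;
    if (fv_decode(code, &out))
        printf("value %u stored as 3-bit code %u\n", out, code);
    else
        printf("value %u is not frequent; store it in the data array\n", v);
    return 0;
}
```

In this sketch, the same lookup serves both directions: on a write, the value is compared against the FVT and replaced by its code when it matches; on a read, the code is expanded back through the table, so the full-width bytes are only needed for values that miss in the FVT.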