Despite voluminous prior research on adaptive compression, we encountered significant challenges in fully utilizing both network bandwidth and CPU. We describe the Fine-Grain (FG) Mixing strategy, which compresses and sends as much data as possible and then uses any remaining bandwidth to send uncompressed packets. Experimental measurements show that FG Mixing achieves significant gains in effective throughput, particularly at higher network bandwidths. However, non-trivial interactions between system components and layers (e.g., compression algorithms and middleware settings such as block size and buffer size) have a significant impact on overall system performance. Finally, the trade-offs and performance profiles of FG Mixing are measured and found to be consistent over a wide range of combinations of compression algorithms (GZIP, LZO, BZIP2), workload compression ratios (from 1 to 4), and network bandwidths (from 0 to 400 Mbps).
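The core idea of FG Mixing, compressing as much data as the CPU can handle and filling the remaining bandwidth with uncompressed packets, can be illustrated with a minimal sketch. The names (`fg_mix_send`, `cpu_budget_bytes`, the `send` callback) and the byte-budget model of CPU capacity are illustrative assumptions, not details from the paper:

```python
import zlib

def fg_mix_send(blocks, cpu_budget_bytes, send):
    """Sketch of Fine-Grain (FG) Mixing: compress blocks while a
    per-interval CPU budget (modeled here as bytes we can afford to
    compress) remains; once the budget is exhausted, send raw blocks
    so spare network bandwidth is not left idle."""
    compressed = raw = 0
    for block in blocks:
        if cpu_budget_bytes >= len(block):
            cpu_budget_bytes -= len(block)        # "pay" for compression with CPU
            send(zlib.compress(block), True)      # flag marks a compressed packet
            compressed += 1
        else:
            send(block, False)                    # remaining bandwidth: uncompressed
            raw += 1
    return compressed, raw
```

In a real sender the budget would be replenished each scheduling interval and tracked against actual CPU time; the fixed byte budget here simply makes the compressed/uncompressed mixing decision concrete.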