As a key component of universal source coding, context quantization is critical to compression performance. In most existing methods, however, the quantizer is trained offline and then fixed, because finding a good quantizer is computationally complex and representing it incurs significant overhead. This paper proposes a novel online context quantization approach that achieves high coding efficiency with low quantizer overhead and low computational complexity. The context is first partitioned into groups according to the number of significant context events, and a layer-based context quantization is then applied to these groups. The proposed method is applied to embedded wavelet image coding. Compared with the JPEG2000 coder, improvements of up to 0.6 dB are achieved on the standard 512 × 512 test images, with larger gains observed on images at lower resolutions.
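The grouping step can be pictured with the minimal sketch below. This is an illustration under assumptions made here, not the paper's implementation: the binary significance map, the 8-neighbour window, and all function names are hypothetical, chosen only to show how context events might be partitioned by their count of significant neighbours before a per-group quantizer is applied.

```python
# Illustrative sketch: partition coefficient contexts by the number of
# significant neighbours (assumed 8-connected window, hypothetical names).
import numpy as np

NEIGHBOURS = [(-1, -1), (-1, 0), (-1, 1),
              (0, -1),           (0, 1),
              (1, -1),  (1, 0),  (1, 1)]

def count_significant_neighbours(significance_map, r, c):
    """Count how many neighbours of position (r, c) are already significant."""
    h, w = significance_map.shape
    count = 0
    for dr, dc in NEIGHBOURS:
        rr, cc = r + dr, c + dc
        if 0 <= rr < h and 0 <= cc < w and significance_map[rr, cc]:
            count += 1
    return count

def group_contexts(significance_map):
    """Partition positions into groups 0..8 by significant-neighbour count."""
    groups = {k: [] for k in range(9)}
    h, w = significance_map.shape
    for r in range(h):
        for c in range(w):
            k = count_significant_neighbours(significance_map, r, c)
            groups[k].append((r, c))
    return groups

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sig = rng.random((16, 16)) > 0.7   # toy significance map
    for k, positions in group_contexts(sig).items():
        print(f"{len(positions):4d} positions with {k} significant neighbours")
```

In an actual coder, each group would then be quantized and modeled separately; the layer-based quantization described in the paper operates on these groups rather than on the raw context states.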