Traditional multiparty audio conferencing uses a star-shaped topology in which all clients connect to a central MCU (Multipoint Control Unit). The MCU mixes the signals from the speakers, encodes the mix, and sends the encoded signal back to each client. To prevent speakers from hearing their own voices, the MCU has to produce and encode a different mixed signal for each speaker. As a result, the CPU load on the MCU grows in proportion to the number of speakers in the conference. In this paper, we introduce a new conferencing architecture in which the MCU produces a single encoded signal, the sum of all received signals, and each client is responsible for removing its own signal if necessary. This architecture can substantially reduce the CPU load on the MCU. The major challenge, however, is that the client's original speech is non-linearly distorted by the MCU encoding process, so simply subtracting the original speech from the mixed signal would produce an echo-like distortion. We solve tha...
Junlin Li, Li-wei He, Dinei A. F. Florêncio
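
The abstract notes that naive client-side subtraction fails because the MCU's encoder distorts the mix non-linearly. The following is a minimal, hypothetical sketch of that naive scheme, not the paper's method: `lossy_encode`/`lossy_decode` are stand-ins (coarse quantization) for a real speech codec, and the signal shapes are invented for illustration.

```python
# Sketch of the naive client-side subtraction the abstract warns against.
# Assumptions: 8 kHz PCM frames; quantization as a stand-in for a lossy codec.
import numpy as np

STEP = 0.05  # hypothetical quantization step standing in for codec distortion

def lossy_encode(x, step=STEP):
    # Placeholder for the MCU's speech codec (non-linear, lossy).
    return np.round(x / step).astype(np.int16)

def lossy_decode(q, step=STEP):
    return q.astype(np.float64) * step

rng = np.random.default_rng(0)
n = 160  # one 20 ms frame at 8 kHz
own = 0.5 * np.sin(2 * np.pi * 440 * np.arange(n) / 8000)  # this client's speech
others = 0.2 * rng.standard_normal(n)                      # mix of the other speakers

# MCU side: one sum of all speakers, encoded once and sent to every client.
bitstream = lossy_encode(own + others)

# Client side: decode the common mix and subtract the locally stored
# copy of its own speech.
decoded = lossy_decode(bitstream)
naive = decoded - own

# Because the codec distorted the mix, the subtraction does not recover the
# other speakers cleanly; a residual distortion remains.
residual = naive - others
print("residual RMS:", np.sqrt(np.mean(residual ** 2)))
```

With a real speech codec the residual is strongly signal-dependent and perceived as an echo of the client's own voice, which is the distortion the paper sets out to remove.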