Efficient loss recovery for videoconferencing via streaming codes and machine learning
Packet loss degrades the quality of experience (QoE) of live communication. The standard approach to recovering lost packets in long-distance communication is forward error correction (FEC). Conventional FEC methods for real-time applications are highly inefficient at protecting against bursts of losses, yet such bursts frequently arise in practice. Bursts can be tamed with less redundancy via a new class of theoretical FEC schemes, called "streaming codes," designed to communicate a sequence of frames over a bursty packet loss channel in real time. Existing streaming codes apply when all frames are of the same fixed size. However, many applications, including videoconferencing, involve sending compressed frames whose sizes fluctuate dramatically. This thesis presents a generalized model for streaming codes that incorporates frames of variable sizes, studies the fundamental limits on the optimal rate for the new model, designs new high-rate streaming codes using machine learning and coding theory, and integrates streaming codes into a videoconferencing application to assess their positive impact on QoE.
To start, we examine the fundamental limits on the rate for “offline” communication, wherein the sizes of all future frames are known. We show that the variability in the sizes of frames (a) induces a new trade-off between the rate and the decoding delay under lossless transmission and (b) impacts the optimal rate of transmission. We then design rate-optimal streaming codes for the practically relevant “online” setting (i.e., where the sizes of the future frames are unknown) for two broad parameter regimes. We show that online schemes cannot match the optimal rate of offline schemes for all remaining parameter regimes because the optimal way to spread the data over multiple transmissions to alleviate the variability depends on the future arrival pattern. To address this shortcoming, we combine algebraic coding techniques with a learning-augmented algorithm for spreading frame symbols to design the first approximately rate-optimal streaming codes for a range of important parameter regimes for practical applications.
However, many real-world applications experience what we dub "partial bursts": losses of only some of the packets of each frame, unlike the existing model, which assumes that all or none of the packets of each frame are lost. To address this gap, we introduce a new streaming-codes-based approach to videoconferencing called Tambur. When assessed over emulated networks, Tambur improves several key metrics of QoE compared to conventional methods (e.g., it reduces the median frequency of freezes by 26%). We then extend the theoretical streaming model to accommodate partial bursts and design an online approximately rate-optimal streaming code. The code combines (a) a building-block construction that applies to any choice of how much redundancy to allocate per frame with (b) a learning-augmented algorithm for allocating redundancy per frame.
History
Date
- 2023-05-13
Degree Type
- Dissertation
Department
- Computer Science
Degree Name
- Doctor of Philosophy (PhD)