Achieve Low Latency with M-ABR and Chunked CMAF
Posted by JT Consulting on Jun 26, 2019

In a paper I wrote earlier this year, I addressed a dilemma facing OTT video service providers. Namely: demand for live streaming is growing, yet so is aversion to “live” video compromised by high latency.
The paper was sponsored by video delivery solutions provider Broadpeak and by THEO Technologies, developer of a universal video player. The technologies they recommend can dramatically reduce streaming video delays, from half a minute to under 4 seconds. The components include multicast adaptive bit rate (M-ABR) streaming; the Common Media Application Format (CMAF) in low-latency mode; HTTP/1.1’s Chunked Transfer Encoding (CTE) mechanism; and optimized video players.
Latency and Multicast-ABR
A root cause of the problem is best-effort HTTP delivery. Across unmanaged networks, video playout is frequently interrupted for re-buffering. M-ABR transforms that series of irregular unicast bandwidth peaks into a relatively jitter-free, smoothed and prioritized traffic flow.
Pioneered by Broadpeak, M-ABR requires a transcasting device to convert unicast into multicast, and a multicast-to-unicast agent embedded in a home gateway or set-top box. These conversions are driven by a protocol built on top of, and adapted to simplify, the NACK-Oriented Reliable Multicast (NORM) standard. M-ABR thus leverages two transport-layer technologies: the lightweight, loosely managed User Datagram Protocol (UDP) and the more tightly controlled Transmission Control Protocol (TCP).
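To make the architecture concrete, here is a minimal, illustrative sketch of the multicast-to-unicast idea only; it is not Broadpeak’s agent and it omits the NORM-style sequencing and repair entirely. The multicast group, port and segment name are assumptions. A local agent joins a UDP multicast group, accumulates the bytes pushed by the headend, and serves them to the player as an ordinary unicast HTTP response:

```python
import socket
import struct
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical multicast group/port used by the headend transcaster (assumption).
MCAST_GRP, MCAST_PORT = "239.1.1.1", 5004
segments = {}  # segment name -> bytes, filled by the multicast listener

def multicast_listener():
    """Join the multicast group and collect datagrams into an in-memory segment."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", MCAST_PORT))
    mreq = struct.pack("4sl", socket.inet_aton(MCAST_GRP), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    buf = bytearray()
    while True:
        data, _ = sock.recvfrom(65535)
        buf += data  # real agents use NORM-based ordering and loss repair, omitted here
        segments["live.m4s"] = bytes(buf)

class UnicastHandler(BaseHTTPRequestHandler):
    """Serve the reassembled segment to the player as a plain unicast HTTP response."""
    def do_GET(self):
        body = segments.get(self.path.lstrip("/"), b"")
        self.send_response(200 if body else 404)
        self.send_header("Content-Type", "video/iso.segment")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    threading.Thread(target=multicast_listener, daemon=True).start()
    HTTPServer(("127.0.0.1", 8080), UnicastHandler).serve_forever()
```

The player never sees the multicast leg: it keeps requesting ABR segments over HTTP as usual, which is what lets the scheme work with unmodified streaming clients.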
The result is a substantial performance boost. With 2-second segments and 1 second of buffering, M-ABR can cut overall delay by roughly 75 percent, to about 7 seconds.
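As a quick back-of-the-envelope check on those figures (only the 30-second baseline and the 7-second target come from the text; the snippet is just the arithmetic):

```python
baseline_latency_s = 30  # typical glass-to-glass delay for unmanaged HTTP ABR (figure from the text)
mabr_latency_s = 7       # delay quoted for M-ABR with 2 s segments and 1 s of buffering

reduction = (baseline_latency_s - mabr_latency_s) / baseline_latency_s
print(f"Latency reduction: {reduction:.0%}")  # -> about 77%, i.e. roughly three quarters
```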
CMAF, CTE and Optimized Playback
CMAF achieved a significant degree of unification between two separate media formats: Dynamic Adaptive Streaming over HTTP (DASH) and HTTP Live Streaming (HLS). (DASH has had a low-latency mode for more than a year; Apple only recently introduced the much-anticipated equivalent for HLS.) In its low-latency mode, CMAF allows you to split fragments into small chunks, each consisting of a header and media samples. When combined with CTE, a streaming data transfer mechanism found in HTTP/1.1, CMAF positions the stream for lower latency.
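To illustrate how CTE lets a player consume a CMAF segment while it is still being produced, here is a hedged sketch using Python’s standard http.client; the host, path and the feed_to_decoder hook are placeholders for illustration, not a real endpoint or player API:

```python
import http.client

def feed_to_decoder(data: bytes) -> None:
    """Placeholder for handing bytes to the player's source buffer."""
    print(f"received {len(data)} bytes")

# Hypothetical low-latency origin and segment path (assumptions).
conn = http.client.HTTPConnection("lowlatency.example.com")
conn.request("GET", "/live/stream1/segment_1001.m4s")
resp = conn.getresponse()

# With Transfer-Encoding: chunked, the origin can start sending CMAF chunks
# (moof+mdat pairs) as soon as they are encoded, before the full segment exists.
while True:
    piece = resp.read(16 * 1024)  # read the body incrementally, not all at once
    if not piece:
        break
    # A low-latency player appends each piece to its buffer and can begin
    # decoding without waiting for the segment to complete.
    feed_to_decoder(piece)
```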
The final requirement is a player that can configure latency correctly from the start, estimate bandwidth accurately and maintain the right minimum buffer size. Players especially need awareness of network jitter: a buffer that is too small risks underflow, while one that is too large reintroduces unwanted latency. Players that cannot adapt their behavior accordingly will significantly degrade the user’s quality of experience (QoE).
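As a rough illustration of that trade-off, a player might derive its target buffer from the jitter it measures on recent chunk arrivals. The heuristic, the constants and the function below are assumptions for illustration, not any real player’s algorithm:

```python
import statistics

def target_buffer_seconds(arrival_intervals_s, chunk_duration_s=0.5,
                          safety_factor=3.0, floor_s=1.0, ceiling_s=4.0):
    """Pick a target buffer: enough to absorb observed jitter, but no more.

    arrival_intervals_s: gaps between recent chunk arrivals, in seconds.
    chunk_duration_s, safety_factor, floor_s and ceiling_s are illustrative defaults.
    """
    jitter = statistics.pstdev(arrival_intervals_s)  # spread around the ideal cadence
    target = max(floor_s, chunk_duration_s + safety_factor * jitter)
    return min(target, ceiling_s)  # too large a buffer just re-adds latency

# Smooth arrivals -> small buffer; erratic arrivals -> a larger buffer.
print(target_buffer_seconds([0.50, 0.51, 0.49, 0.50]))  # ~1.0 s (the floor)
print(target_buffer_seconds([0.30, 0.90, 0.45, 0.75]))  # noticeably larger
```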
The upshot is that chunked CMAF and CTE, combined with the right playback technology, can eliminate another 4.5 seconds, bringing total latency down from 30 seconds to roughly 3. For more details, request the full paper on the Broadpeak site.