Measuring WebRTC Call Quality - Part 1
September 9, 2024 · 10 min read
High-quality video conferencing relies on participants being able to see and hear each other clearly, without any disruptions or distortions. The final call quality depends on various factors, including device capabilities and the quality of the network connection between the sender and receiver.
To evaluate the quality of a WebRTC call, it’s crucial to identify key metrics that offer insights into the call’s performance. These metrics reveal what’s happening behind the scenes, allowing us to diagnose and resolve issues effectively. Common problems like choppy audio, video pixelation, and sync issues can be traced by analyzing these metrics. By determining whether the issue originates from the sender’s or receiver’s side, we can take targeted steps to improve the call quality.
In this blog, we’ll explore critical metrics provided by the WebRTC getStats() API that help measure and debug call quality issues. We’ll first focus on the sender side, explaining how to interpret these metrics and offering practical examples to diagnose and improve call quality.
To start with, let's examine the metrics that impact quality on the publisher's side. These metrics are found in the `RTCOutboundRtpStreamStats` section of WebRTC's `RTCStatsReport`, which is retrieved using the `getStats()` API. For the most accurate data, it's recommended to call `getStats()` every second.
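As a rough sketch of what this polling could look like, the helper below filters a stats report down to its outbound-rtp entries. The `pc` variable in the commented usage stands in for an existing `RTCPeerConnection` and is an assumption, not something defined in this post:

```javascript
// Keep only outbound-rtp entries from an RTCStatsReport-like object.
// RTCStatsReport is map-like, so forEach passes (value, key, map).
function extractOutboundRtp(report) {
  const out = [];
  report.forEach((stats) => {
    if (stats.type === "outbound-rtp") out.push(stats);
  });
  return out;
}

// Browser-side usage (hypothetical `pc` RTCPeerConnection, not run here):
// setInterval(async () => {
//   const outbound = extractOutboundRtp(await pc.getStats());
//   outbound.forEach((s) => console.log(s.kind, s.bytesSent, s.framesPerSecond));
// }, 1000);
```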
### frameWidth and frameHeight

`frameWidth` and `frameHeight` represent the width and height of the last encoded frame, respectively. The video resolution, calculated by multiplying width and height, determines the clarity and detail of the video.

- `frameWidth`: The width of the video frames in pixels.
- `frameHeight`: The height of the video frames in pixels.

Higher resolution (larger frame width and height) generally means better video quality but also requires more bandwidth and processing power.
**Purpose of measuring frameWidth and frameHeight**

Monitoring `frameWidth` and `frameHeight` can help determine whether a reported video quality issue is related to low resolution.

### framesPerSecond

`framesPerSecond` represents the number of video frames being processed per second.
**Purpose of measuring framesPerSecond**

A sustained drop in `framesPerSecond` on the sender side typically shows up as choppy or frozen video for viewers, and often points to CPU or bandwidth constraints.
### bytesSent

The `bytesSent` metric represents the total number of bytes transmitted over a specific RTP stream, including both the payload (audio or video data) and the RTP packet headers.
**Purpose of measuring bytesSent**

- Tracking the growth of `bytesSent` over time shows how much data is actually being transmitted.
- Unexpected changes in `bytesSent` can help in diagnosing issues related to packet loss, network congestion, or other transmission problems.

### Outgoing bitrate

The upload bitrate or outgoing bitrate represents the rate at which data is transmitted from the local peer to the remote peer, typically measured in bits per second (bps). A stable bitrate is key to ensuring smooth audio/video playback and a consistent user experience.
**Purpose of measuring outgoing bitrate**
By tracking outgoing bitrate, developers can optimize network and computational resources, adjusting video resolution, frame rate, or codec settings based on the available bandwidth and current network conditions.
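Since `getStats()` only exposes the cumulative `bytesSent`, the outgoing bitrate has to be derived from two consecutive samples. A minimal sketch, assuming samples taken roughly one second apart, with `timestamp` in milliseconds as reported by `getStats()`:

```javascript
// Derive outgoing bitrate (bits per second) from two consecutive
// outbound-rtp samples taken from getStats().
function outgoingBitrate(prev, curr) {
  const deltaBytes = curr.bytesSent - prev.bytesSent;
  const deltaSec = (curr.timestamp - prev.timestamp) / 1000; // ms → s
  if (deltaSec <= 0) return 0;
  return (deltaBytes * 8) / deltaSec;
}
```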
### packetsLost

The `packetsLost` metric represents the cumulative count of RTP (Real-time Transport Protocol) packets that were expected to arrive at the receiver but did not. This count starts from the beginning of the RTP stream and includes all packets lost during the session. For the sender side, it is reported in the remote-inbound-rtp stats, which are derived from RTCP receiver reports.
**Causes of Packet Loss**

**Effects of Packet Loss**
### roundTripTime

Round Trip Time (RTT), reported as `roundTripTime` in the remote-inbound-rtp stats, is the total time for a data packet to travel from the sender to the receiver and back. It is a key metric for measuring latency in a WebRTC connection.
Higher RTT increases latency, causing noticeable delays in conversations, while lower RTT reduces latency, enabling more natural, real-time interactions.
Typical RTT values and associated experience:
| Experience | RTT |
| --- | --- |
| Excellent | < 150 ms |
| Acceptable | 150 ms - 300 ms |
| Problematic | > 300 ms |
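These thresholds translate directly into a tiny classifier. Note that `getStats()` reports `roundTripTime` in seconds, so a caller is assumed to multiply by 1000 first:

```javascript
// Classify a round-trip time sample (in milliseconds) per the table above.
function classifyRtt(rttMs) {
  if (rttMs < 150) return "excellent";
  if (rttMs <= 300) return "acceptable";
  return "problematic";
}
```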
### jitter

Jitter refers to the variation in delay between arriving RTP packets. In an ideal network, packets would arrive at evenly spaced intervals. In real-world networks, however, various factors cause packets to experience different delays, leading to jitter.
**Causes of Jitter**

**Effects of Jitter**
For most real-time communication applications, jitter values can be interpreted as follows:
| Jitter | Range | Inference |
| --- | --- | --- |
| Optimal | < 30 ms | Values below 30 ms are generally considered excellent and indicate a stable network with minimal variation in packet delay. |
| Acceptable | 30 ms - 50 ms | While some variation in packet arrival times is present, it is often manageable with jitter buffers and other compensatory mechanisms. |
| Borderline | 50 ms - 100 ms | Values in this range might start causing noticeable issues, such as minor audio dropouts or video frame skips. Adjustments may be needed to maintain quality. |
| Problematic | > 100 ms | Generally a sign of serious network issues; values above 100 ms can lead to significant degradation in communication quality, including choppy audio, noticeable delays, and poor video synchronization. |
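One detail that is easy to miss when applying this table: `getStats()` reports `jitter` in seconds, not milliseconds. A sketch of pulling the sender-side value out of a stats report and converting it:

```javascript
// Pull sender-side jitter (in milliseconds) from a getStats() report.
// remote-inbound-rtp entries carry jitter in seconds, as estimated by the
// remote receiver and fed back via RTCP receiver reports.
function jitterMs(report) {
  let ms = null;
  report.forEach((s) => {
    if (s.type === "remote-inbound-rtp" && s.jitter !== undefined) {
      ms = s.jitter * 1000; // seconds → milliseconds
    }
  });
  return ms;
}
```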
### totalPacketSendDelay

The `totalPacketSendDelay` metric represents the total time (in seconds) that packets have spent in the send buffer, waiting to be sent. It essentially measures the delay introduced on the sender side before packets are transmitted over the network. It is present in outbound-rtp stats (kind = video/audio).

**Purpose of measuring totalPacketSendDelay**

- High `totalPacketSendDelay` values can indicate network congestion or bottlenecks at the sender's end, where packets are queued up waiting to be sent.
- Monitoring `totalPacketSendDelay` helps diagnose and troubleshoot latency issues: if packets are delayed before even being sent, that delay contributes to the overall end-to-end latency experienced by the user.

In an ideal scenario, `totalPacketSendDelay` should be as low as possible.
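Because `totalPacketSendDelay` is cumulative, the useful signal is its growth per packet over a sampling interval. A sketch, assuming two consecutive outbound-rtp samples that also carry the cumulative `packetsSent` counter:

```javascript
// Average per-packet send delay (ms) between two outbound-rtp samples.
function avgSendDelayMs(prev, curr) {
  const deltaDelaySec = curr.totalPacketSendDelay - prev.totalPacketSendDelay;
  const deltaPackets = curr.packetsSent - prev.packetsSent;
  if (deltaPackets <= 0) return 0;
  return (deltaDelaySec / deltaPackets) * 1000; // seconds → ms per packet
}
```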
### availableOutgoingBitrate

`availableOutgoingBitrate` represents the estimated maximum bitrate that can be sent on the current network path and provides insight into the network's capacity to handle outgoing media streams. It is calculated by the underlying congestion control algorithm (GCC) and is found in the candidate-pair stats (state = succeeded). Please note that this does not indicate the exact upload bandwidth of the network connection, but an approximation based on the amount of bitrate being uploaded.
**Purpose of measuring availableOutgoingBitrate**

- `availableOutgoingBitrate` helps in understanding the current network capacity for outgoing media streams. It indicates how much data can be sent per second without causing congestion or packet loss.
- Feeding `availableOutgoingBitrate` into adaptive bitrate algorithms allows them to dynamically adjust the sending bitrate to match the available network capacity, ensuring smooth and high-quality media transmission.
- A drop in `availableOutgoingBitrate` can signal network issues such as bandwidth fluctuations, congestion, or other problems. This information is valuable for diagnosing and troubleshooting network-related issues in real-time communication applications.

If this value is lower than the configured upload bitrate, publish-side degradations will be observed, i.e., a reduction in the bitrate and/or FPS being published.
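To illustrate where this stat lives, the sketch below scans a stats report for the succeeded candidate pair and compares the estimate against a configured target bitrate. The `targetBps` parameter is a hypothetical configured value, not part of the stats report:

```javascript
// Find availableOutgoingBitrate on the succeeded candidate pair and report
// whether the publisher is likely to be constrained by the network.
function checkUploadHeadroom(report, targetBps) {
  let available = null;
  report.forEach((s) => {
    if (s.type === "candidate-pair" && s.state === "succeeded" &&
        s.availableOutgoingBitrate !== undefined) {
      available = s.availableOutgoingBitrate;
    }
  });
  if (available === null) return { available: null, constrained: false };
  return { available, constrained: available < targetBps };
}
```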
### qualityLimitationDurations

`qualityLimitationDurations` is a metric that provides detailed information about the durations during which the quality of a media stream was limited for various reasons. This metric is part of the `RTCOutboundRtpStreamStats` dictionary (outbound-rtp, kind = video) and is particularly useful for diagnosing and understanding the factors affecting the quality of a media stream.

**Components of qualityLimitationDurations**

The `qualityLimitationDurations` metric includes multiple sub-metrics that indicate the time spent under different types of quality limitation:

- `bandwidth`: Duration during which the quality was limited due to insufficient bandwidth.
- `cpu`: Duration during which the quality was limited due to high CPU usage.
- `other`: Duration during which the quality was limited due to other reasons that don't fall under bandwidth or CPU constraints.

**Purpose of measuring qualityLimitationDurations**

Comparing these durations reveals whether degraded video is being driven by the network (`bandwidth`) or by the device itself (`cpu`), which points remediation efforts in the right direction.
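A small helper can turn this map into a single dominant-reason label for a dashboard — a sketch assuming the durations map has already been read from the outbound-rtp video stats:

```javascript
// Return the quality-limitation reason with the longest accumulated duration
// (in seconds), or "none" when no limitation time has been recorded.
function dominantLimitation(durations) {
  let reason = "none";
  let max = 0;
  for (const key of ["bandwidth", "cpu", "other"]) {
    const d = durations[key] ?? 0;
    if (d > max) { max = d; reason = key; }
  }
  return reason;
}
```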
To effectively troubleshoot publisher-side issues in a WebRTC call, it’s crucial to monitor the key metrics above. One approach is to capture metrics from the getStats() API at regular intervals (e.g., every second), then aggregate or average the data over a longer period (e.g., 30 seconds) before uploading it to the server for analysis. Now that we have some idea about the publisher-side metrics, let’s see how we can use them to debug actual WebRTC calls.
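That capture-then-aggregate loop might be sketched as follows. The 1-second sampling and 30-second upload windows mirror the example intervals above, and `pc` and `uploadToServer` are hypothetical placeholders:

```javascript
// Average a numeric metric over a window of per-second samples before upload.
function averageMetric(samples, key) {
  const values = samples.map((s) => s[key]).filter((v) => typeof v === "number");
  if (values.length === 0) return null;
  return values.reduce((a, b) => a + b, 0) / values.length;
}

// Hypothetical browser-side collection loop (not run here):
// const window = [];
// setInterval(async () => {
//   const report = await pc.getStats();
//   report.forEach((s) => { if (s.type === "outbound-rtp") window.push(s); });
//   if (window.length >= 30) {
//     uploadToServer({ fps: averageMetric(window, "framesPerSecond") });
//     window.length = 0;
//   }
// }, 1000);
```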
Executing a seamless video conference is not as straightforward as it might seem. Some issues are frequent, and there is a set protocol for debugging those, but we also run into interesting problems and edge cases often. At 100ms, monitoring and addressing these metrics has become a routine part of our operations. To facilitate this process, most of the metrics mentioned earlier are directly available to customers through the 100ms Dashboard. We’ve built a powerful call quality dashboard where these statistics are plotted as time series graphs, making it easier for users to diagnose and resolve issues quickly. This data is presented at both the session and participant levels, allowing for granular insights into call performance.
We’ll take a small case study to understand how we usually debug real life call quality and experiential issues. The following snapshots are from the 100ms Peer Insights Dashboard, showing data for a participant who encountered degraded publishing quality.
There was a problem where the video seen by other participants from participant A’s end wasn’t of good quality, making it hard for them to view participant A clearly.
We begin by examining the publishing bitrate graph, which indicates a significant drop in the bitrate being published around 06:18 HRS. Simultaneously, the FPS (frames per second) values also show a noticeable decline during this period.
This drop suggests that the video quality being transmitted was compromised at that time. As a result, the receivers / viewers likely experienced pixelated video due to the low bitrate and encountered choppy playback because of the reduced FPS.
This aligns with the observations shared by other participants. We’ll have to ask a couple of what and why questions to understand the root cause.
To diagnose the issue further, we examine the network metrics. Key observations are as follows:
As a result, the video publisher had to reduce both bitrate and FPS to adapt to the sudden network degradation, leading to a compromised publishing as well as viewer experience. This tells us that the culprit in question is the network, but this begs another question:
The network degradation appears to have been caused by congestion in the upload network. This is indicated by the jitter metric, which shows significant deviation from its normal values around the same timestamp (06:18 HRS). The increased jitter suggests that packets were arriving at the receiver end in an unevenly spaced manner, leading to packet loss.
These network issues, evidenced by the sudden spikes in jitter, RTT, and packet loss, likely caused the upload quality to deteriorate, resulting in the observed drops in bitrate and FPS. This combination of factors indicates that the peer was experiencing substantial network instability, which directly impacted the quality of the video being published.
While sudden and significant network drop-offs are often beyond our control, we can optimize the overall experience on consistently poor networks by adjusting parameters such as available bitrate, resolution, and FPS. These adjustments help maintain consistent, if not great, video quality under challenging network conditions.
While we’ve focused on sender-side metrics in this blog, it’s important to remember that the receiving side plays an equally crucial role in determining the final call quality.
In our next blog, we will dive into the key metrics on the receiving / subscriber side. We will explore how these stats impact call quality and work towards developing a formula to qualitatively measure video quality at the receiver end. Understanding both sides will enable us to provide a more comprehensive approach to optimizing video call experiences.
To explore and experience these metrics firsthand, log in to the 100ms dashboard and conduct a demo call to generate real-time data. You can learn more about Peer Insights by visiting this link.