Enhanced live classroom experience at scale with the WebRTC-HLS stack

July 31, 2023 · 6 min read
Live classrooms have become the norm for learning ever since the pandemic. A teacher presents the class while students actively listen and ask questions. That is the usual story. However, it is not the case in countries like India, where students from tier 2 and tier 3 cities face network issues. In this blog, we will talk about how 100ms solves this problem for large EdTech classrooms by using HLS and WebRTC technologies simultaneously.
When it comes to live classrooms, WebRTC is commonly considered the optimal solution. WebRTC is a real-time communication protocol that enables users to interact with audio-video capabilities while maintaining low latency. This is the same technology that powers one-on-one video calls and video conferences in apps like Google Meet and Skype.
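To make this concrete, here is a minimal browser-side sketch of how a WebRTC call leg is set up with the standard `RTCPeerConnection` API. The signaling channel (how the offer and ICE candidates reach the other peer) is application-specific, so it is stubbed out here.

```typescript
// Minimal WebRTC setup sketch: capture camera/mic, create a peer
// connection, and produce an SDP offer. The `signaling` object is a
// stand-in for whatever transport (e.g. a WebSocket) the app uses.
async function startCall(signaling: { send: (msg: string) => void }) {
  const pc = new RTCPeerConnection({
    iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
  });

  // Capture the local camera and microphone and attach the tracks.
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: true,
    video: true,
  });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // Send ICE candidates to the remote peer as they are discovered.
  pc.onicecandidate = (event) => {
    if (event.candidate) {
      signaling.send(JSON.stringify({ candidate: event.candidate }));
    }
  };

  // Create the SDP offer; the remote peer answers via signaling.
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signaling.send(JSON.stringify({ sdp: pc.localDescription }));

  return pc;
}
```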
While WebRTC might seem like a no-brainer for EdTech live classrooms, it is not without its flaws. There are mainly two cases where audio-video issues start to become visible:
Bad network: When the users on the call are experiencing a poor network connection, which is super common in countries like India.
High Scale: When there are too many users on the call.
In the context of live classrooms, scalability aside, users on a bad network repeatedly face issues like blurred video content, skipped or black frames, and choppy audio:
Blurred content on video - The text content and figures being shown during classes look blurred and the overall video quality is subpar.
Skipped or black frames - Video suddenly freezes, fast-forwards, or goes black during the class.
Choppy Audio - Unclear or breaking audio.
Apps like Zoom use a modified version of WebRTC. However, they still suffer from the same problems that come with WebRTC and, in some cases, a few extra ones.
Before we move on, it’s essential to understand the two key factors to consider when selecting a technology for live video use cases: latency and audio/video quality.
Latency: Latency refers to the time it takes for media to travel from the source to the receiver. Low latency is essential for real-time communication, enabling seamless interaction between participants.
Audio/video quality: Video quality is determined by factors such as resolution, frame rate, and bitrate. Higher quality visuals contribute to a more engaging and immersive learning experience.
An ideal solution would offer both low latency and high video quality. However, in practice, it often involves a trade-off. Either prioritize low latency, sacrificing video quality to ensure real-time communication, or deliver high video quality at the cost of increased latency.
WebRTC, designed for two-way communication, aims to minimize latency. However, this focus on low latency can result in compromises in video quality. Some examples of how WebRTC compromises on video quality include:
Packet loss and frame drops: Network limitations, such as limited bandwidth or high latency, can lead to packet loss and dropped frames. These issues can cause a degraded video experience, impacting the clarity and smoothness of the visuals.
Adaptive bitrate streaming challenges: WebRTC employs adaptive bitrate streaming to adjust video quality dynamically based on network conditions. However, the adaptive streaming algorithm may not respond quickly enough to network changes, leading to temporary compromises in video quality.
Reduced resolution and compression artifacts: In low-bandwidth situations, WebRTC may dynamically adjust the bitrate and compression settings of the video stream. This can reduce video resolution and introduce compression artifacts, compromising the level of detail and the overall quality of the video (see the sketch after this list).
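For a sense of the knob involved, here is a small sketch that caps a video sender’s bitrate with the standard `RTCRtpSender.setParameters` API; adaptive algorithms turn this same knob (among others) when bandwidth drops, which is where the resolution loss and artifacts come from.

```typescript
// Sketch: cap an outgoing video track's bitrate on an existing
// RTCPeerConnection, e.g. to 300 kbps on a weak network. The encoder
// stays under the cap by lowering resolution and/or adding
// compression artifacts.
async function capVideoBitrate(pc: RTCPeerConnection, maxBitrateBps: number) {
  const sender = pc.getSenders().find((s) => s.track?.kind === "video");
  if (!sender) return;

  const params = sender.getParameters();
  if (!params.encodings || params.encodings.length === 0) return;

  params.encodings[0].maxBitrate = maxBitrateBps;
  await sender.setParameters(params);
}

// Usage: capVideoBitrate(pc, 300_000);
```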
WebRTC as a protocol was not designed for scalability, but rather for peer-to-peer communication. It was later adapted to support multiple peers in a call. Therefore, scaling a WebRTC call is not an easy task and, after a certain point, it’s just a question of making further compromises.
In live streaming use cases where users are only on the receiving end, the most popular technology is HLS. It is the same technology that powers Super Bowl and IPL live streams at scale. Unlike WebRTC, it comes with a latency of ~8 seconds but offers the best video quality possible. The latency, however, can be further reduced through optimizations or by moving to LL-HLS at a later stage. Because HLS buffers several seconds of video before playback, it tolerates network fluctuations far better than WebRTC and can sustain consistently high video quality.
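As an illustration, playing an HLS stream in the browser takes only a few lines with the open-source hls.js library; the stream URL below is a placeholder.

```typescript
import Hls from "hls.js";

// Sketch: attach an HLS stream to a <video> element. Safari plays
// HLS natively; other browsers need hls.js (MSE-based).
function playHlsStream(video: HTMLVideoElement, url: string) {
  if (Hls.isSupported()) {
    const hls = new Hls({ lowLatencyMode: true }); // relevant once on LL-HLS
    hls.loadSource(url);
    hls.attachMedia(video);
  } else if (video.canPlayType("application/vnd.apple.mpegurl")) {
    video.src = url;
  }
  video.play();
}

// Usage: playHlsStream(videoEl, "https://example.com/stream.m3u8");
```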
In the next section, we will explore how 100ms utilizes HLS and WebRTC technologies simultaneously to address the network issues students face in large EdTech classrooms, bringing the best of both worlds.
Let’s consider a detailed example of a live classroom scenario with a tutor and a group of students. By default, the students are muted and can only interact through chat messages. If necessary, they can raise their hand, and once accepted by the tutor, they can come “on stage” and engage with audio and video enabled. When they have finished interacting, the tutor can send them back “off stage” like the rest of the students. There are three types of users in the call (modeled in a short sketch after the list below):
Tutor - Has admin access over the entire call, with audio and video capabilities.
Students - Can listen to the tutor and interact through chat messages, raise their hand, and so on.
On-stage participants - Students who have audio and video capabilities to interact with the tutor in near real-time.
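As a rough model (the names here are illustrative, not an actual SDK’s types), the three roles and what they are allowed to do could look like this:

```typescript
// Sketch: the three classroom roles and their capabilities.
type Role = "tutor" | "student" | "on-stage";

interface Capabilities {
  publishAudioVideo: boolean;   // can send audio/video over WebRTC
  admin: boolean;               // can approve hand-raises, move peers off stage
  transport: "webrtc" | "hls";  // how this role receives the class
}

const CAPABILITIES: Record<Role, Capabilities> = {
  tutor:      { publishAudioVideo: true,  admin: true,  transport: "webrtc" },
  "on-stage": { publishAudioVideo: true,  admin: false, transport: "webrtc" },
  student:    { publishAudioVideo: false, admin: false, transport: "hls" },
};
```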
The tutor joins over a WebRTC connection, which is then used to live stream on HLS. By default, the students join as HLS viewers, resembling the experience of watching a class via a YouTube Live stream in low-latency mode. The only difference in this setup is that students watching the live stream can become part of the stream and interact with the tutor anytime they want.
Any number of students can raise their hand and transition to become on-stage participants, moving from HLS to the WebRTC connection for direct interaction with the tutor. During this transition, the participant skips over roughly 8 seconds of the stream, the HLS delay, jumping straight to real time. On-stage participants become part of the HLS stream alongside the tutor, making them visible to the remaining students. Once their interaction concludes, they can go back to being HLS viewers. Because HLS runs ~8 seconds behind, they would otherwise re-watch the window covering their own time on stage; a timer can be displayed during that period instead.
Here’s the on-stage participant flow: raise hand → tutor approves → switch from HLS playback to the WebRTC connection (“on stage”) → interact, visible in the HLS stream → switch back to HLS playback (“off stage”).
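Here is a hypothetical sketch of that client-side transition; `goOnStage`, `goOffStage`, and `showOffStageTimer` are illustrative names, not the actual 100ms API.

```typescript
// Illustrative placeholder for a UI countdown shown while the HLS
// playhead catches up to where the student left the WebRTC call.
declare function showOffStageTimer(ms: number): void;

const HLS_DELAY_MS = 8_000; // approximate HLS latency from this post

// Raise-hand approved: stop HLS playback (skipping ~8s ahead to real
// time) and join the WebRTC call with audio/video enabled.
async function goOnStage(
  hlsPlayer: { stop: () => void },
  joinWebRtc: () => Promise<RTCPeerConnection>,
): Promise<RTCPeerConnection> {
  hlsPlayer.stop();
  return joinWebRtc();
}

// Interaction done: leave the WebRTC call and rejoin as an HLS viewer.
// The HLS edge runs ~8s behind live, so show a timer instead of letting
// the student re-watch their own on-stage appearance.
function goOffStage(pc: RTCPeerConnection, playHls: () => void): void {
  pc.getSenders().forEach((sender) => sender.track?.stop());
  pc.close();
  showOffStageTimer(HLS_DELAY_MS);
  setTimeout(playHls, HLS_DELAY_MS);
}
```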
Are you interested in building a live classroom using the WebRTC+HLS setup, but don't want to worry about protocols and optimizations? Check out 100ms SDKs, with built-in support for both WebRTC and HLS. The best part is, switching between WebRTC and HLS is as simple as making a function call.
Feel free to book a call with us to understand more.