Building a Twitter Spaces Clone for Android with 100ms SDK
March 28, 2022 · 24 min read
Introduced by Twitter, Twitter Spaces lets users host live audio conversations on the platform. The intent of Spaces is to let people have authentic and unfiltered discussions on any topic, with audiences of any size.
This article will demonstrate how to build a Twitter Spaces clone with the 100ms Android SDK.
100ms provides web and mobile (native iOS and Android) SDKs designed to add live interactive video to your applications. It enables developers to implement resilient video conferencing into their software with minimal coding effort.
Note: The tutorial uses the MVVM pattern. MVVM isn't a prerequisite, but I will walk you through it anyway.
In this tutorial, we will focus on adding the following features to our clone:
To accomplish this, we will have two screens: one displaying a list of spaces and the other displaying the room where peers will interact.
Before starting, let’s get acquainted with some common terms that will be frequently used throughout this piece:
Room: The room is the basic object that 100ms SDKs return on a successful connection. You can create a room using either the dashboard or the API. In this tutorial, we will create a room from the 100ms dashboard. A room contains references to peers, tracks, and everything you might need to render a live audio or video app.
Role: A role is a collection of permissions that allows you to perform a specific set of operations while being part of a room. An audio room can have roles such as speaker, moderator, or listener, while a video conference can have roles such as host and guest.
Peer: A peer is an object returned by 100ms SDKs, containing all information about a user: name, role, audio/video tracks, etc.
Track: A track represents either the audio or video published by a peer.
Create an empty compose project and give it a name of your choice.
Open your app-level build.gradle file and add the following dependencies:
// 100ms SDK
implementation 'com.github.100mslive.android-sdk:lib:2.3.1'
// Coroutines
implementation 'org.jetbrains.kotlinx:kotlinx-coroutines-core:1.6.0'
implementation 'org.jetbrains.kotlinx:kotlinx-coroutines-android:1.6.0'
// Coroutine Lifecycle Scopes
implementation "androidx.lifecycle:lifecycle-viewmodel-ktx:2.4.1"
//Dagger - Hilt
implementation "com.google.dagger:hilt-android:2.38.1"
kapt "com.google.dagger:hilt-android-compiler:2.37"
implementation "androidx.hilt:hilt-lifecycle-viewmodel:1.0.0-alpha03"
kapt "androidx.hilt:hilt-compiler:1.0.0"
implementation 'androidx.hilt:hilt-navigation-compose:1.0.0'
// Retrofit
implementation 'com.squareup.retrofit2:retrofit:2.9.0'
implementation 'com.squareup.retrofit2:converter-gson:2.9.0'
implementation "com.squareup.okhttp3:okhttp:5.0.0-alpha.2"
implementation "com.squareup.okhttp3:logging-interceptor:5.0.0-alpha.2"
// Timber
implementation 'com.jakewharton.timber:timber:5.0.1'
// Navigation
implementation 'io.github.raamcosta.compose-destinations:core:1.3.1-beta'
ksp 'io.github.raamcosta.compose-destinations:ksp:1.3.1-beta'
// Permissions
implementation "com.google.accompanist:accompanist-permissions:0.21.1-beta"
In settings.gradle, add the JitPack repository:
dependencyResolutionManagement {
repositories {
maven { url 'https://jitpack.io' }
}
}
You can check out the final project to see all the dependencies that must be added to the project: TwitterSpacesClone.
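Since this is a live-audio app, the microphone permission must be granted before a peer can speak. The Accompanist permissions dependency added above can handle the runtime request. Here is a minimal sketch; the `RequireMicPermission` wrapper is my own naming, not part of the final project:

```kotlin
import android.Manifest
import androidx.compose.material.Button
import androidx.compose.material.Text
import androidx.compose.runtime.Composable
import com.google.accompanist.permissions.ExperimentalPermissionsApi
import com.google.accompanist.permissions.rememberPermissionState

// Hypothetical wrapper: shows its content only once the user has granted
// RECORD_AUDIO, which is required to publish audio in a room.
@OptIn(ExperimentalPermissionsApi::class)
@Composable
fun RequireMicPermission(content: @Composable () -> Unit) {
    val micPermission = rememberPermissionState(Manifest.permission.RECORD_AUDIO)
    if (micPermission.hasPermission) {
        content()
    } else {
        Button(onClick = { micPermission.launchPermissionRequest() }) {
            Text("Grant microphone access")
        }
    }
}
```

Remember to also declare `RECORD_AUDIO` and `INTERNET` in your AndroidManifest.xml.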
To use the 100ms SDK, you need to have the necessary credentials.
Visit the 100ms Dashboard and create an account, then complete the onboarding steps to obtain your credentials.
In this step, we will create the different screens that our app will use.
First, we need to define the Home Screen, which will have Space cards. On clicking one of the Space cards, a field should appear for the name of the user who wishes to enter that particular Space.
This is the code for the SpaceItem composable. Composable functions are the building blocks for creating user interfaces in Jetpack Compose. A composable can contain the code for a single UI element or for multiple elements, much like an XML layout does.
To obtain the complete code for the HomeScreen, please check out the GitHub repository.
@OptIn(ExperimentalMaterialApi::class)
@Composable
fun SpaceItem(
modifier: Modifier = Modifier,
) {
Card(
modifier = Modifier
.fillMaxWidth()
.height(250.dp)
.padding(10.dp),
elevation = 5.dp,
shape = RoundedCornerShape(8.dp)
) {
Column(modifier = modifier.padding(10.dp)) {
Row(
modifier = Modifier.fillMaxWidth(),
horizontalArrangement = Arrangement.SpaceBetween
) {
Row(
verticalAlignment = Alignment.CenterVertically
) {
Image(
painter = painterResource(id = R.drawable.audio_wave),
modifier = Modifier.size(24.dp),
contentDescription = null
)
Spacer(modifier = Modifier.width(5.dp))
Text(
text = "LIVE",
color = Color.White,
fontSize = 14.sp,
fontWeight = FontWeight.SemiBold
)
}
Icon(
imageVector = Icons.Default.MoreVert,
tint = Color.White,
contentDescription = null
)
}
Spacer(modifier = Modifier.height(10.dp))
Text(
text = "Building a Twitter Spaces Clone with 100ms SDK and Jetpack Compose",
fontSize = 24.sp,
fontWeight = FontWeight.SemiBold,
color = Color.White
)
Spacer(modifier = Modifier.height(10.dp))
Row(
verticalAlignment = Alignment.CenterVertically
) {
Image(
painter = painterResource(id = R.drawable.avatar),
contentScale = ContentScale.Crop,
modifier = Modifier
.size(24.dp)
.clip(CircleShape),
contentDescription = null
)
Spacer(modifier = Modifier.width(8.dp))
Text(
text = "1 Listening",
fontSize = 13.sp,
fontWeight = FontWeight.SemiBold,
color = Color.White
)
}
Spacer(modifier = Modifier.height(16.dp))
Row(
verticalAlignment = Alignment.CenterVertically
) {
Image(
painter = painterResource(id = R.drawable.avatar),
contentScale = ContentScale.Crop,
modifier = Modifier
.size(14.dp)
.clip(CircleShape),
contentDescription = null
)
Spacer(modifier = Modifier.width(5.dp))
Text(
text = "Joel Kanyi",
fontSize = 13.sp,
fontWeight = FontWeight.SemiBold,
color = Color.White
)
Spacer(modifier = Modifier.width(5.dp))
Text(
text = "Host",
style = typography.body1.merge(),
color = Color.White,
modifier = Modifier
.clip(
shape = RoundedCornerShape(
size = 3.dp,
),
)
.background(Color.White.copy(alpha = 0.2f))
.padding(
start = 4.dp,
end = 4.dp,
top = 2.dp,
bottom = 2.dp
)
)
}
}
}
}
Inside a Space (a room), the UI should look something like this:
To represent one peer in the audio room (Space), here is a composable function that I created:
@Composable
fun PeerItem(peer: HMSPeer, viewModel: SpaceViewModel) {
val colors = listOf(
0xFF556b2f,
0xFF5f6f7e,
0xFF8c53c6,
0xFFcc0000,
0xFF8b4513,
)
Column(
Modifier.padding(8.dp),
horizontalAlignment = Alignment.CenterHorizontally,
verticalArrangement = Arrangement.Center
) {
Box(
modifier = Modifier.size(60.dp).clip(CircleShape).background(Color(colors.random())),
contentAlignment = Alignment.Center
){
Text(
text = viewModel.getNameInitials(peer.name),
color = Color.White,
style = MaterialTheme.typography.h6
)
}
Text(
peer.name,
modifier = Modifier
.padding(4.dp)
.fillMaxWidth(),
fontSize = 12.sp,
textAlign = TextAlign.Center,
fontWeight = FontWeight.SemiBold,
)
Row(
verticalAlignment = Alignment.CenterVertically
) {
Icon(
painter = if (peer.audioTrack?.isMute == true) {
painterResource(id = R.drawable.ic_mute_mic)
} else {
painterResource(id = R.drawable.audio_wave)
},
modifier = Modifier
.size(12.dp),
tint = Color.Red,
contentDescription = null
)
Spacer(modifier = Modifier.width(5.dp))
Text(
text = peer.hmsRole.name,
textAlign = TextAlign.Right,
fontSize = 10.sp,
fontWeight = FontWeight.Light
)
}
}
}
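PeerItem calls `viewModel.getNameInitials`, which isn't shown above. A minimal implementation might look like this (my own sketch; the final project's version may differ):

```kotlin
// Returns up to two uppercase initials from a display name,
// e.g. "Joel Kanyi" -> "JK", "mary" -> "M".
fun getNameInitials(name: String): String {
    return name.trim()
        .split(Regex("\\s+"))
        .filter { it.isNotEmpty() }
        .take(2)
        .map { it.first().uppercaseChar() }
        .joinToString("")
}
```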
For the item on the bottom containing the mute and unmute mic icon (and others), here is the composable function:
@Composable
fun BottomMicItem(
modifier: Modifier = Modifier,
viewModel: SpaceViewModel
) {
val hmsLocalPeer = viewModel.localPeer.value
Row(
modifier = modifier
.padding(8.dp),
verticalAlignment = Alignment.CenterVertically,
horizontalArrangement = Arrangement.SpaceBetween
) {
Row(Modifier.fillMaxWidth(0.2f)) {
Column(
horizontalAlignment = Alignment.CenterHorizontally,
verticalArrangement = Arrangement.Center
) {
IconButton(
onClick = {
if (hmsLocalPeer?.hmsRole?.name == "listener") {
return@IconButton
}
viewModel.setLocalAudioEnabled(
!viewModel.isLocalAudioEnabled()
)
},
modifier = Modifier
.size(50.dp)
.border(1.dp, Color.LightGray, shape = CircleShape)
) {
Icon(
painter = painterResource(id = R.drawable.ic_big_mic),
modifier = Modifier
.size(24.dp),
tint = Color.Gray,
contentDescription = null
)
}
Text(
text = if (viewModel.isLocalAudioEnabled()) "Mic is on" else "Mic is off",
fontSize = 10.sp,
color = Color.LightGray
)
}
}
Row(
Modifier.fillMaxWidth(0.8f),
verticalAlignment = Alignment.CenterVertically,
horizontalArrangement = Arrangement.SpaceAround
) {
Icon(
imageVector = Icons.Filled.PeopleOutline,
contentDescription = null
)
Icon(
imageVector = Icons.Filled.FavoriteBorder,
contentDescription = null
)
Icon(
imageVector = Icons.Filled.Share,
contentDescription = null
)
IconButton(
onClick = { /*TODO*/ },
modifier = Modifier
.size(24.dp)
.clip(CircleShape)
.background(TwitterBlue)
) {
Icon(
painter = painterResource(id = R.drawable.ic_feather),
modifier = Modifier
.size(24.dp),
tint = Color.White,
contentDescription = null
)
}
}
}
}
Before joining a room, a user needs to be authenticated. From our app, we will try to obtain the auth token generated for us.
import com.google.gson.annotations.SerializedName
data class TokenRequest(
@SerializedName("room_id")
val roomId: String,
@SerializedName("user_id")
val userId: String,
@SerializedName("role")
val role: String = "listener",
)
import com.google.gson.annotations.SerializedName
data class TokenResponse(
@SerializedName("token")
val token: String
)
interface TokenRequestApi {
@POST("api/token")
suspend fun getToken(@Body tokenRequest: TokenRequest): TokenResponse
}
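The Retrofit builder in the app module references a `TOKEN_ENDPOINT` constant. You can keep it in a Constants file; the URL below is only a placeholder, use the token service URL from your own 100ms dashboard:

```kotlin
object Constants {
    // Placeholder: replace with the token endpoint for your own
    // 100ms token service (available on the 100ms dashboard).
    const val TOKEN_ENDPOINT = "https://your-token-service.example.com/"
}
```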
This app module provides the following dependencies:
@Module
@InstallIn(SingletonComponent::class)
object AppModule {
@Singleton
@Provides
fun provideLoggingInterceptor(): HttpLoggingInterceptor {
return HttpLoggingInterceptor().setLevel(HttpLoggingInterceptor.Level.BODY)
}
@Provides
@Singleton
fun provideOkHttpClient(httpLoggingInterceptor: HttpLoggingInterceptor): OkHttpClient {
val okHttpClient = OkHttpClient.Builder()
.addInterceptor(httpLoggingInterceptor)
.callTimeout(15, TimeUnit.SECONDS)
.connectTimeout(15, TimeUnit.SECONDS)
.writeTimeout(15, TimeUnit.SECONDS)
.readTimeout(15, TimeUnit.SECONDS)
return okHttpClient.build()
}
@Provides
@Singleton
fun provideToken(okHttpClient: OkHttpClient): TokenRequestApi {
return Retrofit.Builder()
.baseUrl(TOKEN_ENDPOINT)
.addConverterFactory(GsonConverterFactory.create())
.client(okHttpClient)
.build()
.create(TokenRequestApi::class.java)
}
@Singleton
@Provides
fun provideHMSSdk(application: Application): HMSSDK {
return HMSSDK.Builder(application)
.build()
}
@Provides
@Singleton
fun provideSpaceRepository(api: TokenRequestApi, hmssdk: HMSSDK): SpaceRepository {
return SpaceRepository(api, hmssdk)
}
}
Here, we will go through the process of allowing users to join and leave a Space (Room). Create a repository and inject the HMSSDK and the TokenRequestApi.
This is where you can change the role so that a peer joins with a particular set of permissions. The role can either be a speaker, moderator, or listener.
Note: You need to copy the roomId from your dashboard.
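The repository class itself can receive both dependencies through constructor injection. A sketch (the field names here are assumptions):

```kotlin
import javax.inject.Inject

class SpaceRepository @Inject constructor(
    private val api: TokenRequestApi,
    private val hmsSdk: HMSSDK
) {
    // requestToken, joinRoom, leaveRoom, and the audio helpers
    // described in this section live inside this class.
}
```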
suspend fun requestToken(
userName: String,
roomId: String = "620b0d326f2b876d58ef3bc7"
): TokenResponse {
return api.getToken(TokenRequest(userId = userName, roomId = roomId, role = "speaker"))
}
A user can interact with the participants of a room only after joining it. To join, your app needs the user's name, a valid auth token, and an HMSUpdateListener to receive room events:
fun joinRoom(userName: String, authToken: String, updateListener: HMSUpdateListener) {
val config = HMSConfig(
userName = userName,
authtoken = authToken
)
hmsSdk.join(config, updateListener)
}
Once you're done with a call and want to exit the room (Space), call leave() on the HMSSDK instance you used to join it.
fun leaveRoom() {
hmsSdk.leave()
}
The mute function applies to both audio and video. When you mute audio, other people can't hear you. Let's define a function that lets a speaker toggle mute and unmute while speaking.
fun setLocalAudioEnabled(enabled: Boolean) {
hmsSdk.getLocalPeer()?.audioTrack?.apply {
setMute(!enabled)
}
}
To observe the state of the audio of the local peer, use this code:
fun isLocalAudioEnabled(): Boolean? {
return hmsSdk.getLocalPeer()?.audioTrack?.isMute?.not()
}
In the ViewModel, wrap the repository functions so that the UI can call them:
fun leaveTheSpace() {
repository.leaveRoom()
}
fun setLocalAudioEnabled(enabled: Boolean) {
repository.setLocalAudioEnabled(enabled)
}
fun isLocalAudioEnabled(): Boolean {
return repository.isLocalAudioEnabled() == true
}
The HMSUpdateListener has several methods we can use to update our UI accordingly.
fun startMeeting(name: String) {
loading = true
viewModelScope.launch {
val token = repository.requestToken(name).token
repository.joinRoom(name, token, object : HMSUpdateListener {
override fun onChangeTrackStateRequest(details: HMSChangeTrackStateRequest) {
Timber.d("onChangeTrackStateRequest, track: ${details.track}, requestedBy: ${details.requestedBy}, mute: ${details.mute}")
}
override fun onError(error: HMSException) {
loading = false
Timber.d("An error occurred: ${error.message}")
}
override fun onJoin(room: HMSRoom) {
Timber.d("onJoin: ${room.name}")
loading = false
_peers.value = room.peerList.asList()
_localPeer.value = room.localPeer
}
override fun onMessageReceived(message: HMSMessage) {
Timber.d("Message: ${message.message}")
}
override fun onPeerUpdate(type: HMSPeerUpdate, peer: HMSPeer) {
Timber.d("There was a peer update: $type peer: $peer")
// Handle peer updates.
when (type) {
HMSPeerUpdate.PEER_JOINED -> _peers.value = _peers.value.plus(peer)
HMSPeerUpdate.PEER_LEFT -> _peers.value = _peers.value.filter { currentPeer -> currentPeer.peerID != peer.peerID }
else -> {}
}
}
override fun onRoleChangeRequest(request: HMSRoleChangeRequest) {
Timber.d("Role change request: suggested role: ${request.suggestedRole}, by: ${request.requestedBy} ")
}
override fun onRoomUpdate(type: HMSRoomUpdate, hmsRoom: HMSRoom) {
Timber.d("Room update: type: ${type.name} room: ${hmsRoom.name}")
}
override fun onTrackUpdate(type: HMSTrackUpdate, track: HMSTrack, peer: HMSPeer) {
Timber.d("Somebody's audio/video changed: type: $type, track: $track, peer: $peer")
when (type) {
HMSTrackUpdate.TRACK_REMOVED -> {
Timber.d("Checking, $type, $track")
if (track.type == HMSTrackType.AUDIO) {
_peers.value = _peers.value.filter { currentPeer -> currentPeer.peerID != peer.peerID }
.plus(peer)
} else {
Timber.d("Not processed, $type, $track")
}
}
HMSTrackUpdate.TRACK_DESCRIPTION_CHANGED -> Timber.d("Other mute/unmute $type, $track")
}
}
})
}
}
When you call this method in SpaceScreen, update your composable and run the app. The result should be something similar to this:
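For reference, the screen-level wiring can be as simple as triggering startMeeting once and rendering the peer list in a grid. This is a trimmed sketch under assumed names (`peers` as Compose state on the ViewModel, Hilt's `hiltViewModel()`), not the full SpaceScreen from the repository:

```kotlin
@OptIn(ExperimentalFoundationApi::class)
@Composable
fun SpaceScreen(userName: String, viewModel: SpaceViewModel = hiltViewModel()) {
    // Join the room once, when the screen first enters composition.
    LaunchedEffect(key1 = true) {
        viewModel.startMeeting(userName)
    }

    val peers = viewModel.peers.value
    Column(modifier = Modifier.fillMaxSize()) {
        // Three peers per row, like the Twitter Spaces layout.
        LazyVerticalGrid(cells = GridCells.Fixed(3), modifier = Modifier.weight(1f)) {
            items(peers) { peer ->
                PeerItem(peer = peer, viewModel = viewModel)
            }
        }
        BottomMicItem(viewModel = viewModel)
    }
}
```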
If you are interested, have a look at this GitHub repository containing the full implementation of this clone.