使用 Android WebRTC 和 Node.js 中的 WebSocket 服务器实现视频通话

本文将研究如何在应用程序中实现视频通话,而无需使用外部 SDK,让我们保持视频通话集成的简单性。

注:关于 WebRTC、ICE、SDP 和其他一些概念的基础知识这里不展开。

前提条件

  • Android Studio: 确保已安装 Android Studio,以便构建和运行 Android 应用程序。
  • Node.js:安装 Node.js,以运行 WebSocket 服务器。

现在,我们将用 Node.js 编写一个 WebSocket 服务器。

WebSocket 是聊天应用程序和网页实时更新的首选。它适用于在网站和服务器之间实时发送信息(文本或二进制)。另一方面,当涉及到视频通话时,您需要在两台设备之间进行直接的视频和音频通信,而中间不需要服务器,这就是 WebRTC 框架发挥作用的地方。它可用于实时通信,处理从视频通话、屏幕共享到实时流媒体等一切事务。

设置 WebSocket 服务器

创建一个新文件,命名为 server.js,然后将以下 Node.js 代码复制到该文件中:

const http = require("http");
const WebSocketServer = require("websocket").server;

const server = http.createServer((req, res) => {
    res.writeHead(200, { 'Content-Type': 'text/plain' });
    res.end('WebSocket server is running.');
});

const PORT = process.env.PORT || 2000;

server.listen(PORT, () => {
    console.log(`Server Started On Port ${PORT}`);
});

const webSocket = new WebSocketServer({ httpServer: server });

const users = [];

webSocket.on('request', (req) => {
    const connection = req.accept();

    connection.on('message', (message) => {
        const data = JSON.parse(message.utf8Data);
        const user = findUser(data.name);

        switch(data.type) {
            case "STORE_USER":
                if(user !== null) {
                    connection.send(JSON.stringify({
                        type: 'user already exists'
                    }));
                    return;
                }

                const newUser = {
                    name: data.name,
                    conn: connection
                };
                users.push(newUser);
                break;

            case "START_CALL":
                let userToCall = findUser(data.target);

                if(userToCall) {
                    connection.send(JSON.stringify({
                        type: "CALL_RESPONSE",
                        data: "user is ready for call"
                    }));
                } else {
                    connection.send(JSON.stringify({
                        type: "CALL_RESPONSE",
                        data: "user is not online"
                    }));
                }
                break;

            case "CREATE_OFFER":
                let userToReceiveOffer = findUser(data.target);

                if(userToReceiveOffer) {
                    userToReceiveOffer.conn.send(JSON.stringify({
                        type: "OFFER_RECIEVED",
                        name: data.name,
                        data: data.data.sdp
                    }));
                }
                break;

            case "CREATE_ANSWER":
                let userToReceiveAnswer = findUser(data.target);
                if(userToReceiveAnswer) {
                    userToReceiveAnswer.conn.send(JSON.stringify({
                        type: "ANSWER_RECIEVED",
                        name: data.name,
                        data: data.data.sdp
                    }));
                }
                break;

            case "ICE_CANDIDATE":
                let userToReceiveIceCandidate = findUser(data.target);
                if(userToReceiveIceCandidate) {
                    userToReceiveIceCandidate.conn.send(JSON.stringify({
                        type: "ICE_CANDIDATE",
                        name: data.name,
                        data: {
                            sdpMLineIndex: data.data.sdpMLineIndex,
                            sdpMid: data.data.sdpMid,
                            sdpCandidate: data.data.sdpCandidate
                        }
                    }));
                }
                break;

            case "CALL_ENDED":
                let userToNotifyCallEnded = findUser(data.target);
                if(userToNotifyCallEnded) {
                    userToNotifyCallEnded.conn.send(JSON.stringify({
                        type: "CALL_ENDED",
                        name: data.name
                    }));
                }
                break;
        }
    });

    connection.on('close', () => {
        users.forEach(user => {
            if(user.conn === connection) {
                users.splice(users.indexOf(user), 1);
            }
        });
    });
});

const findUser = (username) => {
    for(let i = 0; i < users.length; i++) {
        if(users[i].name === username)
            return users[i];
    }
};

该代码创建了一个简单的 WebSocket 服务器,用于处理用户注册、呼叫启动、报价创建、答案创建、ice candidate 交换和呼叫结束。

使用命令运行服务器:

node server.js

您的 WebSocket 服务器现在运行在 http://localhost:2000 上。

请确保将此 URL 替换为您的服务器 URL(您上传 WebSocket 服务器的位置)。我已经在 Heroku 上部署了 WebSocket 服务器,但我们不会在本文中介绍部署过程。你可以在 Google 或 YouTube 上轻松找到如何在 Heroku 上传 WebSocket 服务器的教程。

Android 应用程序与 WebRTC 的集成

在将 WebRTC 集成到 Android 应用程序之前,请允许我重申,这是以我的医疗保健应用程序为基础的。这可能与您的应用程序要求略有不同。以下是我的应用程序的工作原理: 我在 Firebase Firestore 上存储用户和医生信息。当用户与医生安排视频通话预约时,他们可以互相发起视频通话。怎么做?所有操作都是通过他们唯一的用户 ID(uids)完成的,您可以查看代码中的详细信息。

1. 项目设置

打开 Android Studio,创建一个新的 Android 项目。在 build.gradle 文件(Module: app)中,为 WebRTC 添加以下依赖关系:

   implementation 'com.mesibo.api:webrtc:1.0.5'
   implementation 'org.java-websocket:Java-WebSocket:1.5.3'

   //Easy Permissions
   implementation 'com.guolindev.permissionx:permissionx:1.6.1'

确保在AndroidManifest.xml文件中配置了必要的权限,包括摄像头和麦克风权限。

  <uses-feature
        android:name="android.hardware.camera"
        android:required="true" />

    <uses-permission android:name="android.permission.INTERNET" />
    <uses-permission android:name="android.permission.FOREGROUND_SERVICE" />
    <uses-permission android:name="android.permission.RECORD_AUDIO" />
    <uses-permission android:name="android.permission.CAMERA" />
    <uses-permission android:name="android.permission.MODIFY_AUDIO_SETTINGS"/>
    <uses-permission
        android:name="android.permission.CAPTURE_VIDEO_OUTPUT"
        tools:ignore="ProtectedPermissions" />

2. 设计视频通话UI

<com.google.android.material.card.MaterialCardView
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                app:layout_constraintEnd_toEndOf="parent"
                app:cardElevation="20dp"
                android:layout_marginStart="20dp"
                app:strokeWidth="0dp"
                android:layout_marginEnd="20dp"
                app:cardBackgroundColor="@color/specialistCardBackgroundColor"
                app:layout_constraintStart_toStartOf="parent"
                app:layout_constraintTop_toTopOf="parent">

                <LinearLayout
                    android:id="@+id/incomingCallLayout"
                    android:layout_width="match_parent"
                    android:layout_height="80dp"
                    android:layout_alignParentTop="true"
                    android:orientation="horizontal"
                    android:visibility="gone"
                    android:layout_marginStart="20dp"
                    android:layout_marginEnd="20dp"
                    >

                    <TextView
                        android:id="@+id/incomingName"
                        android:layout_width="0dp"
                        android:layout_height="match_parent"
                        android:layout_weight="6"
                        android:gravity="center"
                        android:text="someone is calling"
                        android:textColor="@color/black"
                        android:textSize="20sp" />

                    <ImageView
                        android:id="@+id/acceptButton"
                        android:layout_width="0dp"
                        android:layout_height="match_parent"
                        android:layout_weight="1.5"
                        android:padding="15dp"
                        android:src="@drawable/ic_accept" />

                    <ImageView
                        android:id="@+id/rejectButton"
                        android:layout_width="0dp"
                        android:layout_height="match_parent"
                        android:layout_weight="1.5"
                        android:padding="15dp"
                        android:src="@drawable/ic_reject" />

                </LinearLayout>

            </com.google.android.material.card.MaterialCardView>
 <RelativeLayout
        android:layout_width="match_parent"
        android:id="@+id/callLayout"
        android:visibility="gone"
        android:elevation="40dp"
        android:layout_height="match_parent">
        <org.webrtc.SurfaceViewRenderer
            android:id="@+id/remote_view"
            android:layout_width="match_parent"
            android:layout_height="match_parent" />

        <org.webrtc.SurfaceViewRenderer
            android:id="@+id/local_view"
            android:layout_width="120dp"
            android:layout_height="150dp"
            android:layout_above="@+id/controls"
            android:layout_marginStart="8dp"
            android:layout_marginTop="8dp"
            android:layout_marginEnd="8dp"
            android:layout_marginBottom="8dp"
            android:elevation="16dp" />
        <ProgressBar
            android:layout_centerInParent="true"
            android:id="@+id/remote_view_loading"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:indeterminate="true"
            app:layout_constraintBottom_toBottomOf="@id/remote_view"
            app:layout_constraintEnd_toEndOf="@id/remote_view"
            app:layout_constraintStart_toStartOf="@id/remote_view"
            app:layout_constraintTop_toTopOf="@id/remote_view" />
        <HorizontalScrollView
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            android:layout_centerHorizontal="true"
            android:layout_alignParentBottom="true">


        <LinearLayout
            android:gravity="center"
            android:background="@drawable/curve_background"
            android:backgroundTint="@android:color/secondary_text_light"
            android:id="@+id/controls"
            android:orientation="horizontal"
            android:layout_width="match_parent"
            android:layout_height="wrap_content">
            <ImageView
                android:id="@+id/mic_button"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_marginTop="16dp"
                android:layout_marginLeft="16dp"
                android:layout_marginBottom="16dp"
                android:layout_marginRight="2dp"
                android:clickable="true"
                android:focusable="true"
                android:padding="12dp"
                android:background="@drawable/circle_background"
                app:backgroundTint="@color/cardview_dark_background"
                app:srcCompat="@drawable/ic_baseline_mic_24" />
            <ImageView
                android:id="@+id/video_button"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_marginTop="16dp"
                android:layout_marginLeft="16dp"
                android:layout_marginBottom="16dp"
                android:layout_marginRight="2dp"
                android:clickable="true"
                android:focusable="true"
                android:padding="12dp"
                android:background="@drawable/circle_background"
                app:backgroundTint="@color/cardview_dark_background"
                app:srcCompat="@drawable/ic_baseline_videocam_24" />
            <ImageView
                android:id="@+id/end_call_button"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_marginTop="16dp"
                android:layout_marginLeft="16dp"
                android:layout_marginBottom="16dp"
                android:layout_marginRight="2dp"
                android:clickable="true"
                android:focusable="true"
                android:padding="12dp"
                android:background="@drawable/circle_background"
                app:backgroundTint="@android:color/holo_red_dark"
                app:srcCompat="@drawable/ic_baseline_call_end_24" />
            <ImageView
                android:id="@+id/switch_camera_button"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:layout_marginTop="16dp"
                android:layout_marginLeft="16dp"
                android:layout_marginBottom="16dp"
                android:layout_marginRight="2dp"
                android:clickable="true"
                android:focusable="true"
                android:padding="12dp"
                android:background="@drawable/circle_background"
                app:backgroundTint="@color/cardview_dark_background"
                app:srcCompat="@drawable/ic_baseline_cameraswitch_24" />
            <ImageView
                android:id="@+id/audio_output_button"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:clickable="true"
                android:focusable="true"
                android:layout_marginTop="16dp"
                android:layout_marginLeft="16dp"
                android:layout_marginBottom="16dp"
                android:layout_marginRight="2dp"
                android:padding="12dp"
                android:background="@drawable/circle_background"
                app:backgroundTint="@color/cardview_dark_background"
                app:srcCompat="@drawable/ic_baseline_speaker_up_24" />
        </LinearLayout>
        </HorizontalScrollView>
    </RelativeLayout>

以下是此 UI 设计中使用的所有资源的链接:- https://drive.google.com/drive/folders/1VIj1Nvp8zNcyUYi9XSLFie1In0HLWeor?usp=sharing

3.创建RTCClient类

此类将处理WebRTC功能,例如创建对等连接、处理本地和远程流以及管理呼叫生命周期。

class RTCClient(
    private val application: Application,
    private val username: String,
    private val webSocketManager: WebSocketManager,
    private val observer: PeerConnection.Observer
) {

    init {
        initPeerConnectionFactory(application)
    }

    /*
      The code initializes essential components of WebRTC, including the 
      EglBase for rendering, PeerConnectionFactory for managing peer 
      connections, ICE servers for STUN and TURN, and local media sources.
    */

    private val eglContext = EglBase.create()
    private val peerConnectionFactory by lazy { createPeerConnectionFactory() }
    private val iceServer = listOf(
        PeerConnection.IceServer.builder("stun:iphone-stun.strato-iphone.de:3478")
            .createIceServer(),
        PeerConnection.IceServer("stun:openrelay.metered.ca:80"),
        PeerConnection.IceServer(
            "turn:openrelay.metered.ca:80",
            "openrelayproject",
            "openrelayproject"
        ),
        PeerConnection.IceServer(
            "turn:openrelay.metered.ca:443",
            "openrelayproject",
            "openrelayproject"
        ),
        PeerConnection.IceServer(
            "turn:openrelay.metered.ca:443?transport=tcp",
            "openrelayproject",
            "openrelayproject"
        ),

        )
    private val peerConnection by lazy { createPeerConnection(observer) }
    private val localVideoSource by lazy { peerConnectionFactory.createVideoSource(false) }
    private val localAudioSource by lazy { peerConnectionFactory.createAudioSource(MediaConstraints()) }
    private var videoCapturer: CameraVideoCapturer? = null
    private var localAudioTrack: AudioTrack? = null
    private var localVideoTrack: VideoTrack? = null


    /*
      These next three functions handle the initialization and creation of the 
      PeerConnectionFactory and PeerConnection instances with appropriate 
      configurations.
    */
    private fun initPeerConnectionFactory(application: Application) {
        val peerConnectionOption = PeerConnectionFactory.InitializationOptions.builder(application)
            .setEnableInternalTracer(true)
            .setFieldTrials("WebRTC-H264HighProfile/Enabled/")
            .createInitializationOptions()

        PeerConnectionFactory.initialize(peerConnectionOption)
    }

    private fun createPeerConnectionFactory(): PeerConnectionFactory {
        return PeerConnectionFactory.builder()
            .setVideoEncoderFactory(
                DefaultVideoEncoderFactory(
                    eglContext.eglBaseContext,
                    true,
                    true
                )
            )
            .setVideoDecoderFactory(DefaultVideoDecoderFactory(eglContext.eglBaseContext))
            .setOptions(PeerConnectionFactory.Options().apply {
                disableEncryption = true
                disableNetworkMonitor = true
            }).setAudioDeviceModule(
                JavaAudioDeviceModule.builder(application)
                    .setUseHardwareAcousticEchoCanceler(Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q)
                    .setUseHardwareNoiseSuppressor(Build.VERSION.SDK_INT >= Build.VERSION_CODES.Q)
                    .createAudioDeviceModule().also {
                        it.setMicrophoneMute(false)
                        it.setSpeakerMute(false)
                    }
            ).createPeerConnectionFactory()
    }

    private fun createPeerConnection(observer: PeerConnection.Observer): PeerConnection? {
        return peerConnectionFactory.createPeerConnection(iceServer, observer)
    }

   /*
    These next three functions handle the initialization of a SurfaceViewRenderer 
    for local video and starting the local video capture.
   */

    fun initializeSurfaceView(surface: SurfaceViewRenderer) {
        surface.run {
            setEnableHardwareScaler(true)
            setMirror(true)
            init(eglContext.eglBaseContext, null)
        }
    }

    fun startLocalVideo(surface: SurfaceViewRenderer) {
        val surfaceTextureHelper =
            SurfaceTextureHelper.create(Thread.currentThread().name, eglContext.eglBaseContext)
        videoCapturer = getVideoCapturer(application)
        videoCapturer?.initialize(
            surfaceTextureHelper,
            surface.context, localVideoSource.capturerObserver
        )
        videoCapturer?.startCapture(320, 240, 30)
        localVideoTrack = peerConnectionFactory.createVideoTrack("local_track", localVideoSource)
        localVideoTrack?.addSink(surface)
        localAudioTrack =
            peerConnectionFactory.createAudioTrack("local_track_audio", localAudioSource)
        val localStream = peerConnectionFactory.createLocalMediaStream("local_stream")
        localStream.addTrack(localAudioTrack)
        localStream.addTrack(localVideoTrack)

        peerConnection?.addStream(localStream)

    }

    private fun getVideoCapturer(application: Application): CameraVideoCapturer {
        return Camera2Enumerator(application).run {
            deviceNames.find {
                isFrontFacing(it)
            }?.let {
                createCapturer(it, null)
            } ?: throw IllegalStateException()
        }
    }

    /*
      These functions are responsible for initiating a call, handling the 
      reception of a remote session description, and answering an incoming 
      call.  
    */

    fun call(target: String) {
        val mediaConstraints = MediaConstraints()
        mediaConstraints.mandatory.add(MediaConstraints.KeyValuePair("OfferToReceiveVideo", "true"))


        peerConnection?.createOffer(object : SdpObserver {
            override fun onCreateSuccess(desc: SessionDescription?) {
                peerConnection?.setLocalDescription(object : SdpObserver {
                    override fun onCreateSuccess(p0: SessionDescription?) {

                    }

                    override fun onSetSuccess() {
                        val offer = hashMapOf(
                            "sdp" to desc?.description,
                            "type" to desc?.type
                        )

                        webSocketManager.sendMessageToSocket(
                            MessageModel(
                                TYPE.CREATE_OFFER, username, target, offer
                            )
                        )
                    }

                    override fun onCreateFailure(p0: String?) {
                    }

                    override fun onSetFailure(p0: String?) {
                    }

                }, desc)

            }

            override fun onSetSuccess() {
            }

            override fun onCreateFailure(p0: String?) {
            }

            override fun onSetFailure(p0: String?) {
            }
        }, mediaConstraints)
    }

    fun onRemoteSessionReceived(session: SessionDescription) {
        peerConnection?.setRemoteDescription(object : SdpObserver {
            override fun onCreateSuccess(p0: SessionDescription?) {

            }

            override fun onSetSuccess() {
            }

            override fun onCreateFailure(p0: String?) {
            }

            override fun onSetFailure(p0: String?) {
            }

        }, session)

    }

    fun answer(target: String) {
        val constraints = MediaConstraints()
        constraints.mandatory.add(MediaConstraints.KeyValuePair("OfferToReceiveVideo", "true"))

        peerConnection?.createAnswer(object : SdpObserver {
            override fun onCreateSuccess(desc: SessionDescription?) {
                peerConnection?.setLocalDescription(object : SdpObserver {
                    override fun onCreateSuccess(p0: SessionDescription?) {
                    }


                    override fun onSetSuccess() {
                        val answer = hashMapOf(
                            "sdp" to desc?.description,
                            "type" to desc?.type
                        )
                        webSocketManager.sendMessageToSocket(
                            MessageModel(
                                TYPE.CREATE_ANSWER, username, target, answer
                            )
                        )
                    }

                    override fun onCreateFailure(p0: String?) {
                    }

                    override fun onSetFailure(p0: String?) {
                    }

                }, desc)
            }

            override fun onSetSuccess() {
            }

            override fun onCreateFailure(p0: String?) {
            }

            override fun onSetFailure(p0: String?) {
            }

        }, constraints)
    }

    /*
      These functions handle the addition of ICE candidates, camera switching, 
      audio toggling, video toggling, and ending a call.
    */

    fun addIceCandidate(p0: IceCandidate?) {
        peerConnection?.addIceCandidate(p0)
    }

    fun switchCamera() {
        videoCapturer?.switchCamera(null)
    }

    fun toggleAudio(mute: Boolean) {
        localAudioTrack?.setEnabled(mute)
    }

    fun toggleCamera(cameraPause: Boolean) {
        localVideoTrack?.setEnabled(cameraPause)
    }

    fun endCall() {
        peerConnection?.close()
    }
}

4. 创建WebSocketManager类

此类负责管理 Android 应用程序中的 WebSocket 通信。它有助于初始化 WebSocket 连接、向 WebSocket 服务器发送消息以及处理传入消息。

class WebSocketManager(private val messageInterface:NewMessageInterface) {
    private var webSocket:WebSocketClient?=null
    private var UID:String?=null
    private val gson = Gson()

    fun initSocket(uid:String){
        UID = uid

        webSocket = object:WebSocketClient(URI("wss://YOUR_URL.herokuapp.com/")){
            override fun onOpen(handshakedata: ServerHandshake?) {
                sendMessageToSocket(
                    MessageModel(
                        TYPE.STORE_USER,uid,null,null
                    )
                )
                Log.d(VIDEOCALLINGWEBRTC,"HANDSHAKEDATA:- ${handshakedata.toString()}")
            }

            override fun onMessage(message: String?) {
                try {
                    messageInterface.onNewMessage(gson.fromJson(message,MessageModel::class.java))
                    Log.d(VIDEOCALLINGWEBRTC,"ONNEWMESSAGE:- ${message.toString()}")
                }catch (e:Exception){
                    e.printStackTrace()
                    Log.d(VIDEOCALLINGWEBRTC,"EXCEPTION:- ${e.message.toString()}")
                }
            }

            override fun onClose(code: Int, reason: String?, remote: Boolean) {
                Log.d(VIDEOCALLINGWEBRTC, "onClose: $reason")
            }

            override fun onError(ex: Exception?) {
                Log.d(VIDEOCALLINGWEBRTC, "onError: $ex")
            }

        }
        webSocket?.connect()
        Log.d(VIDEOCALLINGWEBRTC,"Connection:- ${webSocket?.socket?.isConnected.toString()}")
    }

    /*
      The sendMessageToSocket function is responsible for converting a 
      MessageModel object to JSON and sending it to the WebSocket server.
    */

    fun sendMessageToSocket(message: MessageModel) {
        try {
            Log.d(VIDEOCALLINGWEBRTC, "sendMessageToSocket: $message")
            webSocket?.send(Gson().toJson(message))
            Log.d(VIDEOCALLINGWEBRTC, "sendMessageToSocket JSON FORMAT: ${Gson().toJson(message)}")
        } catch (e: Exception) {
            Log.d(VIDEOCALLINGWEBRTC, "sendMessageToSocket: $e")
        }
    }
}

5. 创建 data class, IceCandidate class, and enum class

data class MessageModel(
    val type: TYPE, // An enum of type TYPE indicating the type of the message. It specifies the purpose or category of the message.
    val name: String? = null, // A nullable String representing the name associated with the message. It is used in scenarios such as storing a user or indicating the caller's or callee's name.
    val target: String? = null, // A nullable String representing the target user for the message. It signifies the intended recipient of the message.
    val data:Any?=null // A generic Any type representing additional data associated with the message. The actual content of data depends on the message type.
)
enum class TYPE {
    STORE_USER, // Indicates a message for storing a user on the server.
    START_CALL,
    CALL_RESPONSE,
    CREATE_OFFER, // Signifies the creation of an offer in WebRTC.
    OFFER_RECIEVED, // Indicates the reception of an offer in WebRTC.
    CREATE_ANSWER, // Signifies the creation of an answer in WebRTC.
    ANSWER_RECIEVED, // ndicates the reception of an answer in WebRTC.
    ICE_CANDIDATE, // Represents the exchange of ICE candidates in WebRTC.
    CALL_ENDED
}
class IceCandidateModel(
    val sdpMid:String, // A String representing the SDP (Session Description Protocol) mid of the ICE candidate.
    val sdpMLineIndex:Double, //  A Double representing the SDP MLine index of the ICE candidate.
    val sdpCandidate:String //  A String representing the SDP candidate string of the ICE candidate.
) {
}

6. 创建NewMessage接口

接口充当回调机制,允许实现它的类或组件接收我们从 WebSocket 服务器获取的新消息并做出反应。

interface NewMessageInterface {
    fun onNewMessage(message: MessageModel)
}

7. 创建 PeerConnectionObserver

通过扩展此类并重写特定方法,可以自定义应用程序的行为以响应不同的对等连接事件。

open class PeerConnectionObserver : PeerConnection.Observer{
    override fun onSignalingChange(p0: PeerConnection.SignalingState?) {
    }

    override fun onIceConnectionChange(p0: PeerConnection.IceConnectionState?) {
    }

    override fun onIceConnectionReceivingChange(p0: Boolean) {
    }

    override fun onIceGatheringChange(p0: PeerConnection.IceGatheringState?) {
    }

    override fun onIceCandidate(p0: IceCandidate?) {
    }

    override fun onIceCandidatesRemoved(p0: Array<out IceCandidate>?) {
    }

    override fun onAddStream(p0: MediaStream?) {
    }

    override fun onRemoveStream(p0: MediaStream?) {
    }

    override fun onDataChannel(p0: DataChannel?) {
    }

    override fun onRenegotiationNeeded() {
    }

    override fun onAddTrack(p0: RtpReceiver?, p1: Array<out MediaStream>?) {
    }
}

8. 创建 RtcAudioManager 类

RtcAudioManager 类旨在管理 WebRTC 应用程序中的音频相关功能。它处理音频设备(如免提电话、有线耳机和听筒)的选择和管理。

将该类添加到您的项目中,只需将其复制并粘贴到您的代码库中即可。无需深入研究该类的细节。

class RtcAudioManager(context: android.content.Context) {
    enum class AudioDevice {
        SPEAKER_PHONE, WIRED_HEADSET, EARPIECE, NONE
    }

    /** AudioManager state.  */
    enum class AudioManagerState {
        UNINITIALIZED, PREINITIALIZED, RUNNING
    }

    /** Selected audio device change event.  */
    interface AudioManagerEvents {
        // Callback fired once audio device is changed or list of available audio devices changed.
        fun onAudioDeviceChanged(
            selectedAudioDevice: AudioDevice?, availableAudioDevices: Set<AudioDevice?>?
        )
    }

    private val apprtcContext: android.content.Context

    private val audioManager: AudioManager

    @Nullable
    private var audioManagerEvents: AudioManagerEvents? = null
    private var amState: AudioManagerState
    private var savedAudioMode = AudioManager.MODE_INVALID
    private var savedIsSpeakerPhoneOn = false
    private var savedIsMicrophoneMute = false
    private var hasWiredHeadset = false

    // Default audio device; speaker phone for video calls or earpiece for audio
    // only calls.
    private var defaultAudioDevice: AudioDevice? = null

    // Contains the currently selected audio device.
    // This device is changed automatically using a certain scheme where e.g.
    // a wired headset "wins" over speaker phone. It is also possible for a
    // user to explicitly select a device (and overrid any predefined scheme).
    // See |userSelectedAudioDevice| for details.
    private var selectedAudioDevice: AudioDevice? = null

    // Contains the user-selected audio device which overrides the predefined
    // selection scheme.
    private var userSelectedAudioDevice: AudioDevice? = null

    // Contains speakerphone setting: auto, true or false
    @Nullable
    private val useSpeakerphone: String?


    // Contains a list of available audio devices. A Set collection is used to
    // avoid duplicate elements.
    private var audioDevices: MutableSet<AudioDevice?> = HashSet()

    // Broadcast receiver for wired headset intent broadcasts.
    private val wiredHeadsetReceiver: BroadcastReceiver

    // Callback method for changes in audio focus.
    @Nullable
    private var audioFocusChangeListener: AudioManager.OnAudioFocusChangeListener? = null


    /* Receiver which handles changes in wired headset availability. */
    private inner class WiredHeadsetReceiver : BroadcastReceiver() {
        override fun onReceive(p0: android.content.Context?, intent: Intent?) {
            val state = intent?.getIntExtra("state", STATE_UNPLUGGED)
            val microphone = intent?.getIntExtra("microphone", HAS_NO_MIC)
            val name = intent?.getStringExtra("name")
            Log.d(
                TAG, "WiredHeadsetReceiver.onReceive"
                        + ": " + "a=" + intent?.action.toString() + ", s=" +
                        (if (state == STATE_UNPLUGGED) "unplugged" else "plugged").toString()
                        + ", m=" + (if (microphone == HAS_MIC) "mic" else "no mic").toString()
                        + ", n=" + name.toString() + ", sb=" + isInitialStickyBroadcast
            )
            hasWiredHeadset = (state == STATE_PLUGGED)
            updateAudioDeviceState()
        }

        private val STATE_UNPLUGGED = 0
        private val STATE_PLUGGED = 1
        private val HAS_NO_MIC = 0
        private val HAS_MIC = 1
    }

    fun start(audioManagerEvents: AudioManagerEvents?) {
        Log.d(TAG, "start")
        ThreadUtils.checkIsOnMainThread()
        if (amState == AudioManagerState.RUNNING) {
            Log.e(TAG, "AudioManager is already active")
            return
        }
//        else if (amState == AudioManagerState.UNINITIALIZED) {
//            preInitAudio()
//        }
        // TODO perhaps call new method called preInitAudio() here if UNINITIALIZED.
        Log.d(TAG, "AudioManager starts...")
        this.audioManagerEvents = audioManagerEvents
        amState = AudioManagerState.RUNNING

        // Store current audio state so we can restore it when stop() is called.
        savedAudioMode = audioManager.mode
        savedIsSpeakerPhoneOn = audioManager.isSpeakerphoneOn
        savedIsMicrophoneMute = audioManager.isMicrophoneMute
        hasWiredHeadset = hasWiredHeadset()

        // Create an AudioManager.OnAudioFocusChangeListener instance.
        audioFocusChangeListener =
            AudioManager.OnAudioFocusChangeListener { focusChange ->

                // Called on the listener to notify if the audio focus for this listener has been changed.
                // The |focusChange| value indicates whether the focus was gained, whether the focus was lost,
                // and whether that loss is transient, or whether the new focus holder will hold it for an
                // unknown amount of time.

                val typeOfChange: String
                when (focusChange) {
                    AudioManager.AUDIOFOCUS_GAIN -> typeOfChange = "AUDIOFOCUS_GAIN"
                    AudioManager.AUDIOFOCUS_GAIN_TRANSIENT -> typeOfChange =
                        "AUDIOFOCUS_GAIN_TRANSIENT"

                    AudioManager.AUDIOFOCUS_GAIN_TRANSIENT_EXCLUSIVE -> typeOfChange =
                        "AUDIOFOCUS_GAIN_TRANSIENT_EXCLUSIVE"

                    AudioManager.AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK -> typeOfChange =
                        "AUDIOFOCUS_GAIN_TRANSIENT_MAY_DUCK"

                    AudioManager.AUDIOFOCUS_LOSS -> typeOfChange = "AUDIOFOCUS_LOSS"
                    AudioManager.AUDIOFOCUS_LOSS_TRANSIENT -> typeOfChange =
                        "AUDIOFOCUS_LOSS_TRANSIENT"

                    AudioManager.AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK -> typeOfChange =
                        "AUDIOFOCUS_LOSS_TRANSIENT_CAN_DUCK"

                    else -> typeOfChange = "AUDIOFOCUS_INVALID"
                }
                Log.d(TAG, "onAudioFocusChange: $typeOfChange")
            }

        // Request audio playout focus (without ducking) and install listener for changes in focus.
        val result = audioManager.requestAudioFocus(
            audioFocusChangeListener,
            AudioManager.STREAM_VOICE_CALL, AudioManager.AUDIOFOCUS_GAIN_TRANSIENT
        )
        if (result == AudioManager.AUDIOFOCUS_REQUEST_GRANTED) {
            Log.d(TAG, "Audio focus request granted for VOICE_CALL streams")
        } else {
            Log.e(TAG, "Audio focus request failed")
        }

        // Start by setting MODE_IN_COMMUNICATION as default audio mode. It is
        // required to be in this mode when playout and/or recording starts for
        // best possible VoIP performance.
        audioManager.mode = AudioManager.MODE_IN_COMMUNICATION

        // Always disable microphone mute during a WebRTC call.
        setMicrophoneMute(false)

        // Set initial device states.
        userSelectedAudioDevice = AudioDevice.NONE
        selectedAudioDevice = AudioDevice.NONE
        audioDevices.clear()

        // Do initial selection of audio device. This setting can later be changed
        // either by adding/removing a BT or wired headset or by covering/uncovering
        // the proximity sensor.
        updateAudioDeviceState()

        // Register receiver for broadcast intents related to adding/removing a
        // wired headset.
        registerReceiver(wiredHeadsetReceiver, IntentFilter(Intent.ACTION_HEADSET_PLUG))
        Log.d(TAG, "AudioManager started")
    }

    @SuppressLint("WrongConstant")
    fun stop() {
        Log.d(TAG, "stop")
        ThreadUtils.checkIsOnMainThread()
        if (amState != AudioManagerState.RUNNING) {
            Log.e(
                TAG,
                "Trying to stop AudioManager in incorrect state: $amState"
            )
            return
        }
        amState = AudioManagerState.UNINITIALIZED
        unregisterReceiver(wiredHeadsetReceiver)

        // Restore previously stored audio states.
        setSpeakerphoneOn(savedIsSpeakerPhoneOn)
        setMicrophoneMute(savedIsMicrophoneMute)
        audioManager.mode = savedAudioMode

        // Abandon audio focus. Gives the previous focus owner, if any, focus.
        audioManager.abandonAudioFocus(audioFocusChangeListener)
        audioFocusChangeListener = null
        Log.d(TAG, "Abandoned audio focus for VOICE_CALL streams")

        audioManagerEvents = null
        Log.d(TAG, "AudioManager stopped")
    }

    /** Changes selection of the currently active audio device.  */
    private fun setAudioDeviceInternal(device: AudioDevice?) {
        Log.d(TAG, "setAudioDeviceInternal(device=$device)")
        if (audioDevices.contains(device)) {
            when (device) {
                AudioDevice.SPEAKER_PHONE -> setSpeakerphoneOn(true)
                AudioDevice.EARPIECE -> setSpeakerphoneOn(false)
                AudioDevice.WIRED_HEADSET -> setSpeakerphoneOn(false)
                else -> Log.e(TAG, "Invalid audio device selection")
            }
        }
        selectedAudioDevice = device
    }

    /**
     * Changes default audio device.
     */
    fun setDefaultAudioDevice(defaultDevice: AudioDevice?) {
        ThreadUtils.checkIsOnMainThread()
        when (defaultDevice) {
            AudioDevice.SPEAKER_PHONE -> defaultAudioDevice = defaultDevice
            AudioDevice.EARPIECE -> if (hasEarpiece()) {
                defaultAudioDevice = defaultDevice
            } else {
                defaultAudioDevice = AudioDevice.SPEAKER_PHONE
            }
            else -> Log.e(TAG, "Invalid default audio device selection")
        }
        Log.d(TAG, "setDefaultAudioDevice(device=$defaultAudioDevice)")
        updateAudioDeviceState()
    }

    /** Changes selection of the currently active audio device.  */
    fun selectAudioDevice(device: AudioDevice) {
        ThreadUtils.checkIsOnMainThread()
        if (!audioDevices.contains(device)) {
            Log.e(
                TAG,
                "Can not select $device from available $audioDevices"
            )
        }
        userSelectedAudioDevice = device
        updateAudioDeviceState()
    }

    /** Returns current set of available/selectable audio devices.  */
    fun getAudioDevices(): Set<AudioDevice> {
        ThreadUtils.checkIsOnMainThread()
        return Collections.unmodifiableSet(HashSet(audioDevices)) as Set<AudioDevice>
    }

    /** Returns the currently selected audio device.  */
    fun getSelectedAudioDevice(): AudioDevice? {
        ThreadUtils.checkIsOnMainThread()
        return selectedAudioDevice
    }

    /** Helper method for receiver registration.  */
    private fun registerReceiver(receiver: BroadcastReceiver, filter: IntentFilter) {
        apprtcContext.registerReceiver(receiver, filter)
    }

    /** Helper method for unregistration of an existing receiver.  */
    private fun unregisterReceiver(receiver: BroadcastReceiver) {
        apprtcContext.unregisterReceiver(receiver)
    }

    /** Sets the speaker phone mode.  */
    private fun setSpeakerphoneOn(on: Boolean) {
        val wasOn = audioManager.isSpeakerphoneOn
        if (wasOn == on) {
            return
        }
        audioManager.isSpeakerphoneOn = on
    }

    /** Sets the microphone mute state.  */
    private fun setMicrophoneMute(on: Boolean) {
        val wasMuted = audioManager.isMicrophoneMute
        if (wasMuted == on) {
            return
        }
        audioManager.isMicrophoneMute = on
    }

    /** Gets the current earpiece state.  */
    private fun hasEarpiece(): Boolean {
        return apprtcContext.packageManager.hasSystemFeature(PackageManager.FEATURE_TELEPHONY)
    }

    /**
     * Checks whether a wired headset is connected or not.
     * This is not a valid indication that audio playback is actually over
     * the wired headset as audio routing depends on other conditions. We
     * only use it as an early indicator (during initialization) of an attached
     * wired headset.
     */
    @Deprecated("")
    private fun hasWiredHeadset(): Boolean {
        if (Build.VERSION.SDK_INT < Build.VERSION_CODES.M) {
            return audioManager.isWiredHeadsetOn
        } else {
            val devices = audioManager.getDevices(AudioManager.GET_DEVICES_INPUTS)
            for (device: AudioDeviceInfo in devices) {
                val type = device.type
                if (type == AudioDeviceInfo.TYPE_WIRED_HEADSET) {
                    Log.d(TAG, "hasWiredHeadset: found wired headset")
                    return true
                } else if (type == AudioDeviceInfo.TYPE_USB_DEVICE) {
                    Log.d(TAG, "hasWiredHeadset: found USB audio device")
                    return true
                }
            }
            return false
        }
    }

    /**
     * Updates list of possible audio devices and make new device selection.
     */
    fun updateAudioDeviceState() {
        ThreadUtils.checkIsOnMainThread()
        Log.d(
            TAG, ("--- updateAudioDeviceState: "
                    + "wired headset=" + hasWiredHeadset)
        )
        Log.d(
            TAG, ("Device status: "
                    + "available=" + audioDevices + ", "
                    + "selected=" + selectedAudioDevice + ", "
                    + "user selected=" + userSelectedAudioDevice)
        )


        // Update the set of available audio devices.
        val newAudioDevices: MutableSet<AudioDevice?> = HashSet()

        if (hasWiredHeadset) {
            // If a wired headset is connected, then it is the only possible option.
            newAudioDevices.add(AudioDevice.WIRED_HEADSET)
        } else {
            // No wired headset, hence the audio-device list can contain speaker
            // phone (on a tablet), or speaker phone and earpiece (on mobile phone).
            newAudioDevices.add(AudioDevice.SPEAKER_PHONE)
            if (hasEarpiece()) {
                newAudioDevices.add(AudioDevice.EARPIECE)
            }
        }
        // Store state which is set to true if the device list has changed.
        var audioDeviceSetUpdated = audioDevices != newAudioDevices
        // Update the existing audio device set.
        audioDevices = newAudioDevices
        // Correct user selected audio devices if needed.
        if (hasWiredHeadset && userSelectedAudioDevice == AudioDevice.SPEAKER_PHONE) {
            // If user selected speaker phone, but then plugged wired headset then make
            // wired headset as user selected device.
            userSelectedAudioDevice = AudioDevice.WIRED_HEADSET
        }
        if (!hasWiredHeadset && userSelectedAudioDevice == AudioDevice.WIRED_HEADSET) {
            // If user selected wired headset, but then unplugged wired headset then make
            // speaker phone as user selected device.
            userSelectedAudioDevice = AudioDevice.SPEAKER_PHONE
        }


        // Update selected audio device.
        val newAudioDevice: AudioDevice?
        if (hasWiredHeadset) {
            // If a wired headset is connected, but Bluetooth is not, then wired headset is used as
            // audio device.
            newAudioDevice = AudioDevice.WIRED_HEADSET
        } else {
            // No wired headset and no Bluetooth, hence the audio-device list can contain speaker
            // phone (on a tablet), or speaker phone and earpiece (on mobile phone).
            // |defaultAudioDevice| contains either AudioDevice.SPEAKER_PHONE or AudioDevice.EARPIECE
            // depending on the user's selection.
            newAudioDevice = defaultAudioDevice
        }
        // Switch to new device but only if there has been any changes.
        if (newAudioDevice != selectedAudioDevice || audioDeviceSetUpdated) {
            // Do the required device switch.
            setAudioDeviceInternal(newAudioDevice)
            Log.d(
                TAG, ("New device status: "
                        + "available=" + audioDevices + ", "
                        + "selected=" + newAudioDevice)
            )
            if (audioManagerEvents != null) {
                // Notify a listening client that audio device has been changed.
                audioManagerEvents!!.onAudioDeviceChanged(selectedAudioDevice, audioDevices)
            }
        }
        Log.d(TAG, "--- updateAudioDeviceState done")
    }

    companion object {
        private val TAG = "AppRTCAudioManager"
        private val SPEAKERPHONE_AUTO = "auto"
        private val SPEAKERPHONE_TRUE = "true"
        private val SPEAKERPHONE_FALSE = "false"

        /** Construction.  */
        fun create(context: Context): RtcAudioManager {
            return RtcAudioManager(context)
        }
    }

    init {
        Log.d(TAG, "ctor")
        ThreadUtils.checkIsOnMainThread()
        apprtcContext = context
        audioManager = context.getSystemService(android.content.Context.AUDIO_SERVICE) as AudioManager
        wiredHeadsetReceiver = WiredHeadsetReceiver()
        amState = AudioManagerState.UNINITIALIZED
        val sharedPreferences = PreferenceManager.getDefaultSharedPreferences(context)
        useSpeakerphone = sharedPreferences.getString(
            "pref_speakerphone_key",
            "auto"
        )
        Log.d(TAG, "useSpeakerphone: $useSpeakerphone")
        if ((useSpeakerphone == SPEAKERPHONE_FALSE)) {
            defaultAudioDevice = AudioDevice.EARPIECE
        } else {
            defaultAudioDevice = AudioDevice.SPEAKER_PHONE
        }
        Log.d(TAG, "defaultAudioDevice: $defaultAudioDevice")
    }
}

9. 现在,让我们完成有关如何将其添加到 Activity 或片段的最后步骤

这就是我将其集成到 Android 应用程序的主要片段中的方式。您的实现可能会有所不同,但可以给您一个参考。

@AndroidEntryPoint
class MainFragment() : Fragment(),VideoCallClickListner,NewMessageInterface {

    private var _binding: FragmentMainBinding? = null
    private val binding get() = _binding!!

    private val viewModel by viewModels<AuthViewModel>()
    private val appointmentViewModel by viewModels<AppointmentViewModel>()
    lateinit var upcomingAppointmentsAdapter: UpcomingAppointmentsAdapter

    @Inject
    lateinit var firebaseAuth: FirebaseAuth

    lateinit var uid:String
    lateinit var targetUID:String

    private var rtcClient: RTCClient?=null
    private val gson = Gson()
    private var isMute = false
    private var isCameraPause = false
    private val rtcAudioManager by lazy { RtcAudioManager.create(requireContext()) }
    private var isSpeakerMode = true

    lateinit var webSocketManager: WebSocketManager

    override fun onCreateView(
        inflater: LayoutInflater,
        container: ViewGroup?,
        savedInstanceState: Bundle?
    ): View? {
        val rootView = inflater.inflate(R.layout.fragment_main, container, false)

        uid = firebaseAuth.uid.toString()
        return rootView
    }

    override fun onViewCreated(view: View, savedInstanceState: Bundle?) {
        super.onViewCreated(view, savedInstanceState)
        _binding = FragmentMainBinding.bind(view)

        //Ignore this

    /*    binding.btnConsult.setOnClickListener {
            findNavController().navigate(R.id.action_mainFragment_to_consultDoctor)
        } 
    */

        setupUpcomingAppointmentsRecylerView()

        init()

        getPermissionsForVideoCall()
    }
    // "Here, I'm setting up the RecyclerView for upcoming appointments. You can ignore this
    private fun setupUpcomingAppointmentsRecylerView() {
        upcomingAppointmentsAdapter = UpcomingAppointmentsAdapter(this,this)
        binding.rvUpcomingAppointments.apply {
            adapter = upcomingAppointmentsAdapter
            layoutManager =
                LinearLayoutManager(requireContext(), LinearLayoutManager.HORIZONTAL, false)
        }
        fetchUpcomingAppointments()
    }
    
    // In this function, I'm fetching the upcoming appointments. Feel free to 
   //  skip this part if it doesn't apply to your scenario.
    private fun fetchUpcomingAppointments() {
        lifecycleScope.launch {
            appointmentViewModel.upcomingAppointments.collect {
                when (it) {
                    is NetworkResult.Error -> {
                        withContext(Dispatchers.Main) {
                            Toast.makeText(
                                requireContext(),
                                "Problem in fetching Upcoming Appointments",
                                Toast.LENGTH_LONG
                            ).show()
                            binding.pbAppointment.visibility = View.GONE
                            Log.d(FETCHAPPOINTMENTS, "Error:- "+it.message.toString())
                        }
                    }

                    is NetworkResult.Loading -> {
                        withContext(Dispatchers.Main){
                            binding.pbAppointment.visibility = View.VISIBLE
                        }
                        Log.d(FETCHAPPOINTMENTS, "Loading:- "+it.message.toString())
                    }

                    is NetworkResult.Success -> withContext(Dispatchers.Main) {
                        binding.pbAppointment.visibility = View.GONE
                        upcomingAppointmentsAdapter.setData(it.data?.toList()!!)
                        Log.d(FETCHAPPOINTMENTS, "Success")
                    }

                    else -> {}
                }
            }
        }
    }
  
    private fun init(){    
        /* 
        Initializes an instance of WebSocketManager and passes this 
        (referring to the current fragment) as a callback.
        */
        webSocketManager = WebSocketManager(this) 
     
        /* 
        Calls initSocket on WebSocketManager with the user's ID (uid)
        if it is not null.
        */
        uid?.let { webSocketManager?.initSocket(it) }
        
        /*
        Initializes an instance of RTCClient with the application context, 
        user ID (uid), WebSocketManager, and a PeerConnectionObserver.
  
        Overrides onIceCandidate and onAddStream methods of the 
        PeerConnectionObserver.

        In onIceCandidate, sends ICE candidate information to the other 
        party via WebSocket.
        */
        rtcClient = RTCClient(activity?.application!!,uid!!,webSocketManager!!, object : PeerConnectionObserver() {
            override fun onIceCandidate(p0: IceCandidate?) {
                super.onIceCandidate(p0)
                rtcClient?.addIceCandidate(p0)
                val candidate = hashMapOf(
                    "sdpMid" to p0?.sdpMid,
                    "sdpMLineIndex" to p0?.sdpMLineIndex,
                    "sdpCandidate" to p0?.sdp
                )

                webSocketManager?.sendMessageToSocket(
                    MessageModel(TYPE.ICE_CANDIDATE,uid,targetUID,candidate)
                )
            }

            override fun onAddStream(p0: MediaStream?) {
                super.onAddStream(p0)
                p0?.videoTracks?.get(0)?.addSink(binding?.remoteView)
                Log.d(Constants.VIDEOCALLINGWEBRTC, "onAddStream: $p0")
            }
        })

        // Sets the default audio device to the speaker phone using 
        // rtcAudioManager.setDefaultAudioDevice.
        rtcAudioManager.setDefaultAudioDevice(RtcAudioManager.AudioDevice.SPEAKER_PHONE)

        // Switching camera
        binding?.switchCameraButton?.setOnClickListener {
            rtcClient?.switchCamera()
        }

        // Mic handling
        binding?.micButton?.setOnClickListener {
            if (isMute){
                isMute = false
                binding!!.micButton.setImageResource(R.drawable.ic_baseline_mic_off_24)
            }else{
                isMute = true
                binding!!.micButton.setImageResource(R.drawable.ic_baseline_mic_24)
            }
            rtcClient?.toggleAudio(isMute)
        }
        
        // Video button handling
        binding?.videoButton?.setOnClickListener {
            if (isCameraPause){
                isCameraPause = false
                binding!!.videoButton.setImageResource(R.drawable.ic_baseline_videocam_off_24)
            }else{
                isCameraPause = true
                binding!!.videoButton.setImageResource(R.drawable.ic_baseline_videocam_24)
            }
            rtcClient?.toggleCamera(isCameraPause)
        }
        
        // Speaker button handling
        binding?.audioOutputButton?.setOnClickListener {
            if (isSpeakerMode){
                isSpeakerMode = false
                binding!!.audioOutputButton.setImageResource(R.drawable.ic_baseline_hearing_24)
                rtcAudioManager.setDefaultAudioDevice(RtcAudioManager.AudioDevice.EARPIECE)
            }else{
                isSpeakerMode = true
                binding!!.audioOutputButton.setImageResource(R.drawable.ic_baseline_speaker_up_24)
                rtcAudioManager.setDefaultAudioDevice(RtcAudioManager.AudioDevice.SPEAKER_PHONE)

            }

        }
        
        // End call button handling
        binding?.endCallButton?.setOnClickListener {
            binding?.callLayout?.visibility = View.GONE
            binding?.incomingCallLayout?.visibility = View.GONE
            rtcClient?.endCall()
            val message = MessageModel(TYPE.CALL_ENDED, uid, targetUID, null)
            webSocketManager?.sendMessageToSocket(message)
        }

    }
    
    // Here we are requesting the necessary permissions.
    private fun getPermissionsForVideoCall() {
        PermissionX.init(this)
            .permissions(
                Manifest.permission.RECORD_AUDIO,
                Manifest.permission.CAMERA
            ).request{ allGranted, _ ,_ ->
                if (allGranted){


                } else {
                    Toast.makeText(requireContext(),"you should accept all permissions", Toast.LENGTH_LONG).show()
                }
            }
    }
    
    /*
    Requests permissions for audio and camera before initiating a video call.

    If permissions are granted, sends a WebSocket message of type START_CALL 
    to the target user (doctorUid).
    */
    override fun onclick(doctorUid: String) {
        PermissionX.init(this)
            .permissions(
                Manifest.permission.RECORD_AUDIO,
                Manifest.permission.CAMERA
            ).request{ allGranted, _ ,_ ->
                if (allGranted){
                    targetUID = doctorUid
                    webSocketManager?.sendMessageToSocket(MessageModel(
                        TYPE.START_CALL,uid,targetUID,null
                    ))
                } else {
                    Toast.makeText(requireContext(),"you should accept all permissions", Toast.LENGTH_LONG).show()
                }
            }
    }

    /*
     Handles different types of messages received via WebSocket, such as call 
     responses (CALL_RESPONSE), answers to calls (ANSWER_RECEIVED), and 
     offers for calls (OFFER_RECEIVED).
    */
    override fun onNewMessage(message: MessageModel) {
        when(message.type){
            TYPE.CALL_RESPONSE->{
                if (message.data == "user is not online"){
                    //user is not reachable
                    lifecycleScope.launch {
                        withContext(Dispatchers.Main) {
                            Toast.makeText(requireContext(),"user is not reachable", Toast.LENGTH_LONG).show()

                        }
                    }
                }else{
                    //we are ready for call, we started a call
                    lifecycleScope.launch {
                        withContext(Dispatchers.Main){
                                binding?.callLayout?.visibility = View.VISIBLE
                                binding?.apply {
                                    rtcClient?.initializeSurfaceView(binding.localView)
                                    rtcClient?.initializeSurfaceView(binding.remoteView)
                                    rtcClient?.startLocalVideo(binding.localView)
                                    rtcClient?.call(targetUID)
                                }

                        }
                    }
                }
            }
            TYPE.ANSWER_RECIEVED ->{

                val session = SessionDescription(
                    SessionDescription.Type.ANSWER,
                    message.data.toString()
                )
                rtcClient?.onRemoteSessionReceived(session)
                lifecycleScope.launch {
                    withContext(Dispatchers.Main){
                        binding?.remoteViewLoading?.visibility = View.GONE
                    }
                }
            }

            // Here we are handling the incoming call
            TYPE.OFFER_RECIEVED ->{
                Log.d("OFEERWEBRTC","Recived")

                lifecycleScope.launch {
                    withContext(Dispatchers.Main){
                        binding?.incomingCallLayout?.visibility = View.VISIBLE
                        binding?.incomingNameTV?.text = "${message.name.toString()} is calling you"
                        
                        // Accept button for accepting the call
                        binding?.acceptButton?.setOnClickListener {
                            binding?.incomingCallLayout?.visibility = View.GONE
                            binding?.callLayout?.visibility = View.VISIBLE
                            binding?.apply {
                                rtcClient?.initializeSurfaceView(localView)
                                rtcClient?.initializeSurfaceView(binding.remoteView)
                                rtcClient?.startLocalVideo(localView)
                            }
                            val session = SessionDescription(
                                SessionDescription.Type.OFFER,
                                message.data.toString()
                            )
                            Log.d("OFEERWEBRTC","UID:- ${message.name}")
                            rtcClient?.onRemoteSessionReceived(session)
                            rtcClient?.answer(message.name!!)
                            targetUID = message.name!!
                            binding!!.remoteViewLoading.visibility = View.GONE

                        }
                        /* 
                        If the user presses the reject button, a CALL_ENDED 
                        message is sent to the WebSocket, notifying the 
                        remote peer that the call has ended. 
                        */
                        binding?.rejectButton?.setOnClickListener {
                            binding?.incomingCallLayout?.visibility = View.GONE
                            val message = MessageModel(TYPE.CALL_ENDED, uid, targetUID, null)
                            webSocketManager?.sendMessageToSocket(message)
                        }

                    }
                }

            }

            // Parses and adds ICE candidates received via WebSocket.
            TYPE.ICE_CANDIDATE->{
                try {
                    val receivingCandidate = gson.fromJson(gson.toJson(message.data),
                        IceCandidateModel::class.java)
                    rtcClient?.addIceCandidate(
                        IceCandidate(receivingCandidate.sdpMid,
                            Math.toIntExact(receivingCandidate.sdpMLineIndex.toLong()),receivingCandidate.sdpCandidate)
                    )
                }catch (e:Exception){
                    e.printStackTrace()
                }
            }

            /*Here we handle the response for call ended that we 
            receive from the remote peer via WebSocket.*/
            TYPE.CALL_ENDED-> {
                lifecycleScope.launch {
                    withContext(Dispatchers.Main) {
                        Toast.makeText(requireContext(), "The call has ended", Toast.LENGTH_LONG).show()
                        rtcClient?.endCall()
                        binding.callLayout.visibility = View.GONE
                        binding?.incomingCallLayout?.visibility = View.GONE
                    }
                }
            }

            else -> {}
        }
    }

    override fun onDestroy() {
        super.onDestroy()
        _binding = null
        rtcClient?.endCall() // Close any existing WebRTC connections
        rtcClient = null
    }
}

这就是将 WebRTC 集成到 Android 应用程序中所需的全部内容,现在我们已经完成了集成过程!

作者:Yashveersingh
联系地址:https://www.linkedin.cn/incareer/in/yash30401/

本文来自作者投稿,版权归原作者所有。如需转载,请注明出处:https://www.nxrte.com/jishu/webrtc/44290.html

(0)

相关推荐

发表回复

登录后才能评论