iOS microphone not working / voice not sent over WebRTC when answering a call from the lock screen


I'm using WebRTC with CallKit for calls. Everything works while the app is in the foreground, but if the screen is locked and I answer the call, audio works only on my side (I can hear the other party, but my voice isn't transmitted).

Everything starts working as soon as the user brings the app to the foreground.

All background settings and capabilities are configured correctly:

<key>UIBackgroundModes</key>
<array>
    <string>audio</string>
    <string>fetch</string>
    <string>remote-notification</string>
    <string>voip</string>
</array>

I tried configuring audio with both RTCAudioSession and AVAudioSession, but it behaves the same either way.

Solution: I used to add a media stream to the RTCPeerConnection; now I add RTCMediaStreamTracks instead.
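For illustration, a minimal sketch of that change, assuming a peer connection pc and an RTCPeerConnectionFactory named factory (the names are illustrative, not from the original post):

import WebRTC

// Hedged sketch: attach the local audio track directly instead of
// wrapping it in an RTCMediaStream.
func attachLocalAudio(to pc: RTCPeerConnection, factory: RTCPeerConnectionFactory) {
    let constraints = RTCMediaConstraints(mandatoryConstraints: nil, optionalConstraints: nil)
    let audioSource = factory.audioSource(with: constraints)
    let audioTrack = factory.audioTrack(with: audioSource, trackId: "audio0")

    // Old approach: wrap the track in an RTCMediaStream and add the stream.
    // let stream = factory.mediaStream(withStreamId: "stream")
    // stream.addAudioTrack(audioTrack)
    // pc.add(stream)

    // New approach: add the track itself; WebRTC associates it with the stream id.
    pc.add(audioTrack, streamIds: ["stream"])
}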


I'm wondering about the same issue: audio not working with WebRTC on iOS when the call is answered after the screen is locked. - LS_
@Signo That question isn't quite the same, because there the audio fails on both sides. I still tried the solution given in that answer, but it didn't fix it. - Elene Akhvlediani
@EleneAkhvlediani Did you ever find a solution? - codeGeek
@codeGeek I updated WebRTC to the latest version, and where I used to put a media stream into the RTCPeerConnection, I now add RTCMediaStreamTracks instead (for both audio and video). - Elene Akhvlediani
@EleneAkhvlediani I tried doing what you said, but there's still no sound on the iPhone. - codeGeek
Same problem with WebRTC and CallKit: when the app goes to the background, the microphone stops working and only the speaker works. - Rakshitha Muranga Rodrigo
1 Answer

Note that I'm sharing code written for my own requirements; it's for reference only, and you will need to adapt it to your needs.
When you receive a VoIP notification, create a new instance of your WebRTC handling class and add the two lines below to that code path, because enabling the audio session directly from a VoIP notification fails:
RTCAudioSession.sharedInstance().useManualAudio = true
RTCAudioSession.sharedInstance().isAudioEnabled = false 

The didReceive method:

func pushRegistry(_ registry: PKPushRegistry, didReceiveIncomingPushWith payload: PKPushPayload, for type: PKPushType, completion: @escaping () -> Void) {
    let state = UIApplication.shared.applicationState

    if payload.dictionaryPayload["hangup"] == nil && state != .active {

        Globals.voipPayload = payload.dictionaryPayload as! [String: Any] // I pass parameters to the WebRTC handler via a Globals singleton, to create the answer according to the SDP sent in the payload.

        RTCAudioSession.sharedInstance().useManualAudio = true
        RTCAudioSession.sharedInstance().isAudioEnabled = false

        Globals.sipGateway = SipGateway() // my WebRTC and Janus gateway handler class

        Globals.sipGateway?.configureCredentials(true) // I check the Janus gateway credentials stored in shared preferences, initiate the websocket connection, and create a peer connection to my Janus gateway, which is the signaling server in my environment.

        initProvider() // Creating the CallKit provider

        self.update.remoteHandle = CXHandle(type: .generic, value: String(describing: payload.dictionaryPayload["caller_id"]!))
        Globals.callId = UUID()

        Globals.provider.reportNewIncomingCall(with: Globals.callId, update: self.update, completion: { error in

        })
    }
}
    
        
func initProvider() {
    let config = CXProviderConfiguration(localizedName: "ulakBEL")
    config.iconTemplateImageData = UIImage(named: "ulakbel")!.pngData()
    config.ringtoneSound = "ringtone.caf"
    // config.includesCallsInRecents = false
    config.supportsVideo = false

    Globals.provider = CXProvider(configuration: config)
    Globals.provider.setDelegate(self, queue: nil)
    update = CXCallUpdate()
    update.hasVideo = false
    update.supportsDTMF = true
}
    

Modify your didActivate and didDeactivate delegate functions as follows:

func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
    print("CallManager didActivate")
    RTCAudioSession.sharedInstance().audioSessionDidActivate(audioSession)
    RTCAudioSession.sharedInstance().isAudioEnabled = true
    // self.callDelegate?.callIsAnswered()
}

func provider(_ provider: CXProvider, didDeactivate audioSession: AVAudioSession) {
    print("CallManager didDeactivate")
    RTCAudioSession.sharedInstance().audioSessionDidDeactivate(audioSession)
    RTCAudioSession.sharedInstance().isAudioEnabled = false
}
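For context, the answer action itself typically just hands the call off to the WebRTC handler and fulfills; audio stays disabled until didActivate above hands the session over. A minimal sketch, where answerIncomingCall() is a hypothetical helper on the gateway class (not part of the original answer):

func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
    // answerIncomingCall() is hypothetical: it would create the WebRTC answer
    // from the stored payload. isAudioEnabled stays false until didActivate.
    Globals.sipGateway?.answerIncomingCall()
    action.fulfill()
}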

Configure your media senders and audio session in the WebRTC handler class:
private func createPeerConnection(webRTCCallbacks: PluginHandleWebRTCCallbacksDelegate) {
    let rtcConfig = RTCConfiguration.init()
    rtcConfig.iceServers = server.iceServers
    rtcConfig.bundlePolicy = RTCBundlePolicy.maxBundle
    rtcConfig.rtcpMuxPolicy = RTCRtcpMuxPolicy.require
    rtcConfig.continualGatheringPolicy = .gatherContinually
    rtcConfig.sdpSemantics = .planB

    let constraints = RTCMediaConstraints(mandatoryConstraints: nil,
                                          optionalConstraints: ["DtlsSrtpKeyAgreement": kRTCMediaConstraintsValueTrue])

    pc = sessionFactory.peerConnection(with: rtcConfig, constraints: constraints, delegate: nil)
    self.createMediaSenders()
    self.configureAudioSession()

    if webRTCCallbacks.getJsep() != nil {
        handleRemoteJsep(webrtcCallbacks: webRTCCallbacks)
    }
}

The media senders:

private func createMediaSenders() {
    let streamId = "stream"

    // Audio
    let audioTrack = self.createAudioTrack()
    self.pc.add(audioTrack, streamIds: [streamId])

    // Video
    /*  let videoTrack = self.createVideoTrack()
    self.localVideoTrack = videoTrack
    self.peerConnection.add(videoTrack, streamIds: [streamId])
    self.remoteVideoTrack = self.peerConnection.transceivers.first { $0.mediaType == .video }?.receiver.track as? RTCVideoTrack

    // Data
    if let dataChannel = createDataChannel() {
        dataChannel.delegate = self
        self.localDataChannel = dataChannel
    }*/
}

private func createAudioTrack() -> RTCAudioTrack {
    let audioConstrains = RTCMediaConstraints(mandatoryConstraints: nil, optionalConstraints: nil)
    let audioSource = sessionFactory.audioSource(with: audioConstrains)
    let audioTrack = sessionFactory.audioTrack(with: audioSource, trackId: "audio0")
    return audioTrack
}

The audio session:

private func configureAudioSession() {
    self.rtcAudioSession.lockForConfiguration()
    do {
        try self.rtcAudioSession.setCategory(AVAudioSession.Category.playAndRecord.rawValue)
        try self.rtcAudioSession.setMode(AVAudioSession.Mode.voiceChat.rawValue)
    } catch let error {
        debugPrint("Error changing AVAudioSession category: \(error)")
    }
    self.rtcAudioSession.unlockForConfiguration()
}

Note that because I use callbacks and delegates, the code includes delegate and callback blocks. You can ignore them as appropriate!
Reference: You can also check out the sample at this link.
