AVAudioPlayerNode scheduled buffers and audio route changes in iOS 11

I am seeing different behavior between iOS 9/10 and iOS 11 for buffers scheduled in the future on an AVAudioPlayerNode when an audio route change occurs (e.g., plugging in headphones). Has anyone experienced anything similar, and how did you resolve it? Note that I reported this issue on Apple's AVFoundation support forum two weeks ago and have received no response.
The code demonstrating the problem is below; first, a brief description: the code is a simple loop that repeatedly schedules a buffer to play at some time in the future. The process is started by calling the runSequence method, which schedules an audio buffer to play at a future time and sets the completion callback to the nested method audioCompleteHandler. The completion callback calls runSequence again, scheduling another buffer and keeping the process going forever. So there is always a buffer scheduled, except while the completion handler is executing. The trace method seen in various places is an internal method for debug printing only and can be ignored.
In the audio route change notification handler (handleAudioRouteChange), when a new device becomes available (case .newDeviceAvailable), the code restarts the engine and the player, reactivates the audio session, and calls runSequence to kick the loop off again.
This all works fine on iOS 9.3.5 (iPhone 5C) and iOS 10.3.3 (iPhone 6), but fails on iOS 11.1.1 (iPad Air). The nature of the failure is that the AVAudioPlayerNode does not play the audio but instead calls the completion handler immediately, which leads to a runaway condition. If I remove the line that restarts the loop (marked in the code below), it works fine on iOS 11.1.1 but fails on iOS 9.3.5 and iOS 10.3.3. This failure is different: the audio stops, and in the debugger I can see that the loop is no longer looping.
So, a possible explanation is that under iOS 9.x and iOS 10.x, buffers scheduled in the future are unscheduled when an audio route change occurs, whereas under iOS 11.x they are not.
This leads to two questions: 1. Has anyone come across behavior similar to this, and what was the resolution? 2. Can anyone point me to documentation that describes the exact state of the engine, player, and session when an audio route change (or an audio interruption) occurs?
private func runSequence() {

    // For test only
    var timeBaseInfo = mach_timebase_info_data_t()
    mach_timebase_info(&timeBaseInfo)
    // End for test only

    let audioCompleteHandler = { [unowned self] in
        DispatchQueue.main.async {
            trace(level: .skim, items: "Player: \(self.player1.isPlaying), Engine: \(self.engine.isRunning)")
            self.player1.stop()
            switch self.runStatus {
            case .Run:
                self.runSequence()
            case .Restart:
                self.runStatus = .Run
                self.tickSeq.resetSequence()
                //self.updateRenderHostTime()
                self.runSequence()
            case .Halt:
                self.stopEngine()
                self.player1.stop()
                self.activateAudioSession(activate: false)
            }
        }
    }

    // Schedule buffer...
    if self.engine.isRunning {
        if let thisElem: (buffer: AVAudioPCMBuffer, duration: Int) = tickSeq.next() {
            self.player1.scheduleBuffer(thisElem.buffer, at: nil, options: [], completionHandler: audioCompleteHandler)
            self.player1.prepare(withFrameCount: thisElem.buffer.frameLength)
            self.player1.play(at: AVAudioTime(hostTime: self.startHostTime))
            self.startHostTime += AVAudioTime.hostTime(forSeconds: TimeInterval(Double(60.0 / Double(self.model.bpm.value)) * Double(thisElem.duration)))
            trace(level: .skim, items:
                "Samples: \(thisElem.buffer.frameLength)",
                "Time: \(mach_absolute_time() * (UInt64(timeBaseInfo.numer) / UInt64(timeBaseInfo.denom))) ",
                "Sample Time: \(player1.lastRenderTime!.hostTime)",
                "Play At: \(self.startHostTime) ",
                "Player: \(self.player1.isPlaying)",
                "Engine: \(self.engine.isRunning)")
        }
        else {
            // Sequence exhausted; nothing left to schedule
        }
    }
}
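
The registration of the route-change observer is not shown in the question; a minimal sketch of how handleAudioRouteChange would typically be wired up (assuming it is registered somewhere in the same class, e.g. in viewDidLoad) is:

NotificationCenter.default.addObserver(self,
                                       selector: #selector(handleAudioRouteChange(_:)),
                                       name: .AVAudioSessionRouteChange,
                                       object: nil)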


@objc func handleAudioRouteChange(_ notification: Notification) {

    trace(level: .skim, items: "Route change: Player: \(self.player1.isPlaying) Engine: \(self.engine.isRunning)")
    guard let userInfo = notification.userInfo,
        let reasonValue = userInfo[AVAudioSessionRouteChangeReasonKey] as? UInt,
        let reason = AVAudioSessionRouteChangeReason(rawValue:reasonValue) else { return }

    trace(level: .skim, items: audioSession.currentRoute, audioSession.mode)
    trace(level: .none, items: "Reason Value: \(String(describing: userInfo[AVAudioSessionRouteChangeReasonKey] as? UInt)); Reason: \(String(describing: AVAudioSessionRouteChangeReason(rawValue:reasonValue)))")

    switch reason {
    case .newDeviceAvailable:
        trace(level: .skim, items: "In handleAudioRouteChange.newDeviceAvailable")
        for output in audioSession.currentRoute.outputs where output.portType == AVAudioSessionPortHeadphones {
            startEngine()
            player1.play()
            activateAudioSession(activate: true)
            //updateRenderHostTime()
            runSequence() // <<--- Problem: works for iOS9,10; fails on iOS11. Remove it and iOS9,10 fail, works on iOS11
        }
    case .oldDeviceUnavailable:
        trace(level: .skim, items: "In handleAudioRouteChange.oldDeviceUnavailable")
        if let previousRoute =
            userInfo[AVAudioSessionRouteChangePreviousRouteKey] as? AVAudioSessionRouteDescription {
            for output in previousRoute.outputs where output.portType == AVAudioSessionPortHeadphones {
                player1.stop()
                stopEngine()
                tickSeq.resetSequence()
                DispatchQueue.main.async {
                    if let pp = self.playPause as UIButton? { pp.isSelected = false }
                }
            }
        }
    default:
        break   // Other route-change reasons are ignored here
    }
}
1 Answer


So, the problem was resolved through further digging and testing:

  • iOS 9/10 and iOS 11 behave differently when AVAudioSession posts its route change notification. In the notification handler, on iOS 9/10 the engine is not running (engine.isRunning == false) roughly 90% of the time, whereas on iOS 11 the engine is always reported as running (engine.isRunning == true).
  • For the roughly 10% of the time on iOS 9/10 where engine.isRunning == true, it actually isn't. The engine is not running regardless of what engine.isRunning reports.
  • Because the engine has stopped, the previously prepared audio has been released on iOS 9/10, and audio will only start if the file or buffer is rescheduled from the sample position at which the engine stopped. Unfortunately, you cannot find the current sample time while the engine is stopped (the player returns nil), so you need to do the following (a sketch of the sample-time query appears after this list):

    • Start the engine
    • Get the sample time and accumulate it (+=) into a persistent property
    • Stop the player
    • Reschedule the audio from the sample time just obtained (and prepare it)
    • Start the player
  • The engine state on iOS 9/10 is the same for the headphone-insertion case (.newDeviceAvailable) and the removal case (.oldDeviceUnavailable), so you need to do something similar to the above for the removal case as well (accumulating the sample time so the audio can be restarted from where it stopped, because player.stop() resets the player's sample time to 0).

  • None of this is needed on iOS 11, but the code below works on iOS 9/10 and 11 alike, so it is probably best to handle all versions the same way.
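
A minimal sketch of the sample-time query these steps rely on (the helper name is illustrative, not from the original answer):

// Returns the player's current sample position, or nil when it cannot be
// determined; while the engine is stopped, both calls below return nil,
// which is why the engine must be started before querying.
func currentSampleTime(of player: AVAudioPlayerNode) -> AVAudioFramePosition? {
    guard let nodeTime = player.lastRenderTime,
        let playerTime = player.playerTime(forNodeTime: nodeTime) else { return nil }
    return playerTime.sampleTime
}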

The following code works on my test devices under iOS 9.3.5 (iPhone 5C), iOS 10.3.3 (iPhone 6), and iOS 11.1.1 (iPad Air) (but I am still bothered that I cannot find any prior commentary on handling route changes correctly; surely thousands of people must have hit this...?? Normally, when I can't find prior commentary on a topic, I assume I'm doing something wrong or don't understand... oh well...):

@objc func handleAudioRouteChange(_ notification: Notification) {

    guard let userInfo = notification.userInfo,
        let reasonValue = userInfo[AVAudioSessionRouteChangeReasonKey] as? UInt,
        let reason = AVAudioSessionRouteChangeReason(rawValue:reasonValue) else { return }

    switch reason {
    case .newDeviceAvailable:

        for output in audioSession.currentRoute.outputs where output.portType == AVAudioSessionPortHeadphones {
            headphonesConnected = true
        }

        startEngine()   // Do this regardless of whether engine.isRunning == true

        if let lrt = player.lastRenderTime, let st = player.playerTime(forNodeTime: lrt)?.sampleTime {
            playSampleOffset += st  // Accumulate so that multiple inserts/removals move the play point forward
            stopPlayer()
            scheduleSegment(file: playFile, at: nil, player: player, start: playSampleOffset, length: AVAudioFrameCount(playFile.length - playSampleOffset))
            startPlayer()
        }
        else {
            // Unknown problem with getting sampleTime; reset engine, player(s), restart as appropriate
        }

    case .oldDeviceUnavailable:
        if let previousRoute =
            userInfo[AVAudioSessionRouteChangePreviousRouteKey] as? AVAudioSessionRouteDescription {
            for output in previousRoute.outputs where output.portType == AVAudioSessionPortHeadphones {
                headphonesConnected = false
            }
        }

        startEngine()   // Do this regardless of whether engine.isRunning == true

        if let lrt = player.lastRenderTime, let st = player.playerTime(forNodeTime: lrt)?.sampleTime  {
            playSampleOffset += st  // Accumulate...
            stopPlayer()
            scheduleSegment(file: playFile, at: nil, player: player, start: playSampleOffset, length: AVAudioFrameCount(playFile.length - playSampleOffset))
            startPlayer()   // Test only, in reality don't restart here; set play control to allow user to start audio
        }
        else {
            // Unknown problem with getting sampleTime; reset engine, player(s), restart as appropriate
        }

...
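
The helpers called above (scheduleSegment, startEngine, stopPlayer, startPlayer) are not shown in the answer; a plausible minimal sketch, assuming engine and player properties on the same class, might be:

// Hypothetical wrapper matching the call sites above; forwards to
// AVAudioPlayerNode.scheduleSegment(_:startingFrame:frameCount:at:completionHandler:)
func scheduleSegment(file: AVAudioFile, at when: AVAudioTime?, player: AVAudioPlayerNode,
                     start: AVAudioFramePosition, length: AVAudioFrameCount) {
    player.scheduleSegment(file, startingFrame: start, frameCount: length,
                           at: when, completionHandler: nil)
}

// Hypothetical engine/player helpers; error handling reduced to a print for brevity
func startEngine() {
    // Per the notes above, call start() even if engine.isRunning reports true,
    // since that flag can be wrong after a route change on iOS 9/10
    do { try engine.start() } catch { print("Engine start failed: \(error)") }
}

func stopPlayer() { player.stop() }    // stop() resets the player's sample time to 0
func startPlayer() { player.play() }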

