Continuous speech recognition using SFSpeechRecognizer (iOS 10 beta)


I'm trying to do continuous speech recognition on iOS 10 beta using AVCapture. I've set up captureOutput(...) to continuously receive CMSampleBuffers, and I feed these buffers directly into an SFSpeechAudioBufferRecognitionRequest that I set up earlier:

... do some setup
  SFSpeechRecognizer.requestAuthorization { authStatus in
    if authStatus == SFSpeechRecognizerAuthorizationStatus.authorized {
      self.m_recognizer = SFSpeechRecognizer()
      self.m_recognRequest = SFSpeechAudioBufferRecognitionRequest()
      self.m_recognRequest?.shouldReportPartialResults = false
      self.m_isRecording = true
    } else {
      print("not authorized")
    }
  }
.... do further setup


func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {

  if (!m_AV_initialized) {
    print("captureOutput(...): not initialized !")
    return
  }
  if (!m_isRecording) {
    return
  }

  let formatDesc = CMSampleBufferGetFormatDescription(sampleBuffer)
  let mediaType = CMFormatDescriptionGetMediaType(formatDesc!)
  if (mediaType == kCMMediaType_Audio) {
    // process audio here
    m_recognRequest?.appendAudioSampleBuffer(sampleBuffer)
  }
  return
}

The whole thing runs for a few seconds, and then captureOutput stops being called. If I comment out the appendAudioSampleBuffer(sampleBuffer) line, captureOutput keeps getting called for as long as the app runs (as expected). Apparently feeding the sample buffers into the speech recognition engine somehow blocks further execution. My guess is that the available buffers are consumed after a while and the process stops because it can't get any more buffers?

I should mention that everything recorded during the first two seconds leads to correct recognition. I just don't know exactly how the SFSpeech API works, since Apple hasn't put any text into the beta docs. By the way: how do you use SFSpeechAudioBufferRecognitionRequest.endAudio()?
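
For reference, endAudio() tells the request that no more audio will be appended, which lets the recognizer finalize its result. A minimal, hedged sketch of a stop routine using the question's own properties (the exact teardown may differ):

func stopRecognition() {
  m_isRecording = false          // captureOutput(...) stops appending buffers
  m_recognRequest?.endAudio()    // signal that no further audio will arrive
  // the recognition task (if you kept a reference to one) then delivers its final result
  m_recognRequest = nil
}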

Does anyone understand what's going on here?

Thanks, Chris


Have a look at Apple's sample code at https://developer.apple.com/library/prerelease/content/samplecode/SpeakToMe/Introduction/Intro.html, it seems to do continuous live recognition. - David Williames
@DavidWilliames, that sample code uses AVAudioEngine rather than AVFoundation - SushiGrass Jacob
@chris Are you using the delegate methods or the callbacks? - SushiGrass Jacob
I implemented this project in Objective-C: https://github.com/yao23/iOS_Playground/tree/master/SpeechRecognitionPractice - Yao Li
@DavidWilliames That sample code recognizes speech coming from the microphone. Can you make it work with speech from an audio file? - undefined
6 Answers


I converted the SpeakToMe sample Swift code from the Speech Recognition WWDC developer talk to Objective-C, and it works for me. For Swift, see https://developer.apple.com/videos/play/wwdc2016/509/, or see below for Objective-C.

- (void) viewDidAppear:(BOOL)animated {

_recognizer = [[SFSpeechRecognizer alloc] initWithLocale:[NSLocale localeWithLocaleIdentifier:@"en-US"]];
[_recognizer setDelegate:self];
[SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus authStatus) {
    switch (authStatus) {
        case SFSpeechRecognizerAuthorizationStatusAuthorized:
            //User gave access to speech recognition
            NSLog(@"Authorized");
            break;

        case SFSpeechRecognizerAuthorizationStatusDenied:
            //User denied access to speech recognition
            NSLog(@"SFSpeechRecognizerAuthorizationStatusDenied");
            break;

        case SFSpeechRecognizerAuthorizationStatusRestricted:
            //Speech recognition restricted on this device
            NSLog(@"SFSpeechRecognizerAuthorizationStatusRestricted");
            break;

        case SFSpeechRecognizerAuthorizationStatusNotDetermined:
            //Speech recognition not yet authorized

            break;

        default:
            NSLog(@"Default");
            break;
    }
}];

audioEngine = [[AVAudioEngine alloc] init];
_speechSynthesizer  = [[AVSpeechSynthesizer alloc] init];         
[_speechSynthesizer setDelegate:self];
}


-(void)startRecording
{
[self clearLogs:nil];

NSError * outError;

AVAudioSession *audioSession = [AVAudioSession sharedInstance];
[audioSession setCategory:AVAudioSessionCategoryRecord error:&outError];
[audioSession setMode:AVAudioSessionModeMeasurement error:&outError];
[audioSession setActive:true withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation  error:&outError];

request2 = [[SFSpeechAudioBufferRecognitionRequest alloc] init];

inputNode = [audioEngine inputNode];

if (request2 == nil) {
    NSLog(@"Unable to created a SFSpeechAudioBufferRecognitionRequest object");
}

if (inputNode == nil) {

    NSLog(@"Unable to created a inputNode object");
}

request2.shouldReportPartialResults = true;

_currentTask = [_recognizer recognitionTaskWithRequest:request2
                delegate:self];

[inputNode installTapOnBus:0 bufferSize:4096 format:[inputNode outputFormatForBus:0] block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when){
    NSLog(@"Block tap!");

    [request2 appendAudioPCMBuffer:buffer];

}];

    [audioEngine prepare];
    [audioEngine startAndReturnError:&outError];
    NSLog(@"Error %@", outError);
}

- (void)speechRecognitionTask:(SFSpeechRecognitionTask *)task didFinishRecognition:(SFSpeechRecognitionResult *)result {

NSLog(@"speechRecognitionTask:(SFSpeechRecognitionTask *)task didFinishRecognition");
NSString * translatedString = [[[result bestTranscription] formattedString] stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];

[self log:translatedString];

if ([result isFinal]) {
    [audioEngine stop];
    [inputNode removeTapOnBus:0];
    _currentTask = nil;
    request2 = nil;
}
}

This question is tagged "Swift". Why are you posting Objective-C code translated from Swift on a question about Swift?! - Eric Aya
Because someone viewing this question may have exactly the same question about Objective-C, and opening a whole separate question would be redundant. - Ruben Martinez Jr.
Nice answer, you just forgot the part that speaks the text:

// Called for all recognitions, including non-final hypotheses
- (void)speechRecognitionTask:(SFSpeechRecognitionTask *)task didHypothesizeTranscription:(SFTranscription *)transcription {
    NSString *translatedString = [transcription formattedString];
    NSLog(@"%@", translatedString);
    [self.speechSynthesizer speakUtterance:[AVSpeechUtterance speechUtteranceWithString:translatedString]];
}

- Orbitus007
Produces the error: AVAudioEngineGraph required condition is false: NULL != tap - Nilesh Parmar

I have managed to get continuous speech recognition working with SFSpeechRecognizer. The main point is to use an AVCaptureSession to capture the audio and pass it on to the SpeechRecognizer. Sorry, I'm not familiar with Swift, so below is an ObjC version of the sample code (some UI code is omitted; a few important parts are annotated):
@interface ViewController ()<AVCaptureAudioDataOutputSampleBufferDelegate,SFSpeechRecognitionTaskDelegate>
@property (nonatomic, strong) AVCaptureSession *capture;
@property (nonatomic, strong) SFSpeechAudioBufferRecognitionRequest *speechRequest;
@end

@implementation ViewController
- (void)startRecognizer
{
    [SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status) {
        if (status == SFSpeechRecognizerAuthorizationStatusAuthorized){
            NSLocale *local =[[NSLocale alloc] initWithLocaleIdentifier:@"fr_FR"];
            SFSpeechRecognizer *sf =[[SFSpeechRecognizer alloc] initWithLocale:local];
            self.speechRequest = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
            [sf recognitionTaskWithRequest:self.speechRequest delegate:self];
            // should call startCapture method in main queue or it may crash
            dispatch_async(dispatch_get_main_queue(), ^{
                [self startCapture];
            });
        }
    }];
}

- (void)endRecognizer
{
    // END capture and END voice Reco
    // or Apple will terminate this task after 30000ms.
    [self endCapture];
    [self.speechRequest endAudio];
}

- (void)startCapture
{
    NSError *error;
    self.capture = [[AVCaptureSession alloc] init];
    AVCaptureDevice *audioDev = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    if (audioDev == nil){
        NSLog(@"Couldn't create audio capture device");
        return ;
    }

    // create mic device
    AVCaptureDeviceInput *audioIn = [AVCaptureDeviceInput deviceInputWithDevice:audioDev error:&error];
    if (error != nil){
        NSLog(@"Couldn't create audio input");
        return ;
    }

    // add mic device in capture object
    if ([self.capture canAddInput:audioIn] == NO){
        NSLog(@"Couldn't add audio input");
        return ;
    }
    [self.capture addInput:audioIn];
    // export audio data
    AVCaptureAudioDataOutput *audioOutput = [[AVCaptureAudioDataOutput alloc] init];
    [audioOutput setSampleBufferDelegate:self queue:dispatch_get_main_queue()];
    if ([self.capture canAddOutput:audioOutput] == NO){
        NSLog(@"Couldn't add audio output");
        return ;
    }
    [self.capture addOutput:audioOutput];
    [audioOutput connectionWithMediaType:AVMediaTypeAudio];
    [self.capture startRunning];
}

-(void)endCapture
{
    if (self.capture != nil && [self.capture isRunning]){
        [self.capture stopRunning];
    }
}

- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    [self.speechRequest appendAudioSampleBuffer:sampleBuffer];
}
// some Recognition Delegate
@end

In my case the delegate method never gets called.. here is the code: - (void)speechRecognitionTask:(SFSpeechRecognitionTask *)task didFinishRecognition:(SFSpeechRecognitionResult *)result { NSLog(@"speechRecognitionTask:(SFSpeechRecognitionTask *)task didFinishRecognition"); NSString * translatedString = [[[result bestTranscription] formattedString] stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]]; NSLog(@"Said: %@", translatedString); } - Jagdeep
Does it work beyond the Speech framework's 1-minute limit? Otherwise, you'll have to restart the recognizer right away when that happens to get "continuous" recognizer behavior. - Josh
Requesting authorization, not calling. - Ramani Hitesh
Do you know whether continuous speech recognition can get an app rejected by Apple? - MJQZ1347
With continuous recognition, were you able to get it to work for more than 1 minute? - Aman pradhan
How do you capture an audio file on the iOS device with this code? It looks like I need an AVCaptureDevice to get the iPhone the code is running on? Is that doable? - undefined


Here is a Swift (3.0) implementation of @cube's answer:

import UIKit
import Speech
import AVFoundation


class ViewController: UIViewController  {
  @IBOutlet weak var console: UITextView!

  var capture: AVCaptureSession?
  var speechRequest: SFSpeechAudioBufferRecognitionRequest?
  override func viewDidLoad() {
    super.viewDidLoad()
  }
  override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    startRecognizer()
  }

  func startRecognizer() {
    SFSpeechRecognizer.requestAuthorization { (status) in
      switch status {
      case .authorized:
        let locale = NSLocale(localeIdentifier: "fr_FR")
        let sf = SFSpeechRecognizer(locale: locale as Locale)
        self.speechRequest = SFSpeechAudioBufferRecognitionRequest()
        sf?.recognitionTask(with: self.speechRequest!, delegate: self)
        DispatchQueue.main.async {
          self.startCapture()  // start capture on the main queue, as in @cube's answer (see also comments below)
        }
      case .denied:
        fallthrough
      case .notDetermined:
        fallthrough
      case .restricted:
        print("User Authorization Issue.")
      }
    }

  }

  func endRecognizer() {
    endCapture()
    speechRequest?.endAudio()
  }

  func startCapture() {

    capture = AVCaptureSession()

    guard let audioDev = AVCaptureDevice.defaultDevice(withMediaType: AVMediaTypeAudio) else {
      print("Could not get capture device.")
      return
    }

    guard let audioIn = try? AVCaptureDeviceInput(device: audioDev) else {
      print("Could not create input device.")
      return
    }

    guard true == capture?.canAddInput(audioIn) else {
      print("Couls not add input device")
      return
    }

    capture?.addInput(audioIn)

    let audioOut = AVCaptureAudioDataOutput()
    audioOut.setSampleBufferDelegate(self, queue: DispatchQueue.main)

    guard true == capture?.canAddOutput(audioOut) else {
      print("Could not add audio output")
      return
    }

    capture?.addOutput(audioOut)
    audioOut.connection(withMediaType: AVMediaTypeAudio)
    capture?.startRunning()


  }

  func endCapture() {

    if true == capture?.isRunning {
      capture?.stopRunning()
    }
  }
}

extension ViewController: AVCaptureAudioDataOutputSampleBufferDelegate {
  func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    speechRequest?.appendAudioSampleBuffer(sampleBuffer)
  }

}

extension ViewController: SFSpeechRecognitionTaskDelegate {

  func speechRecognitionTask(_ task: SFSpeechRecognitionTask, didFinishRecognition recognitionResult: SFSpeechRecognitionResult) {
    console.text = console.text + "\n" + recognitionResult.bestTranscription.formattedString
  }
}

Don't forget to add a value for NSSpeechRecognitionUsageDescription to your info.plist file, or the app will crash.
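
For reference, a minimal Info.plist sketch with both usage-description keys (the microphone key is the one the comment below mentions; the description strings are placeholders):

<key>NSSpeechRecognitionUsageDescription</key>
<string>Speech recognition is used to transcribe what you say.</string>
<key>NSMicrophoneUsageDescription</key>
<string>The microphone is used to capture your speech.</string>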


You need to add the microphone usage permission to the info.plist file as well. - Orbitus007
Should you call startCapture() inside DispatchQueue.main.async { }? - ielyamani
@Carpsen90 I'm not sure. It should be easy enough to try. - M. Porooshani


In fact, Apple's new native speech recognition doesn't seem to detect end-of-speech silence automatically (a bug?). In your case that's actually useful, because recognition keeps running for close to one minute (the maximum allowed by Apple's service). So if you need continuous automatic speech recognition, you have to restart recognition whenever this delegate fires:

func speechRecognitionTask(task: SFSpeechRecognitionTask, didFinishSuccessfully successfully: Bool) // whether successfully == true or not
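
A hedged sketch of what that restart could look like, reusing this answer's own properties (audioEngine, node, nativeASRRequest, nativeASRTask, startNativeRecording()); the exact cleanup your app needs may differ:

func speechRecognitionTask(task: SFSpeechRecognitionTask, didFinishSuccessfully successfully: Bool) {
    // stop the engine and remove the old tap before the next round installs a new one
    audioEngine.stop()
    node?.removeTapOnBus(0)
    // a request cannot be reused, so create a fresh one and drop the finished task
    nativeASRRequest = SFSpeechAudioBufferRecognitionRequest()
    nativeASRTask = nil
    // restart recording/recognition so the behaviour stays "continuous"
    try? startNativeRecording()
}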

Below is the recording / speech-recognition Swift code I use, and it works perfectly. Ignore the part that computes the microphone's average power if you don't need it; I use it to animate a waveform. Don't forget to set the SFSpeechRecognitionTaskDelegate and its delegate methods; if you need extra code, let me know.

func startNativeRecording() throws {
        LEVEL_LOWPASS_TRIG=0.01
        //Setup Audio Session
        node = audioEngine.inputNode!
        let recordingFormat = node!.outputFormatForBus(0)
        node!.installTapOnBus(0, bufferSize: 1024, format: recordingFormat){(buffer, _) in
            self.nativeASRRequest.appendAudioPCMBuffer(buffer)

 //Code to animate a waveform with the microphone volume, ignore if you don't need it:
            var inNumberFrames:UInt32 = buffer.frameLength;
            var samples:Float32 = buffer.floatChannelData[0][0]; //https://github.com/apple/swift-evolution/blob/master/proposals/0107-unsaferawpointer.md
            var avgValue:Float32 = 0;
            vDSP_maxmgv(buffer.floatChannelData[0], 1, &avgValue, vDSP_Length(inNumberFrames)); //Accelerate Framework
            //vDSP_maxmgv returns peak values
            //vDSP_meamgv returns mean magnitude of a vector

            let avg3:Float32=((avgValue == 0) ? (0-100) : 20.0)
            var averagePower=(self.LEVEL_LOWPASS_TRIG*avg3*log10f(avgValue)) + ((1-self.LEVEL_LOWPASS_TRIG)*self.averagePowerForChannel0) ;
            print("AVG. POWER: "+averagePower.description)
            dispatch_async(dispatch_get_main_queue(), { () -> Void in
                //print("VU: "+vu.description)
                var fAvgPwr=CGFloat(averagePower)
                print("AvgPwr: "+fAvgPwr.description)

                var waveformFriendlyValue=0.5+fAvgPwr //-0.5 is AvgPwrValue when user is silent
                if(waveformFriendlyValue<0){waveformFriendlyValue=0} //round values <0 to 0
                self.waveview.hidden=false
                self.waveview.updateWithLevel(waveformFriendlyValue)
            })
        }
        audioEngine.prepare()
        try audioEngine.start()
        isNativeASRBusy=true
        nativeASRTask = nativeSpeechRecognizer?.recognitionTaskWithRequest(nativeASRRequest, delegate: self)
        nativeSpeechRecognizer?.delegate=self
  //I use this timer to track no-speech timeouts, ignore if not needed:
        self.endOfSpeechTimeoutTimer = NSTimer.scheduledTimerWithTimeInterval(utteranceTimeoutSeconds, target: self, selector:  #selector(ViewController.stopNativeRecording), userInfo: nil, repeats: false)
    }

It's just a parameter that takes values between -1 and 1. I used a value of -0.2 to tune the microphone volume graph to my app's UI. If you don't need to draw the microphone volume, you can give it a value of zero or simply delete that part of the code. @MarkusRautopuro - Josh
avgValue is actually the maximum value, not the average; consider renaming it. - ielyamani
What is LEVEL_LOWPASS_TRIG here? - aBikis
@aBikis I'm not entirely sure, but I found it in a formula online and setting Level_Lowpass_Trig to 0.01 worked for me. - Josh
If you can get a Hello World running on your Apple Watch, you just need to copy-paste that code and call the methods. Yes, Swift and watchOS have changed, so you may have to fix 2 or 3 deprecated lines. @lya - Josh


This works perfectly in my app. You can send queries to saifurrahman3126@gmail.com. Apple doesn't allow users to transcribe continuously for more than one minute. Check https://developer.apple.com/documentation/speech/sfspeechrecognizer here.

"Plan for a one-minute limit on audio duration. Speech recognition places a relatively high burden on battery life and network usage. To minimize this burden, the framework stops speech recognition tasks that last longer than one minute. This limit is similar to the one for keyboard-related dictation." That's what Apple says in its documentation.

For now I make the request 40 seconds long; if you speak and then pause before the 40 seconds are up, the recording restarts.

@objc  func startRecording() {
    
    self.fullsTring = ""
    audioEngine.reset()
    
    if recognitionTask != nil {
        recognitionTask?.cancel()
        recognitionTask = nil
    }
    
    let audioSession = AVAudioSession.sharedInstance()
    do {
        try audioSession.setCategory(.record)
        try audioSession.setMode(.measurement)
        try audioSession.setActive(true, options: .notifyOthersOnDeactivation)
        try audioSession.setPreferredSampleRate(44100.0)
        
        if audioSession.isInputGainSettable {
            let error : NSErrorPointer = nil
            
            let success = try? audioSession.setInputGain(1.0)
            
            guard success != nil else {
                print ("audio error")
                return
            }
            if (success != nil) {
                print("\(String(describing: error))")
            }
        }
        else {
            print("Cannot set input gain")
        }
    } catch {
        print("audioSession properties weren't set because of an error.")
    }
    recognitionRequest = SFSpeechAudioBufferRecognitionRequest()
    
    let inputNode = audioEngine.inputNode
    guard let recognitionRequest = recognitionRequest else {
        fatalError("Unable to create an SFSpeechAudioBufferRecognitionRequest object")
    }
    
    recognitionRequest.shouldReportPartialResults = true
    self.timer4 = Timer.scheduledTimer(timeInterval: TimeInterval(40), target: self, selector: #selector(againStartRec), userInfo: nil, repeats: false)
    
    recognitionTask = speechRecognizer.recognitionTask(with: recognitionRequest, resultHandler: { (result, error ) in
        
        var isFinal = false  //8
        
        if result != nil {
            self.timer.invalidate()
            self.timer = Timer.scheduledTimer(timeInterval: TimeInterval(2.0), target: self, selector: #selector(self.didFinishTalk), userInfo: nil, repeats: false)
            
            let bestString = result?.bestTranscription.formattedString
            self.fullsTring = bestString!
            
            self.inputContainerView.inputTextField.text = result?.bestTranscription.formattedString
            
            isFinal = result!.isFinal
            
        }
        if error == nil{
            
        }
        if  isFinal {
            
            self.audioEngine.stop()
            inputNode.removeTap(onBus: 0)
            
            self.recognitionRequest = nil
            self.recognitionTask = nil
            isFinal = false
            
        }
        if error != nil{
            URLCache.shared.removeAllCachedResponses()
            
            self.audioEngine.stop()
            inputNode.removeTap(onBus: 0)
            
            guard let task = self.recognitionTask else {
                return
            }
            task.cancel()
            task.finish()
        }
    })
    audioEngine.reset()
    inputNode.removeTap(onBus: 0)
    
    let recordingFormat = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 1)
    inputNode.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { (buffer, when) in
        self.recognitionRequest?.append(buffer)
    }
    
    audioEngine.prepare()
    
    do {
        try audioEngine.start()
    } catch {
        print("audioEngine couldn't start because of an error.")
    }
    
    self.hasrecorded = true
}

@objc func againStartRec(){
    
    self.inputContainerView.uploadImageView.setBackgroundImage( #imageLiteral(resourceName: "microphone") , for: .normal)
    self.inputContainerView.uploadImageView.alpha = 1.0
    self.timer4.invalidate()
    timer.invalidate()
    self.timer.invalidate()
    
    if ((self.audioEngine.isRunning)){
        
        self.audioEngine.stop()
        self.recognitionRequest?.endAudio()
        self.recognitionTask?.finish()
    }
    self.timer2 = Timer.scheduledTimer(timeInterval: 2, target: self, selector: #selector(startRecording), userInfo: nil, repeats: false)
}

@objc func didFinishTalk(){
    
    if self.fullsTring != ""{
        
        self.timer4.invalidate()
        self.timer.invalidate()
        self.timer2.invalidate()
        
        if ((self.audioEngine.isRunning)){
            self.audioEngine.stop()
            guard let task = self.recognitionTask else {
                return
            }
            task.cancel()
            task.finish()
        }
    }
}
