Here is a complete example using C# and System.Speech.
The code can be divided into two main parts:
1. Configuring the SpeechRecognitionEngine object (and its required elements)
2. Handling the SpeechRecognized and SpeechHypothesized events
Step 1: Configuring the SpeechRecognitionEngine object

```
_speechRecognitionEngine = new SpeechRecognitionEngine();
_speechRecognitionEngine.SetInputToDefaultAudioDevice();
_dictationGrammar = new DictationGrammar();
_speechRecognitionEngine.LoadGrammar(_dictationGrammar);
_speechRecognitionEngine.RecognizeAsync(RecognizeMode.Multiple);
```
At this point your object is ready to start transcribing audio from the microphone. However, you need to handle a couple of events to actually get hold of the results.
Step 2: Handling the SpeechRecognitionEngine events
```
_speechRecognitionEngine.SpeechRecognized -= new EventHandler<SpeechRecognizedEventArgs>(SpeechRecognized);
_speechRecognitionEngine.SpeechHypothesized -= new EventHandler<SpeechHypothesizedEventArgs>(SpeechHypothesizing);
_speechRecognitionEngine.SpeechRecognized += new EventHandler<SpeechRecognizedEventArgs>(SpeechRecognized);
_speechRecognitionEngine.SpeechHypothesized += new EventHandler<SpeechHypothesizedEventArgs>(SpeechHypothesizing);
```
```
private void SpeechHypothesizing(object sender, SpeechHypothesizedEventArgs e)
{
    // Real-time results from the engine
    string realTimeResults = e.Result.Text;
}

private void SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
{
    // The engine's final answer
    string finalAnswer = e.Result.Text;
}
```
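Putting the two steps together, a minimal console program might look like the sketch below. It assumes a reference to the System.Speech assembly, a Windows machine, and a working default microphone; lambda handlers are used here in place of the named methods above:

```csharp
using System;
using System.Speech.Recognition; // requires a reference to the System.Speech assembly (Windows)

class TranscriptionDemo
{
    static void Main()
    {
        // Step 1: configure the engine for free-form dictation from the default microphone.
        var engine = new SpeechRecognitionEngine();
        engine.SetInputToDefaultAudioDevice();
        engine.LoadGrammar(new DictationGrammar());

        // Step 2: wire up the events before starting recognition.
        engine.SpeechHypothesized += (sender, e) =>
            Console.WriteLine("Hypothesis: " + e.Result.Text);
        engine.SpeechRecognized += (sender, e) =>
            Console.WriteLine("Recognized: " + e.Result.Text);

        // RecognizeMode.Multiple keeps the engine listening until RecognizeAsyncStop is called.
        engine.RecognizeAsync(RecognizeMode.Multiple);

        Console.WriteLine("Listening... press Enter to stop.");
        Console.ReadLine();
        engine.RecognizeAsyncStop();
    }
}
```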
That's all there is to it. If you want to use a pre-recorded .wav file instead of the microphone, use `_speechRecognitionEngine.SetInputToWaveFile(pathToTargetWavFile);` in place of `_speechRecognitionEngine.SetInputToDefaultAudioDevice();`
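For the .wav-file case, one way to drive the engine to completion is the synchronous `Recognize()` loop, which returns null when the input stream is exhausted. A sketch (the file path is a placeholder, not from the original post):

```csharp
using System;
using System.Speech.Recognition; // requires a reference to the System.Speech assembly (Windows)

class WavTranscriptionDemo
{
    static void Main()
    {
        var engine = new SpeechRecognitionEngine();
        engine.SetInputToWaveFile(@"C:\path\to\audio.wav"); // placeholder path
        engine.LoadGrammar(new DictationGrammar());

        // Recognize() blocks until the next phrase is recognized;
        // it returns null once the end of the wave file is reached.
        RecognitionResult result;
        while ((result = engine.Recognize()) != null)
        {
            Console.WriteLine(result.Text);
        }
    }
}
```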
There are a lot of different options inside these classes that are worth exploring in more detail.
http://ellismis.com/2012/03/17/converting-or-transcribing-audio-to-text-using-c-and-net-system-speech/