How to read the microphone volume with AudioWorklet


I want to continuously read the microphone volume in JavaScript. Many existing solutions on Stack Overflow (see here, here, and here) use BaseAudioContext.createScriptProcessor(), which has been deprecated since 2014.

I would like to use future-proof code in my project. Can someone share a modern, minimal example of reading the microphone volume with the new AudioWorkletNode?


Another option is to use setTimeout; that way you can still use an AnalyserNode, and you can easily start and stop the volume readings. [See it in action in this answer.](https://dev59.com/lFwX5IYBdhLWcg3w2SZq#64650826) - Minding
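For reference, the AnalyserNode approach from the comment above can be sketched like this. This is a browser-only sketch; the 100 ms polling interval (via setInterval rather than the comment's setTimeout) and the RMS helper are my own choices, not taken from the linked answer:

```javascript
// Pure helper: RMS of time-domain bytes (0..255, where 128 = silence),
// normalized to the 0..1 range.
function rmsFromBytes(bytes) {
    let sum = 0;
    for (let i = 0; i < bytes.length; i++) {
        const v = (bytes[i] - 128) / 128; // map byte to -1..1
        sum += v * v;
    }
    return Math.sqrt(sum / bytes.length);
}

// Browser-only wiring (guarded so the helper above stays standalone).
if (typeof AudioContext !== 'undefined') {
    navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
        const ctx = new AudioContext();
        const analyser = ctx.createAnalyser();
        ctx.createMediaStreamSource(stream).connect(analyser);
        const bytes = new Uint8Array(analyser.fftSize);
        const timer = setInterval(() => {
            analyser.getByteTimeDomainData(bytes);
            console.log('volume:', rmsFromBytes(bytes));
        }, 100); // call clearInterval(timer) to stop the readings
    });
}
```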
1 Answer


First, a few points you need to know:

  • The whole point of this change is to avoid latency: the work runs on its own thread, namely the audio rendering thread (the AudioWorkletGlobalScope), instead of the main thread.
  • The new implementation consists of two parts: an AudioWorkletProcessor and an AudioWorkletNode.
  • An AudioWorkletNode needs at least two things: an AudioContext object and the processor name as a string. The processor definition is loaded and registered by calling addModule() on the context's new audioWorklet object.
  • The Worklet API, including AudioWorklet, is only available in secure contexts. localhost counts as one, but this is something to be aware of.
  • We need to communicate at least the current value (in this case, the volume) from the AudioWorkletProcessor back to the AudioWorkletNode in order to act on it.
  • navigator.getUserMedia is used to access the computer's microphone.
/** Context for the AudioContext object */
let audioContext
// List of colors for the LEDs
const ledColor = [
    "#064dac",
    "#064dac",
    "#064dac",
    "#06ac5b",
    "#15ac06",
    "#4bac06",
    "#80ac06",
    "#acaa06",
    "#ac8b06",
    "#ac5506",
]
let isFirstClick = true
let listening = false

function onMicrophoneDenied() {
    console.log('denied')
}

/**
 * Updates the LEDs depending on
 * the detected volume
 * 
 * @param {Float} vol value of the volume detected from the microphone
 */
function leds(vol) {
    let leds = [...document.getElementsByClassName('led')]
    let range = leds.slice(0, Math.round(vol))

    for (var i = 0; i < leds.length; i++) {
        leds[i].style.boxShadow = "-2px -2px 4px 0px #a7a7a73d, 2px 2px 4px 0px #0a0a0e5e";
        leds[i].style.height = "22px"
    }

    for (var i = 0; i < range.length; i++) {
        range[i].style.boxShadow = `5px 2px 5px 0px #0a0a0e5e inset, -2px -2px 1px 0px #a7a7a73d inset, -2px -2px 30px 0px ${ledColor[i]} inset`;
        range[i].style.height = "25px"
    }
}

/**
 * Sets up the communication between the
 * AudioWorkletNode, the microphone and the AudioWorkletProcessor
 * 
 * @param {MediaStream} stream If the user grants access to the microphone,
 * this receives the MediaStream object needed by this implementation
 */
async function onMicrophoneGranted(stream) {
    // Instantiate everything only the first
    // time the button is pressed
    if (isFirstClick) {
        // Initialize the AudioContext object
        audioContext = new AudioContext()

        // Add the AudioWorkletProcessor
        // from another script with the addModule method
        await audioContext.audioWorklet.addModule('vumeter-processor.js')

        // Create a MediaStreamSource object
        // from the MediaStream object granted by
        // the user
        let microphone = audioContext.createMediaStreamSource(stream)

        // Create the AudioWorkletNode, passing the
        // context and the name of the processor registered
        // in vumeter-processor.js
        const node = new AudioWorkletNode(audioContext, 'vumeter')

        // Listen here for any message the AudioWorkletProcessor
        // posts from its process method; this is where
        // you learn the volume level
        node.port.onmessage = event => {
            let _volume = 0
            let _sensitivity = 5 // Just to add some sensitivity to our equation
            if (event.data.volume)
                _volume = event.data.volume;
            leds((_volume * 100) / _sensitivity)
        }

        // This is how to connect our microphone
        // to the AudioWorkletNode and on to the
        // output of the audioContext
        microphone.connect(node).connect(audioContext.destination)

        isFirstClick = false
    }

    // Check whether the button is on or off
    // and suspend or resume the microphone listening
    let audioButton = document.getElementsByClassName('audio-control')[0]
    if (listening) {
        audioContext.suspend()
        audioButton.style.boxShadow = "-2px -2px 4px 0px #a7a7a73d, 2px 2px 4px 0px #0a0a0e5e"
        audioButton.style.fontSize = "25px"
    } else {
        audioContext.resume()
        audioButton.style.boxShadow = "5px 2px 5px 0px #0a0a0e5e inset, -2px -2px 1px 0px #a7a7a73d inset"
        audioButton.style.fontSize = "24px"
    }

    listening = !listening
}

function activeSound () {
    // Ask the user for permission
    // to use the microphone
    try {
        navigator.getUserMedia = navigator.getUserMedia || navigator.webkitGetUserMedia || navigator.mozGetUserMedia;

        navigator.getUserMedia(
            { audio: true, video: false },
            onMicrophoneGranted,
            onMicrophoneDenied
        );
    } catch(e) {
        alert(e)
    }
}

document.getElementById('audio').addEventListener('click', () => {
    activeSound()
})

The implementation in this next part (vumeter-processor.js) is what actually tells you the volume of your microphone:

const SMOOTHING_FACTOR = 0.8;
const MINIMUM_VALUE = 0.00001;

// This is the way to register an AudioWorkletProcessor
// it's necessary to declare a name, in this case
// the name is "vumeter"
registerProcessor('vumeter', class extends AudioWorkletProcessor {

  _volume
  _updateIntervalInMS
  _nextUpdateFrame

  constructor () {
    super();
    this._volume = 0;
    this._updateIntervalInMS = 25;
    this._nextUpdateFrame = this._updateIntervalInMS;
    this.port.onmessage = event => {
      if (event.data.updateIntervalInMS)
        this._updateIntervalInMS = event.data.updateIntervalInMS;
    }
  }

  get intervalInFrames () {
    return this._updateIntervalInMS / 1000 * sampleRate;
  }

  process (inputs, outputs, parameters) {
    const input = inputs[0];

    // Note that the input will be down-mixed to mono; however, if no inputs are
    // connected then zero channels will be passed in.
    if (input.length > 0) {
      const samples = input[0];
      let sum = 0;
      let rms = 0;

      // Calculate the sum of squares.
      for (let i = 0; i < samples.length; ++i)
        sum += samples[i] * samples[i];

      // Calculate the RMS level and update the volume.
      rms = Math.sqrt(sum / samples.length);
      this._volume = Math.max(rms, this._volume * SMOOTHING_FACTOR);

      // Update and sync the volume property with the main thread.
      this._nextUpdateFrame -= samples.length;
      if (this._nextUpdateFrame < 0) {
        this._nextUpdateFrame += this.intervalInFrames;
        this.port.postMessage({volume: this._volume});
      }
    }
    
    return true;
  }
});
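One detail that is easy to miss in the processor above: `this._volume = Math.max(rms, this._volume * SMOOTHING_FACTOR)` gives the meter a fast attack and a slow exponential release, so the LEDs jump up instantly on a loud sound but fall back smoothly. The update rule can be simulated on its own (the function name here is mine, not from the processor):

```javascript
const SMOOTHING_FACTOR = 0.8;

// The same update rule the processor applies once per 128-sample block:
// take the new RMS if it is louder, otherwise decay the held value.
function updateVolume(volume, rms) {
    return Math.max(rms, volume * SMOOTHING_FACTOR);
}

// A loud block pushes the meter up immediately...
let vol = updateVolume(0, 0.5);   // 0.5
// ...then silent blocks let it decay by 20% per block.
vol = updateVolume(vol, 0);       // 0.4
vol = updateVolume(vol, 0);       // ≈ 0.32
```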

Finally, here is the HTML that displays the detected volume:

<div class="container">
    <span>Microphone</span>
    <div class="volumen-wrapper">
        <div class="led"></div>
        <div class="led"></div>
        <div class="led"></div>
        <div class="led"></div>
        <div class="led"></div>
                
        <div class="led"></div>
        <div class="led"></div>
        <div class="led"></div>
        <div class="led"></div>
        <div class="led"></div>
    </div>

    <div class="control-audio-wrapper">
        <div id="audio" class="audio-control">&#127908;</div>
    </div>
</div>
<script type="module" src="./index.js"></script>

This is the result (screenshot of the LED meter in action):

Here is my implementation on CodePen.

Thanks for the excellent explanation and sample code. These days MediaDevices.getUserMedia() should be used instead of getUserMedia(). - zahra_oveyedzade
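For anyone applying that suggestion: the promise-based navigator.mediaDevices.getUserMedia() replaces the callback-style call and prefix chain in the answer's activeSound(). A rough sketch (the callback names mirror the answer's; the stub bodies are mine):

```javascript
const constraints = { audio: true, video: false };

// Stubs standing in for the answer's callbacks.
function onMicrophoneGranted(stream) { /* wire up the AudioWorkletNode here */ }
function onMicrophoneDenied(err) { console.log('denied', err); }

function activeSound() {
    // Promise-based API; supported by all current browsers, but only
    // in secure contexts (HTTPS or localhost).
    navigator.mediaDevices.getUserMedia(constraints)
        .then(onMicrophoneGranted)
        .catch(onMicrophoneDenied);
}
```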
I do need to make that change; I have been reading up on it. Thanks for the heads-up, @zahra_oveyedzade! - forgived
The audioContext.audioWorklet.addModule() mentioned in the documentation is not supported on Safari and iOS. Is there a workaround that covers both platforms? https://developer.mozilla.org/en-US/docs/Web/API/Worklet/addModule - Thinkal VB
The sensitivity seems a bit arbitrary. When I keep it at 5 as in the example, I get recordings that clip before the meter even turns red... Is something off in the actual calculation? Why can't it be the real volume, without the * 100 / sensitivity scaling? - Roel
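On that point: the `* 100 / sensitivity` factor in the answer only exists to map the 0..1 RMS value onto ten LEDs, so it is indeed arbitrary. A less arbitrary reading is to convert RMS to decibels relative to full scale (dBFS). This is a sketch of that idea, not code from the answer; the floor value happens to match the processor's otherwise-unused MINIMUM_VALUE constant:

```javascript
const MINIMUM_VALUE = 0.00001; // floor to avoid log10(0) = -Infinity

// RMS (0..1) -> decibels relative to full scale (dBFS).
// 1.0 maps to 0 dB, 0.1 to -20 dB; silence is clamped to -100 dB.
function rmsToDb(rms) {
    return 20 * Math.log10(Math.max(rms, MINIMUM_VALUE));
}
```

The LED mapping would then divide a fixed dB range (say -60..0 dB) across the ten LEDs instead of relying on a magic sensitivity number.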
You forgot to define sampleRate. - Ali Mert Çakar
@AliMertÇakar sampleRate is defined in the AudioWorkletGlobalScope, which this class has access to. - IanB
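To make that reply concrete: inside the processor, sampleRate is a global exposed by the AudioWorkletGlobalScope, so vumeter-processor.js never defines it. Outside a worklet you would have to supply a value yourself; assuming a typical 48 kHz context, the intervalInFrames getter works out to:

```javascript
const updateIntervalInMS = 25;   // same default as the processor
const sampleRate = 48000;        // assumption for this sketch; inside an
                                 // AudioWorkletProcessor it is a global,
                                 // not something you define yourself

// Same formula as the intervalInFrames getter in vumeter-processor.js.
const intervalInFrames = updateIntervalInMS / 1000 * sampleRate;
// 25 ms at 48,000 Hz is 1200 frames, so with 128-sample render quanta the
// processor posts a volume message roughly every 1200 / 128 ≈ 9 calls
// to process().
```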
