How can I extract the preceding audio (from microphone) as a buffer when silence is detected (JS)?

后悔当初 2020-12-05 11:20

I'm using the Google Cloud API for Speech-to-Text, with a NodeJS back-end. The app needs to be able to listen for voice commands and transmit them to the back-end as a buffer when silence is detected.
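
(For reference, the NodeJS side of such a setup could look roughly like the sketch below. It is only an illustration: the Express app, the /speech route, and the WEBM_OPUS / 48 kHz settings are assumptions made for the example, not details from the question.)

    const express = require('express');
    const speech = require('@google-cloud/speech');

    const app = express();
    const client = new speech.SpeechClient();

    // Receive the raw audio buffer posted by the browser and run it through
    // Google Cloud Speech-to-Text. Route name and audio settings are examples only.
    app.post('/speech', express.raw({ type: '*/*', limit: '10mb' }), async (req, res) => {
      try {
        const [response] = await client.recognize({
          audio: { content: req.body.toString('base64') },
          config: {
            encoding: 'WEBM_OPUS',   // must match what the browser actually records
            sampleRateHertz: 48000,
            languageCode: 'en-US',
          },
        });
        const transcript = response.results
          .map(result => result.alternatives[0].transcript)
          .join('\n');
        res.json({ transcript });
      } catch (err) {
        console.error(err);
        res.sendStatus(500);
      }
    });

    app.listen(3000);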

3 Answers
  •  不知归路
    2020-12-05 11:43

    I'm not entirely sure what exactly is being asked in the question, so this answer only aims to show a way to detect silence in an audio MediaStream.


    To detect silence in such a stream, you can use an AnalyserNode, on which you call the getByteFrequencyData method at regular intervals and check whether there has been any sound louder than your expected level for a given amount of time.

    You can set the threshold level directly with the minDecibels property of the AnalyserNode.

    function detectSilence(
      stream,
      onSoundEnd = _=>{},
      onSoundStart = _=>{},
      silence_delay = 500,
      min_decibels = -80
      ) {
      const ctx = new AudioContext();
      const analyser = ctx.createAnalyser();
      const streamNode = ctx.createMediaStreamSource(stream);
      streamNode.connect(analyser);
      analyser.minDecibels = min_decibels;
    
      const data = new Uint8Array(analyser.frequencyBinCount); // will hold our data
      let silence_start = performance.now();
      let triggered = false; // trigger only once per silence event
    
      function loop(time) {
        requestAnimationFrame(loop); // check again on the next animation frame (~60 times per second)
        analyser.getByteFrequencyData(data); // get current data
        if (data.some(v => v)) { // if there is data above the given db limit
          if (triggered) {
            triggered = false;
            onSoundStart();
          }
          silence_start = time; // set it to now
        }
        if (!triggered && time - silence_start > silence_delay) {
          onSoundEnd();
          triggered = true;
        }
      }
      loop(performance.now()); // start the loop with a valid timestamp
    }
    
    function onSilence() {
      console.log('silence');
    }
    function onSpeak() {
      console.log('speaking');
    }
    
    navigator.mediaDevices.getUserMedia({
        audio: true
      })
      .then(stream => {
        detectSilence(stream, onSilence, onSpeak);
        // do something else with the stream
      })
      .catch(console.error);

    And as a fiddle, since Stack Snippets may block getUserMedia.
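
    To tie this back to the question of getting the preceding audio as a buffer (not part of the original answer, just a sketch): you can run a MediaRecorder on the same stream, start it when onSoundStart fires, and when onSoundEnd fires stop it and post the recorded chunks to the back-end. The '/speech' endpoint below is a hypothetical placeholder.

    navigator.mediaDevices.getUserMedia({ audio: true })
      .then(stream => {
        let recorder = null;
        let chunks = [];

        const onSpeak = () => {
          // sound resumed: start a fresh recording
          chunks = [];
          recorder = new MediaRecorder(stream);
          recorder.ondataavailable = e => chunks.push(e.data);
          recorder.start();
        };

        const onSilence = () => {
          if (!recorder || recorder.state === 'inactive') return;
          recorder.onstop = async () => {
            // everything recorded since speech started, as one binary buffer
            const blob = new Blob(chunks, { type: recorder.mimeType });
            const buffer = await blob.arrayBuffer();
            fetch('/speech', { method: 'POST', body: buffer }) // hypothetical endpoint
              .catch(console.error);
          };
          recorder.stop();
        };

        detectSilence(stream, onSilence, onSpeak);
      })
      .catch(console.error);

    Note that with detectSilence above, onSoundStart only fires after a silence has been detected first, so sound that is already present at startup will not start a recording until after the first silent period.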
