Is there a way to use the Web Audio API to sample audio faster than real-time?

春和景丽 2020-11-30 03:47

I'm playing around with the Web Audio API, trying to find a way to import an mp3 (so this is Chrome-only for now) and generate a waveform of it on a canvas. I can do this in real time, but my goal is to do it faster than real time.

2 Answers
  • 2020-11-30 03:51

    There is a really amazing 'offline' mode of the Web Audio API that allows you to pre-process an entire file through an audio context and then do something with the result:

    // OfflineAudioContext takes (numberOfChannels, length, sampleRate);
    // here: stereo, 40 seconds at 44.1 kHz (size the length to your source).
    // Older WebKit builds exposed this as webkitOfflineAudioContext.
    var context = new OfflineAudioContext(2, 44100 * 40, 44100);
    
    var source = context.createBufferSource();
    source.buffer = buffer; // an AudioBuffer, e.g. from decodeAudioData()
    source.connect(context.destination);
    source.start(0); // was noteOn(0) in the old, deprecated API
    
    context.oncomplete = function(e) {
      var audioBuffer = e.renderedBuffer;
    };
    
    context.startRendering();
    

    So the setup looks exactly the same as the real-time processing mode, except that you set an oncomplete callback and call startRendering(). What you get back in e.renderedBuffer is an AudioBuffer.
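
    Tying this back to the original question (drawing the waveform on a canvas), here is a minimal sketch of one way to reduce that rendered AudioBuffer to per-pixel min/max peaks. The 'waveform' canvas id and the drawPeaks name are illustrative assumptions, not part of this answer:

    function drawPeaks(audioBuffer) {
      var canvas = document.getElementById('waveform'); // assumed element
      var ctx2d = canvas.getContext('2d');
      var data = audioBuffer.getChannelData(0); // first channel's samples
      var step = Math.floor(data.length / canvas.width);
      ctx2d.clearRect(0, 0, canvas.width, canvas.height);
      for (var x = 0; x < canvas.width; x++) {
        // Track the extremes of the slice of samples under this pixel.
        var min = 1.0, max = -1.0;
        for (var i = x * step; i < (x + 1) * step; i++) {
          if (data[i] < min) min = data[i];
          if (data[i] > max) max = data[i];
        }
        // Map [-1, 1] to canvas y-coordinates and draw a vertical bar.
        var yTop = (1 - max) * canvas.height / 2;
        var yBottom = (1 - min) * canvas.height / 2;
        ctx2d.fillRect(x, yTop, 1, Math.max(1, yBottom - yTop));
      }
    }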

  • 2020-11-30 03:59

    I got this to work with OfflineAudioContext, using the code below. The complete example shows how to use it to compute the FFT magnitudes for a linear chirp. Once you have the concept of hooking the nodes together, you can do just about anything with it offline.

    function fsin(freq, phase, t) {
      return Math.sin(2 * Math.PI * freq * t + phase)
    }
    
    function linearChirp(startFreq, endFreq, duration, sampleRate) {
      if (duration === undefined) {
        duration = 1; // seconds
      }
      if (sampleRate === undefined) {
        sampleRate = 44100; // per second
      }
      var numSamples = Math.floor(duration * sampleRate);
      var chirp = new Array(numSamples);
      var k = (endFreq - startFreq) / duration; // sweep rate in Hz/second
      for (var i = 0; i < numSamples; i++) {
        var t = i / sampleRate;
        // A linear chirp has phase 2*pi*(f0*t + (k/2)*t^2); sweeping the
        // frequency argument of fsin() as f0 + k*t would double the rate.
        chirp[i] = fsin(startFreq + (k / 2) * t, 0, t);
      }
      return chirp;
    }
    
    function AnalyzeWithFFT() {
      var numChannels = 1; // mono
      var duration = 1; // seconds
      var sampleRate = 44100; // Any value in [22050, 96000] is allowed
      var chirp = linearChirp(10000, 20000, duration, sampleRate);
      var numSamples = chirp.length;
    
      // Now we create the offline context to render this with.
      var ctx = new OfflineAudioContext(numChannels, numSamples, sampleRate);
    
      // Our example wires up an analyzer node in between source and destination.
      // You may or may not want to do that, but if you can follow how things are
      // connected, it will at least give you an idea of what is possible.
      //
      // This is what computes the spectrum (FFT) information for us.
      var analyser = ctx.createAnalyser();
    
      // There are abundant examples of how to get audio from a URL or the
      // microphone. This one shows you how to create it programmatically (we'll
      // use the chirp array above).
      var source = ctx.createBufferSource();
      var chirpBuffer = ctx.createBuffer(numChannels, numSamples, sampleRate);
      var data = chirpBuffer.getChannelData(0); // first and only channel
      // Channel data is a Float32Array of samples in [-1, 1], so the chirp
      // values can be copied in directly (no byte quantization is needed).
      for (var i = 0; i < numSamples; i++) {
        data[i] = chirp[i];
      }
      source.buffer = chirpBuffer;
    
      // Now we wire things up: source (data) -> analyser -> offline destination.
      source.connect(analyser);
      analyser.connect(ctx.destination);
    
      // When the audio buffer has been processed, this will be called.
      ctx.oncomplete = function(event) {
        console.log("audio processed");
        // To get the spectrum data (e.g., if you want to plot it), you use this.
        var frequencyBins = new Uint8Array(analyser.frequencyBinCount);
        // getByteFrequencyData() fills the array in place (it returns
        // undefined), so log the array itself.
        analyser.getByteFrequencyData(frequencyBins);
        console.log(frequencyBins);
        // You can also get the result of any filtering or any other stage here:
        console.log(event.renderedBuffer);
      };
    
      // Everything is now wired up - start the source so that it produces a
      // signal, and tell the context to start rendering.
      //
      // oncomplete above will be called when it is done.
      source.start();
      ctx.startRendering();
    }
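
    In current browsers, startRendering() also returns a promise that resolves with the rendered AudioBuffer, so the oncomplete callback is optional. A minimal sketch, reusing the ctx, analyser, and source from above:

    source.start();
    ctx.startRendering().then(function(renderedBuffer) {
      // renderedBuffer is the same data as event.renderedBuffer above.
      var frequencyBins = new Uint8Array(analyser.frequencyBinCount);
      analyser.getByteFrequencyData(frequencyBins);
      console.log(frequencyBins);
      console.log(renderedBuffer);
    });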
    