Can I record the output of an <audio> without use of the microphone?


Question


I have an <audio> element and I'm changing the speed, start/end bounds, and pitch. I want to see if it's possible to record the audio I hear in the browser. However, I don't want to just record it with the microphone because of the lower quality.

I could do the same effects server-side but I'd rather not since I'd be basically duplicating the same functionality with two different technologies.


In response to a flag vote claiming it's "unclear what I'm asking", I'll rephrase.

I have an <audio> element playing on the page. I have some JavaScript manipulating the play-rate, volume, etc. I then want my browser to record the audio as I hear it. This is not the microphone. I want to create a new audio file that is as close as possible to the one playing. If it's playing at 75% volume, then the new file will be at 75% volume.
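For context, the kind of manipulation I mean is nothing more exotic than this (the selector and values are just placeholders):

const audio = document.querySelector("audio");
audio.playbackRate = 1.25; // speed
audio.volume = 0.75;       // volume
audio.currentTime = 30;    // jump to my start bound
// let the rate change shift the pitch too (property is still prefixed in some browsers)
if ("preservesPitch" in audio) audio.preservesPitch = false;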


Answer 1:


In supporting browsers, you could use the MediaElement.captureStream() method along with the MediaRecorder API.

But note that these technologies are still in active development and that current implementations are still full of bugs.
For example, in your case, current stable Firefox will stop rendering the original media's audio if you change its volume while recording... I didn't have time to search for a bug report on it, but this is just one of the many bugs you'll find.

// here we will save all the chunks of our recording
const chunks = [];
// wait until the original media is ready
audio.oncanplay = function() {
  audio.volume = 0.5; // just for your example
  // FF still prefixes this unstable method
  const stream = audio.captureStream ? audio.captureStream() : audio.mozCaptureStream();
  // create a MediaRecorder from our stream
  const rec = new MediaRecorder(stream);
  // every time we've got a bit of data, store it
  rec.ondataavailable = e => chunks.push(e.data);
  // once everything is done
  rec.onstop = e => {
    audio.pause();
    // concatenate our chunks into one file, keeping the recorder's MIME type
    const final = new Blob(chunks, { type: rec.mimeType });
    const a = new Audio(URL.createObjectURL(final));
    a.controls = true;
    document.body.append(a);
  };
  rec.start();
  // record for 6 seconds
  setTimeout(() => rec.stop(), 6000);
  // for the demo, change the volume at half-time
  setTimeout(() => audio.volume = 1, 3000);
};

// FF will "taint" the stream, even if the media is served with correct CORS,
// so fetch the file and feed it from a same-origin blob: URL instead
fetch("https://dl.dropboxusercontent.com/s/8c9m92u1euqnkaz/GershwinWhiteman-RhapsodyInBluePart1.mp3")
  .then(resp => resp.blob())
  .then(b => audio.src = URL.createObjectURL(b));
<audio id="audio" autoplay controls></audio>

For older browsers, you could use the Web Audio API's createMediaElementSource method to pass your audio element's media through the API.
From there, you can extract the raw PCM data into ArrayBuffers and save it.
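If you want to do that extraction by hand, without any library, here is a rough sketch of the idea using a ScriptProcessorNode (deprecated, but that's what existed at the time); the variable names are mine and the actual WAV encoding is left out, since that's the part recorder.js handles below:

const audioCtx = new AudioContext();
// `audio` is the <audio> element, as in the demo below
const source = audioCtx.createMediaElementSource(audio);
// 4096-sample buffers, mono in / mono out
const processor = audioCtx.createScriptProcessor(4096, 1, 1);
const pcmChunks = []; // the Float32Array buffers we collect

processor.onaudioprocess = e => {
  const input = e.inputBuffer.getChannelData(0);
  pcmChunks.push(new Float32Array(input));     // copy, the underlying buffer gets reused
  e.outputBuffer.getChannelData(0).set(input); // pass the audio through so it stays audible
};

source.connect(processor);
processor.connect(audioCtx.destination);
// later: flatten pcmChunks and write a WAV header around the samples yourself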

In the following demo, I'll use the recorder.js library, which greatly helps with the extraction and the save-to-WAV step.

audio.oncanplay = function() {
  var audioCtx = new AudioContext();
  // route the element's audio through the Web Audio graph
  var source = audioCtx.createMediaElementSource(audio);
  var gainNode = audioCtx.createGain();

  gainNode.gain.value = 0.5;

  source.connect(gainNode);
  gainNode.connect(audioCtx.destination);

  // recorder.js taps the node it is given and collects the PCM data
  var rec = new Recorder(gainNode);

  rec.record();
  // for the demo, change the volume at half-time
  setTimeout(function() {
    gainNode.gain.value = 1;
  }, 3000);
  // record for 6 seconds, then export a WAV file
  setTimeout(function() {
    rec.stop();
    audio.pause();
    rec.exportWAV(function(blob) {
      var a = new Audio(URL.createObjectURL(blob));
      a.controls = true;
      document.body.appendChild(a);
    });
  }, 6000);
};
<script src="https://rawgit.com/mattdiamond/Recorderjs/master/dist/recorder.js"></script>
<audio id="audio" crossOrigin="anonymous" controls src="https://dl.dropboxusercontent.com/s/8c9m92u1euqnkaz/GershwinWhiteman-RhapsodyInBluePart1.mp3" autoplay></audio>



Answer 2:


As Kaiido mentions in his answer, captureStream() is one way of doing it. However, it is not fully supported in Chrome and Firefox yet. MediaRecorder also does not allow the track set to change during a recording, and a MediaStream coming from captureStream() might have such changes (depending on the application), which ends the recording prematurely.

If you need a supported way of recording only audio from a media element, you can use a MediaElementAudioSourceNode, pipe that to a MediaStreamAudioDestinationNode, and pipe the stream attribute of that to MediaRecorder.

Here's an example you can use on a page with an existing audio element:

const a = document.getElementsByTagName("audio")[0];
const ac = new AudioContext();
const source = ac.createMediaElementSource(a);

// The media element source stops direct audio playback from the audio element.
// Hook it up to speakers again.
source.connect(ac.destination);

// Hook up the audio element to a MediaStream.
const dest = ac.createMediaStreamDestination();
source.connect(dest);

// Record 10s of audio with MediaRecorder.
const recorder = new MediaRecorder(dest.stream);
recorder.start();
recorder.ondataavailable = ev => {
  console.info("Finished recording. Got blob:", ev.data);
  a.src = URL.createObjectURL(ev.data);
  a.play();
};
setTimeout(() => recorder.stop(), 10 * 1000);

Note that neither approach works with cross-origin audio sources without a proper CORS setup, since both Web Audio and recordings would let the application inspect the audio data.
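Concretely, that means requesting the media in CORS mode and having the server opt in; a rough sketch (the URL is a placeholder):

const audio = document.querySelector("audio");
// 1. request the media in CORS mode; set this before src
audio.crossOrigin = "anonymous";
audio.src = "https://media.example.com/track.mp3"; // placeholder URL
// 2. the server hosting the file has to answer with a matching header, e.g.
//    Access-Control-Allow-Origin: *
// Without both, a MediaElementAudioSourceNode outputs silence and a captured
// stream comes out muted/tainted, depending on the browser.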



Source: https://stackoverflow.com/questions/42336604/can-i-record-the-output-of-an-audio-without-use-of-the-microphone
