Question
I would like to record the audio stream from my Angular web app to my ASP.NET Core API.
I think using SignalR and its WebSockets is a good way to do that.
With this TypeScript code, I'm able to get a MediaStream:
import { HubConnection } from '@aspnet/signalr';
[...]
private stream: MediaStream;
private connection: webkitRTCPeerConnection;
private _hubConnection: HubConnection;
@ViewChild('video') video;
[...]
navigator.mediaDevices.getUserMedia({ audio: true })
  .then(stream => {
    console.trace('Received local stream');
    this.video.srcObject = stream;
    this.stream = stream;
    this._hubConnection = new HubConnection('[MY_API_URL]/webrtc');
    this._hubConnection.send("SendStream", stream);
  })
  .catch(function (e) {
    console.error('getUserMedia() error: ' + e.message);
  });
And I handle the stream in the .NET Core API with:
public class MyHub : Hub
{
    public void SendStream(object o)
    {
    }
}
But when I cast o to System.IO.Stream, I get null.
While reading the WebRTC documentation, I saw information about RTCPeerConnection, ICE connections, and so on. Do I need those?
How can I stream audio from a web client to an ASP.NET Core API using SignalR? Is there documentation or a GitHub example?
Thanks for your help.
Answer 1:
I found a way to get access to the microphone stream and transmit it to the server; here is the code:
  private audioCtx: AudioContext;
  private stream: MediaStream;
  // Convert Float32 samples in [-1, 1] to 16-bit PCM.
  convertFloat32ToInt16(buffer: Float32Array) {
    let l = buffer.length;
    let buf = new Int16Array(l);
    while (l--) {
      // Clamp to [-1, 1] before scaling to the 16-bit range.
      buf[l] = Math.max(-1, Math.min(1, buffer[l])) * 0x7FFF;
    }
    return buf.buffer;
  }
  startRecording() {
    navigator.mediaDevices.getUserMedia({ audio: true })
      .then(stream => {
        this.audioCtx = new AudioContext();
        this.audioCtx.onstatechange = (state) => { console.log(state); }
        var scriptNode = this.audioCtx.createScriptProcessor(4096, 1, 1);
        scriptNode.onaudioprocess = (audioProcessingEvent) => {
          // The input buffer holds the raw samples captured from the microphone
          var inputBuffer = audioProcessingEvent.inputBuffer;
          // Loop through the input channels (in this case there is only one)
          for (var channel = 0; channel < inputBuffer.numberOfChannels; channel++) {
            var chunk = inputBuffer.getChannelData(channel);
            // Convert to 16-bit PCM before sending, because endianness does matter
            this.MySignalRService.send("SendStream", this.convertFloat32ToInt16(chunk));
          }
        }
        var source = this.audioCtx.createMediaStreamSource(stream);
        source.connect(scriptNode);
        // The script node must be connected to a destination for onaudioprocess to fire
        scriptNode.connect(this.audioCtx.destination);
        this.stream = stream;
      })
      .catch(function (e) {
        console.error('getUserMedia() error: ' + e.message);
      });
  }
  stopRecording() {
    try {
      let stream = this.stream;
      stream.getAudioTracks().forEach(track => track.stop());
      stream.getVideoTracks().forEach(track => track.stop());
      this.audioCtx.close();
    }
    catch (error) {
      console.error('stopRecording() error: ' + error);
    }
  }
The next step will be to convert the Int16Array chunks to a WAV file; a sketch of that is below.
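Here is a minimal sketch of that conversion, assuming mono 16-bit PCM at the AudioContext's sample rate (the pcm16ToWav function and its parameters are illustrative, not part of the original code):

  // Wrap raw 16-bit PCM samples in a standard 44-byte WAV (RIFF) header.
  // All multi-byte WAV header fields are little-endian, hence the 'true' flags.
  function pcm16ToWav(pcm: ArrayBuffer, sampleRate: number, channels: number = 1): Blob {
    const header = new ArrayBuffer(44);
    const view = new DataView(header);
    const writeString = (offset: number, s: string) => {
      for (let i = 0; i < s.length; i++) view.setUint8(offset + i, s.charCodeAt(i));
    };
    writeString(0, 'RIFF');
    view.setUint32(4, 36 + pcm.byteLength, true);        // total size minus first 8 bytes
    writeString(8, 'WAVE');
    writeString(12, 'fmt ');
    view.setUint32(16, 16, true);                        // fmt chunk size
    view.setUint16(20, 1, true);                         // audio format: PCM
    view.setUint16(22, channels, true);
    view.setUint32(24, sampleRate, true);
    view.setUint32(28, sampleRate * channels * 2, true); // byte rate
    view.setUint16(32, channels * 2, true);              // block align
    view.setUint16(34, 16, true);                        // bits per sample
    writeString(36, 'data');
    view.setUint32(40, pcm.byteLength, true);
    return new Blob([header, pcm], { type: 'audio/wav' });
  }

For example, pcm16ToWav(concatenatedChunks, this.audioCtx.sampleRate) would produce a playable Blob once the received chunks are concatenated into a single ArrayBuffer.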
Sources which helped me:
- https://subvisual.co/blog/posts/39-tutorial-html-audio-capture-streaming-to-node-js-no-browser-extensions/
- https://medium.com/@yushulx/learning-how-to-capture-and-record-audio-in-html5-6fe68a769bf9
Note: I didn't add the code on how to configure SignalR, as that was not the purpose here.
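That said, a minimal sketch of what a wrapper like MySignalRService could look like follows; it is hypothetical and assumes the @aspnet/signalr package (the HubConnectionBuilder API differs between the early previews and 1.x):

  import { HubConnection, HubConnectionBuilder } from '@aspnet/signalr';

  export class MySignalRService {
    private connection: HubConnection;

    start(url: string): Promise<void> {
      // Build and open the connection; SignalR uses WebSockets when available
      this.connection = new HubConnectionBuilder().withUrl(url).build();
      return this.connection.start();
    }

    send(method: string, data: ArrayBuffer): void {
      // The default JSON protocol cannot carry raw binary, so this sketch sends
      // the samples as a plain number array (the MessagePack protocol avoids this)
      this.connection.send(method, Array.from(new Int16Array(data)))
        .catch(err => console.error(err));
    }
  }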
Source: https://stackoverflow.com/questions/50220281/webrtc-and-asp-netcore