Sending a MediaStream to host Server with WebRTC after it is captured by getUserMedia

You cannot upload the live stream itself while it is running; because it is a live stream, there is no finished file to send until capture stops.

So, this leaves you with a handful of options.

  1. Record the audio stream using one of the many recorders out there; RecordRTC works fairly well. Wait until the recording is finished, then upload the file.
  2. Send smaller chunks of recorded audio on a timer and merge them again server side.
  3. Send the audio packets as they occur over WebSockets to your server so that you can manipulate and merge them there. My version of RecordRTC does this.
  4. Make an actual peer connection with your server so it can grab the raw RTP stream, and record the stream using some lower-level code. This can be done easily with the Janus-Gateway.
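
Option 2 can be sketched with the `MediaRecorder` API's timeslice parameter, which delivers a Blob of recorded audio at a fixed interval. The upload endpoint and the 5-second interval here are illustrative assumptions, not part of the original answer; the wiring logic is separated into a plain function so it is easy to test.

```javascript
// Sketch of option 2: record in timeslices and hand each non-empty
// chunk to an upload callback as it arrives.
function uploadInChunks(recorder, upload, timesliceMs = 5000) {
  recorder.ondataavailable = event => {
    if (event.data.size > 0) upload(event.data);
  };
  recorder.start(timesliceMs); // fire ondataavailable every timesliceMs
}

// In the browser you would wire it up roughly like this
// ('/upload-chunk' is a hypothetical endpoint):
//
// navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
//   const recorder = new MediaRecorder(stream);
//   uploadInChunks(recorder, blob => {
//     const body = new FormData();
//     body.append('chunk', blob);
//     fetch('/upload-chunk', { method: 'POST', body });
//   });
// });
```

Chunks produced this way by a single `MediaRecorder` session can be concatenated back into one file on the server.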

As for waiting to send the stream versus sending it in chunks, it all depends on how long you are recording. For longer recordings, sending the recording in chunks, or actively streaming audio packets over WebSockets, is the better solution: uploading and storing large audio files from the client side is arduous for the client.
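
On the server side, merging the received chunks can be as simple as concatenating them in arrival order. The sketch below assumes a Node server and the `ws` package for the WebSocket transport; the port and output filename are illustrative.

```javascript
// Chunks from one continuous MediaRecorder session concatenate
// into a single playable file.
function mergeChunks(chunks) {
  return Buffer.concat(chunks);
}

// Hypothetical wiring with the 'ws' package (not run here):
//
// const fs = require('fs');
// const { WebSocketServer } = require('ws');
// new WebSocketServer({ port: 8080 }).on('connection', socket => {
//   const chunks = [];
//   socket.on('message', data => chunks.push(data));
//   socket.on('close', () =>
//     fs.writeFileSync('recording.webm', mergeChunks(chunks)));
// });
```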

Firefox actually has its own solution for recording, but it is not supported in Chrome, so it may not work in your situation.

As an aside, the signalling method mentioned is for session setup and teardown and really has nothing to do with the media itself. You would only need to worry about it if you were using solution 4 above.


A good fit would be the MediaRecorder API, but it is less widely supported than the Web Audio API; with the latter you can capture audio through a ScriptProcessorNode, or use Recorder.js (or build your own script node based on it).
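
The Web Audio route works because a ScriptProcessorNode hands you raw Float32 sample buffers, which you typically convert to 16-bit PCM before sending; this conversion is the same idea Recorder.js uses internally. The buffer size and the `socket` variable in the commented wiring are assumptions for illustration.

```javascript
// Clamp each Float32 sample to [-1, 1] and scale it to a signed
// 16-bit integer, ready to send over a WebSocket.
function floatTo16BitPCM(float32) {
  const out = new Int16Array(float32.length);
  for (let i = 0; i < float32.length; i++) {
    const s = Math.max(-1, Math.min(1, float32[i]));
    out[i] = s < 0 ? s * 0x8000 : s * 0x7fff;
  }
  return out;
}

// Browser wiring (not run here; 'socket' is an open WebSocket):
//
// const ctx = new AudioContext();
// navigator.mediaDevices.getUserMedia({ audio: true }).then(stream => {
//   const source = ctx.createMediaStreamSource(stream);
//   const node = ctx.createScriptProcessor(4096, 1, 1);
//   node.onaudioprocess = e => {
//     const pcm = floatTo16BitPCM(e.inputBuffer.getChannelData(0));
//     socket.send(pcm.buffer);
//   };
//   source.connect(node);
//   node.connect(ctx.destination);
// });
```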