PROBLEM:
WebRTC gives us peer-to-peer video/audio connections. It is perfect for p2p calls and hangouts. But what about broadcasting (one-to-many, for example one broadcaster to thousands of viewers)?
As @MuazKhan noted above:
https://github.com/muaz-khan/WebRTC-Scalable-Broadcast
It works in Chrome, and there is no audio broadcast yet, but it seems to be a first solution.
A Scalable WebRTC peer-to-peer broadcasting demo.
This module simply initializes socket.io and configures it in such a way that a single broadcast can be relayed to an unlimited number of users without any bandwidth/CPU usage issues. Everything happens peer-to-peer!
This should definitely be possible to complete.
Others are also able to achieve this: http://www.streamroot.io/
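To make the relay idea concrete, here is a minimal sketch (browser-side TypeScript, not code from that repository) of how a viewer could forward the tracks it receives from its upstream peer to the next viewer down the chain; sendToNextViewer is a hypothetical stand-in for the socket.io signaling the module actually uses:

```typescript
// One relay hop: receive the stream from the parent peer, re-send it to a child peer.
const fromParent = new RTCPeerConnection(); // connection to the broadcaster or an upstream viewer
const toChild = new RTCPeerConnection();    // connection to the next viewer down the chain

// Hypothetical signaling helper (e.g. a socket.io emit); not part of WebRTC itself.
declare function sendToNextViewer(desc: RTCSessionDescription | null): void;

fromParent.ontrack = (event) => {
  // Reuse each incoming track as the outgoing source for the child peer.
  toChild.addTrack(event.track, ...event.streams);
};

toChild.onnegotiationneeded = async () => {
  // Adding tracks triggers renegotiation; offer the forwarded stream to the child.
  const offer = await toChild.createOffer();
  await toChild.setLocalDescription(offer);
  sendToNextViewer(toChild.localDescription);
};
```

The answer coming back and ICE candidate exchange are omitted; the point is only that each viewer's uplink, not the broadcaster's, feeds the next viewer.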
I'm developing a WebRTC broadcasting system using the Kurento Media Server. Kurento supports several streaming protocols such as RTSP, WebRTC, and HLS, and it works well in terms of real-time delivery and scaling.
However, Kurento doesn't support RTMP, which is what YouTube and Twitch use now. One problem I'm facing is the number of concurrent users this can handle.
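For a rough idea of what the server side looks like, here is a hedged sketch using the kurento-client Node library, along the lines of Kurento's one-to-many tutorial; the WebSocket URI is an assumption for your own deployment, and ICE candidate exchange is omitted:

```typescript
import kurento from 'kurento-client'; // npm: kurento-client; assumes a running Kurento Media Server

async function startBroadcast(presenterOffer: string, viewerOffers: string[]): Promise<string> {
  const client = await kurento('ws://localhost:8888/kurento'); // media server URI: adjust to your setup
  const pipeline = await client.create('MediaPipeline');

  // One endpoint receives the presenter's WebRTC stream...
  const presenter = await pipeline.create('WebRtcEndpoint');
  const presenterAnswer = await presenter.processOffer(presenterOffer);
  await presenter.gatherCandidates();

  // ...and each viewer gets its own endpoint fed from the presenter's endpoint.
  for (const offer of viewerOffers) {
    const viewer = await pipeline.create('WebRtcEndpoint');
    await presenter.connect(viewer);
    await viewer.processOffer(offer);
    await viewer.gatherCandidates();
  }

  return presenterAnswer; // SDP answer to send back to the presenter
}
```

In a real deployment you would also forward onIceCandidate events between each endpoint and its browser.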
Hope it helps.
There is the solution of peer-assisted delivery, meaning the approach is hybrid: both the server and the peers help distribute the resource. That's the approach peer5.com and peercdn.com have taken.
If we're talking specifically about live broadcast, viewers receive the stream partly from the server and partly from each other. Following such a model can save up to ~90% of the server's bandwidth, depending on the bitrate of the live stream and the collaborative uplink of the viewers.
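A toy sketch of that decision, assuming a hypothetical fetchFromPeers helper (real systems like Peer5 handle peer discovery, scheduling, and integrity checking): each media segment is requested from the swarm first, with the CDN as a fallback.

```typescript
// Hybrid (peer-assisted) delivery in miniature: try peers first, fall back to the CDN.
declare function fetchFromPeers(segmentUrl: string, timeoutMs: number): Promise<ArrayBuffer | null>;

async function loadSegment(segmentUrl: string): Promise<ArrayBuffer> {
  const fromPeers = await fetchFromPeers(segmentUrl, 1000); // e.g. over RTCDataChannels
  if (fromPeers) {
    return fromPeers;                        // served by other viewers: saves server bandwidth
  }
  const response = await fetch(segmentUrl);  // fallback to the origin/CDN
  return response.arrayBuffer();
}
```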
Disclaimer: the author works at Peer5.
As has been pretty much covered here, what you are trying to do is not possible with plain, old-fashioned WebRTC (strictly peer-to-peer). As was said earlier, WebRTC connections negotiate encryption keys for each session, so your broadcaster (B) will indeed need to upload its stream as many times as there are attendees.
However, there is a quite simple solution that works very well, and I have tested it: a WebRTC gateway. Janus is a good example. It is completely open source (GitHub repo here).
This works as follows: your broadcaster contacts the gateway (Janus), which speaks WebRTC, so there is a key negotiation: B transmits securely (encrypted streams) to Janus.
Now, when attendees connect, they connect to Janus: again, WebRTC negotiation, secured keys, etc. From then on, Janus will emit the streams back to each attendee.
This works well because the broadcaster (B) only uploads its stream once, to Janus. Janus decrypts the data using its own key, has access to the raw data (that is, RTP packets), and can emit those packets back to each attendee (Janus takes care of encryption for you). And since you put Janus on a server, it has great upload bandwidth, so you will be able to stream to many peers.
So yes, it does involve a server, but that server speaks WebRTC and you "own" it: you implement the Janus part, so you don't have to worry about data corruption or a man in the middle. Well, unless your server is compromised, of course. But there is only so much you can do.
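The broadcaster side of that flow is ordinary WebRTC; a minimal sketch, assuming a hypothetical /gateway/publish signaling endpoint (Janus actually exposes its own JSON API, and ships a janus.js helper for it):

```typescript
// Broadcaster (B) uploads its stream exactly once, to the gateway.
async function publishToGateway(): Promise<void> {
  const pc = new RTCPeerConnection();
  const media = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
  media.getTracks().forEach((track) => pc.addTrack(track, media));

  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  // Hand the SDP offer to the gateway; DTLS/SRTP keys are negotiated only between B and the gateway.
  const res = await fetch('/gateway/publish', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ sdp: pc.localDescription }),
  });
  const { sdp } = await res.json();          // the gateway's SDP answer
  await pc.setRemoteDescription(sdp);
}
```

Each attendee then does the mirror image of this exchange with the gateway to receive the stream.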
To show you how easy it is to use: in Janus you have a function called incoming_rtp() (and incoming_rtcp()) that gives you a pointer to the RT(C)P packets. You can then send them to each attendee (they are stored in sessions, which Janus makes very easy to use). Look here for one implementation of the incoming_rtp() function; a couple of lines below you can see how to transmit the packets to all attendees, and here you can see the actual function that relays an RTP packet.
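Janus plugins are written in C, so the snippet below is not the Janus API: it is just a TypeScript sketch of what that relay step amounts to, i.e. looping over the attendee sessions and re-sending each incoming packet (all names hypothetical):

```typescript
// Conceptual relay loop: the gateway gets a decrypted RTP packet from the
// broadcaster and forwards it to every attendee session it is tracking.
interface AttendeeSession {
  send(packet: Uint8Array): void; // hypothetical per-attendee sender; the gateway re-encrypts per peer
}

const attendees = new Set<AttendeeSession>();

function onIncomingRtp(packet: Uint8Array): void {
  for (const attendee of attendees) {
    attendee.send(packet);
  }
}
```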
It all works pretty well, and the documentation is fairly easy to read and understand. I suggest you start with the "echotest" example: it is the simplest, and you can understand the inner workings of Janus. I also suggest you edit the echo test file to make your own, because there is a lot of redundant code to write, so you might as well start from a complete file.
Have fun! Hope I helped.
My master's thesis is focused on the development of a hybrid CDN/P2P live streaming protocol using WebRTC. I've published my first results at http://bem.tv
Everything is open source and I'm looking for contributors! :-)
The answer from Angel Genchev seems to be correct; however, there is a theoretical architecture that allows low-latency broadcasting via WebRTC. Imagine B (the broadcaster) streams to A1 (attendee 1). Then A2 (attendee 2) connects. Instead of streaming from B to A2, A1 starts forwarding the video it receives from B to A2. If A1 disconnects, A2 starts receiving from B.
This architecture would work if there were no latencies or connection timeouts, so it is right in theory but not in practice.
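For completeness, a rough sketch of that chained architecture, with connectTo standing in for hypothetical signaling that opens a receive-only connection to a given peer: A2 first attaches to A1 and falls back to B when A1 drops.

```typescript
// Chained relay with failover: receive from an upstream viewer, fall back to the broadcaster.
declare function connectTo(peerId: string): Promise<RTCPeerConnection>;

async function watch(upstreamId: string, broadcasterId: string): Promise<void> {
  let pc = await connectTo(upstreamId); // e.g. A2 receives the relayed stream from A1
  pc.onconnectionstatechange = async () => {
    if (pc.connectionState === 'disconnected' || pc.connectionState === 'failed') {
      pc = await connectTo(broadcasterId); // upstream peer is gone: A2 now receives from B
    }
  };
}
```

The reconnection step is exactly where the latencies and timeouts mentioned above bite.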
At the moment I am using a server-side solution.