webrtc

Windows: Feed a Webrtc Stream to a Virtual Driver

对着背影说爱祢 submitted on 2020-06-29 06:43:30
Question: I have a virtual webcam installed on Windows 10 and I am using this project (link) to get the remote WebRTC stream. My goal is to send the stream to my virtual driver instead of to a video element. Here is the handler for the HTML element: setTheirVideo : function (stream) { var video = document.getElementById('their-video'); if (typeof video.srcObject == "object") { video.srcObject = stream; } else { video.src = URL.createObjectURL(stream); } }, Where and how would I set the stream to be sent to…
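For context, the srcObject fallback in the snippet can be written as a small helper (a sketch; attachStream is a hypothetical name, not part of the linked project). Note that page JavaScript alone cannot feed a kernel-mode virtual driver; a native bridge application is needed on the Windows side.

```javascript
// Attach a MediaStream to a video element, preferring the modern
// srcObject property and falling back to createObjectURL on old browsers.
function attachStream(video, stream) {
  if ("srcObject" in video) {
    video.srcObject = stream; // modern path, works in current browsers
  } else {
    // deprecated path for browsers without srcObject support
    video.src = URL.createObjectURL(stream);
  }
  return video;
}
```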

Custom video capture native webrtc

◇◆丶佛笑我妖孽 submitted on 2020-06-29 04:38:18
Question: According to a webrtc-discuss group topic at Google, cricket::VideoCapturer will be deprecated soon. To customize a video source, we should implement VideoTrackSourceInterface. I tried implementing the interface and it didn't work. When I have a frame, I call the event OnFrame(const webrtc::VideoFrame& frame) as follows: void StreamSource::OnFrame(const webrtc::VideoFrame& frame) { rtc::scoped_refptr<webrtc::VideoFrameBuffer> buffer(frame.video_frame_buffer());…

WebRTC stuck in connecting state

会有一股神秘感。 submitted on 2020-06-28 04:03:31
Question: I have successfully communicated the offer, answer and ICE candidates for a WebRTC connection from A to B. At this point, the connection is stuck in the "connecting" state. The initiator (A) seems to time out after a while and switches to the "failed" state, whereas its remote (B) stays in the "connecting" state permanently. Any help would be very appreciated. Creation of peer (A and B): let peer = new RTCPeerConnection({ iceServers: [ { urls: [ "stun:stun1.l.google.com:19302",…
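A connection that never leaves "connecting" usually means ICE cannot find a working candidate pair: candidates must be exchanged in both directions, and across NATs a TURN server is typically required, since STUN alone only discovers public addresses and cannot relay media. A small sketch of a configuration check (hasTurnServer is a hypothetical helper; server values are illustrative):

```javascript
// Return true if an RTCConfiguration includes at least one TURN server,
// which is what actually relays media when direct paths fail.
function hasTurnServer(config) {
  return (config.iceServers || []).some(server => {
    const urls = Array.isArray(server.urls) ? server.urls : [server.urls];
    return urls.some(u => u.startsWith("turn:") || u.startsWith("turns:"));
  });
}

// Example: STUN for address discovery plus a TURN fallback for relaying.
const exampleConfig = {
  iceServers: [
    { urls: ["stun:stun1.l.google.com:19302"] },
    { urls: "turn:turn.example.org:3478", username: "user", credential: "pass" },
  ],
};
```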

WebRTC “perfect negotiation” issues

时光毁灭记忆、已成空白 submitted on 2020-06-28 03:59:13
Question: I have been trying to implement WebRTC "perfect negotiation" as explained in this blog post. Unfortunately, I very often trigger errors on the polite side of the conversation (in the following linked code, this is the last peer to join). The two most frequent errors are InvalidStateError: Cannot rollback local description in stable, caused by this line, and DOMException: "Cannot set local offer in state have-remote-offer", triggered here and caused by this line. That second error is…
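The collision handling in perfect negotiation boils down to one predicate: on an incoming offer, the impolite peer ignores it while a collision is in progress, and the polite peer rolls back instead. Attempting a rollback when there is no collision (i.e. in the "stable" state) is one way to get the InvalidStateError above. The decision can be sketched as a pure function (names are illustrative):

```javascript
// Decide whether an incoming offer should be ignored.
// polite: role assigned out of band (e.g. last peer to join is polite);
// makingOffer: true while we are producing our own offer;
// signalingState: the current RTCPeerConnection.signalingState.
function shouldIgnoreOffer(polite, makingOffer, signalingState) {
  const offerCollision = makingOffer || signalingState !== "stable";
  return !polite && offerCollision; // impolite peer ignores colliding offers
}
```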

addIceCandidate with argument null result in error

痴心易碎 submitted on 2020-06-27 17:54:28
Question: I am trying to learn WebRTC. I had achieved connecting two RTCPeerConnections in the same page, and I am now attempting to separate them into two pages and connect them. However, after the code was written and the offer and answer exchanged, I noticed that addIceCandidate() on initiator.html will always throw this with a null argument: Error at addIceCandidate from queue: TypeError: Failed to execute 'addIceCandidate' on 'RTCPeerConnection': Candidate missing values for both sdpMid and sdpMLineIndex at…
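A null (or empty) candidate is how many signalling schemes mark end-of-candidates, and some browser versions reject it from addIceCandidate with exactly this TypeError. A common workaround is to guard before calling addIceCandidate (a sketch; isUsableCandidate and handleRemoteCandidate are hypothetical names):

```javascript
// Only forward real candidates; treat null/empty as the end-of-candidates marker.
function isUsableCandidate(c) {
  if (!c || !c.candidate) return false;       // null object or empty candidate string
  return c.sdpMid != null || c.sdpMLineIndex != null; // the error fires when both are missing
}

async function handleRemoteCandidate(pc, c) {
  if (!isUsableCandidate(c)) return;          // silently drop the terminator
  await pc.addIceCandidate(c);
}
```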

python webrtc voice activity detection is wrong

让人想犯罪 __ submitted on 2020-06-27 11:17:36
Question: I need to do voice activity detection as a step towards classifying audio files. Basically, I need to know with certainty whether a given audio file contains spoken language. I am using py-webrtcvad, which I found on GitHub and which is scarcely documented: https://github.com/wiseman/py-webrtcvad Thing is, when I try it on my own audio files, it works fine with the ones that have speech, but it keeps yielding false positives when I feed it other types of audio (like music or bird sounds), even if I set aggressiveness to…
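As background, the WebRTC VAD distinguishes voiced audio from silence, not speech from music, so non-speech sounds with speech-like energy will test positive frame by frame even at the highest aggressiveness. A common mitigation is to aggregate per-frame verdicts and require a minimum voiced fraction over the whole clip. The aggregation step can be sketched as follows (JavaScript here for consistency with the rest of this page; the threshold is illustrative, not from the library):

```javascript
// Given per-frame VAD verdicts (true = voiced), decide whether the whole
// clip should count as speech by requiring a minimum voiced fraction.
function clipHasSpeech(frameVerdicts, minVoicedFraction = 0.6) {
  if (frameVerdicts.length === 0) return false;
  const voiced = frameVerdicts.filter(Boolean).length;
  return voiced / frameVerdicts.length >= minVoicedFraction;
}
```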

Converting a Bitmap to a WebRTC VideoFrame

我怕爱的太早我们不能终老 submitted on 2020-06-26 12:06:23
Question: I'm working on a WebRTC-based app for Android using the native implementation (org.webrtc:google-webrtc:1.0.24064), and I need to send a series of bitmaps along with the camera stream. From what I understood, I can derive from org.webrtc.VideoCapturer, do my rendering in a separate thread, and send video frames to the observer; however, it expects them to be YUV420 and I'm not sure I'm doing the conversion correctly. This is what I currently have: CustomCapturer.java Are there any examples I…
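For reference, the per-pixel math behind an RGB → YUV420 conversion uses the BT.601 studio-swing integer coefficients (in production, an optimized path such as libyuv is preferable to per-pixel code; the helper name below is illustrative):

```javascript
// Integer BT.601 (studio swing) RGB -> YUV, as commonly used when filling
// the Y, U and V planes of an I420 frame. Each output channel is 0..255.
function rgbToYuv601(r, g, b) {
  const y = ((66 * r + 129 * g + 25 * b + 128) >> 8) + 16;
  const u = ((-38 * r - 74 * g + 112 * b + 128) >> 8) + 128;
  const v = ((112 * r - 94 * g - 18 * b + 128) >> 8) + 128;
  return { y, u, v };
}
```

In I420 the U and V planes are additionally subsampled 2x2, so one chroma pair covers a 2x2 block of pixels; getting that subsampling wrong is a frequent cause of garbled colors.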

WebRTC connects on local connection, but fails over internet

[亡魂溺海] submitted on 2020-06-26 04:14:34
Question: I have some test code that I'm using to try to learn the basics of WebRTC. This test code works on a LAN, but not over the internet, even if I use a TURN server (one side shows the status "checking" and the other "failed"). I can see that there are ICE candidates in the SDPs, so I don't need to send them explicitly (right?). The code writes a lot of debug info to the console, so I can tell my signalling server is working. I'm stuck: what do I need to do differently in my code to enable it…
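Works-on-LAN but fails-over-internet is the classic symptom of a relay problem: either the TURN credentials are rejected or relay candidates are never generated. One useful diagnostic is to force relay-only ICE so the connection must go through TURN; if it still fails, the TURN server itself is the problem. A sketch (hypothetical helper, illustrative server values):

```javascript
// Build a relay-only configuration for testing: if a call fails with this
// too, the TURN server or its credentials are at fault.
function makeRelayOnlyConfig(turnUrl, username, credential) {
  return {
    iceServers: [{ urls: turnUrl, username, credential }],
    iceTransportPolicy: "relay", // only allow TURN relay candidates
  };
}
```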

Understanding SFU's, TURN servers in WebRTC

左心房为你撑大大i submitted on 2020-06-25 05:48:06
Question: If I am building a WebRTC app and using a Selective Forwarding Unit (SFU) media server, does this mean that I will have no need for STUN/TURN servers? From what I understand, STUN servers are used by clients to discover their public IP/port, and TURN servers are used to relay data between clients when they are unable to connect directly to each other via STUN. My question is: if I deploy my SFU media server with a public address, does this eliminate the need for STUN and TURN servers? Since…