video-streaming

OpenTok, How to switch publisher source from screen capturer to camera and vice versa?

坚强是说给别人听的谎言 submitted 2021-02-08 15:18:32

Question: I am trying to implement a feature that allows switching between screen-sharing capture and the camera in the middle of a video session in an Android app. I use OpenTok SDK 2.5 for Android. I have researched the OpenTok examples (OpenTok samples) and found that they show only one feature per sample. Question: Should the code supply two Publishers (one equipped with a camera capturer and one with a screen-sharing capturer) and switch between them, for example session.unpublish(); if (currentIsCamera) { session.publish…
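The real OpenTok 2.5 Android API is Java (session.unpublish() / session.publish()); the pure-Python sketch below only models the order of operations and the state the app must track when swapping publishers. The names SessionModel and switch_source are hypothetical, and the assumption (taken from the question's own approach) is that you keep two publisher sources and unpublish before republishing.

```python
# Sketch of the publisher-switch sequencing, assuming two sources
# ("camera" and "screen") and an unpublish-then-publish swap.

class SessionModel:
    """Hypothetical stand-in for an OpenTok Session: records calls."""
    def __init__(self):
        self.current = None   # "camera", "screen", or None
        self.log = []         # the calls you would issue on the real Session

    def publish(self, source):
        self.log.append(("publish", source))
        self.current = source

    def unpublish(self):
        self.log.append(("unpublish", self.current))
        self.current = None

def switch_source(session):
    """Unpublish the active publisher, then publish the other source."""
    target = "screen" if session.current == "camera" else "camera"
    session.unpublish()      # tears down the old stream for subscribers
    session.publish(target)  # in the real SDK, a fresh Publisher per publish
    return target
```

In the real app the switch happens inside the Session's callbacks, so each unpublish should complete before the next publish is issued; the sketch only captures that ordering.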

iOS — How to change video resolution in webRTC?

浪子不回头ぞ submitted 2021-02-08 15:16:27

Question: I am trying to change the local video resolution in WebRTC. I used the following method to create the local video track: -(RTCVideoTrack *)createLocalVideoTrack { RTCVideoTrack *localVideoTrack = nil; RTCMediaConstraints *mediaConstraints = [[RTCMediaConstraints alloc] initWithMandatoryConstraints:nil optionalConstraints:nil]; RTCAVFoundationVideoSource *source = [self.factory avFoundationVideoSourceWithConstraints:mediaConstraints]; localVideoTrack = [self.factory videoTrackWithSource:source trackId:@…
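Resolution constraints ultimately reduce to clamping the capture size to a maximum width/height while preserving aspect ratio, which is the arithmetic a video source applies once constraints are set. The sketch below is a language-neutral illustration of that clamping (the real fix on iOS goes through RTCMediaConstraints or the capture format, not this function); the even-rounding is because most H.264/VP8 encoders require even dimensions.

```python
def constrain_resolution(width, height, max_width, max_height):
    """Scale (width, height) down to fit within the max bounds,
    preserving aspect ratio and rounding to even dimensions
    (a common requirement of H.264/VP8 encoders)."""
    scale = min(max_width / width, max_height / height, 1.0)  # never upscale
    w = round(width * scale) // 2 * 2
    h = round(height * scale) // 2 * 2
    return w, h
```

For example, constraining a 1920x1080 capture to a 640x480 box yields 640x360, since the width is the binding constraint.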

No data written to stdin or stderr from ffmpeg

*爱你&永不变心* submitted 2021-02-08 10:38:49

Question: I have a dummy client that is supposed to simulate a video recorder; on this client I want to simulate a video stream. I have gotten far enough that I can create a video from bitmap images that I create in code. The dummy client is a Node.js application running on a Raspberry Pi 3 with the latest version of Raspbian Lite. In order to use the video I have created, I need ffmpeg to dump the video to pipe:1. The problem is that I need -f rawvideo as an input parameter, or else ffmpeg can't…
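Raw video carries no container header, so ffmpeg cannot probe it: the format, pixel format, frame size, and rate must all be declared on the input side before -i pipe:0. The helper below just builds that argument list (the questioner's client is Node.js, where the same list minus "ffmpeg" would be passed to child_process.spawn); the default pix_fmt and output format here are assumptions, not taken from the question.

```python
def ffmpeg_pipe_args(width, height, fps, pix_fmt="rgb24", out_format="h264"):
    """Build the argument list for an ffmpeg process that reads raw
    frames from stdin (pipe:0) and writes encoded output to stdout
    (pipe:1). -f rawvideo plus the frame geometry is mandatory on the
    input side because raw frames have no header to probe."""
    return [
        "ffmpeg",
        "-f", "rawvideo",            # input has no container
        "-pix_fmt", pix_fmt,         # layout of each raw frame
        "-s", f"{width}x{height}",   # frame size, e.g. 640x480
        "-r", str(fps),              # input frame rate
        "-i", "pipe:0",              # read frames from stdin
        "-f", out_format,            # output muxer/codec format
        "pipe:1",                    # write encoded stream to stdout
    ]
```

With this ordering, every option before -i applies to the raw input, and everything after it to the encoded output, which is the distinction the question is running into.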

Extract frames as images from an RTMP stream in real-time

给你一囗甜甜゛ submitted 2021-02-08 10:22:03

Question: I am streaming short videos (4 or 5 seconds) encoded in H.264 at 15 fps in VGA quality from different clients to a server using RTMP, which produces an FLV file. I need to analyse the frames from the video as images as soon as possible, so I need the frames to be written as PNG images as they are received. Currently I use Wowza to receive the streams, and I have tried using the transcoder API to access the individual frames and write them to PNGs. This partially works, but there is about a second…
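One common alternative to a transcoder callback is to point a separate ffmpeg process at the RTMP URL and let its image-sequence muxer write one PNG per decoded frame. The sketch below only builds that command line; the RTMP URL and output pattern are placeholders, and this is offered as a generic ffmpeg approach, not the Wowza transcoder API the question is using.

```python
def frame_dump_args(rtmp_url, fps=15, pattern="frame%05d.png"):
    """Argument list for an ffmpeg process that pulls an RTMP stream
    and writes every frame as a numbered PNG as it is decoded.
    -fflags nobuffer reduces input buffering, which matters when the
    goal is minimal latency between receipt and the PNG on disk."""
    return [
        "ffmpeg",
        "-fflags", "nobuffer",   # don't buffer input before decoding
        "-i", rtmp_url,          # e.g. rtmp://host/app/streamName
        "-vf", f"fps={fps}",     # emit at the source frame rate (15 here)
        "-f", "image2",          # image-sequence muxer
        pattern,                 # frame00001.png, frame00002.png, ...
    ]
```

At 15 fps this yields one PNG roughly every 67 ms, so any residual delay comes from the RTMP ingest path rather than the write-out.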

How to get frame data in AppRTC iOS app for video modifications?

眉间皱痕 submitted 2021-02-07 20:31:52

Question: I am currently trying to make some modifications to the incoming WebRTC video stream in the AppRTC app for iOS in Swift (which in turn is based on this Objective-C version). To do so, I need access to the data stored in the frame objects of class RTCI420Frame (a basic class in the Objective-C implementation of libWebRTC). In particular, I need an array of bytes, [UInt8], and the size of the frames. This data is to be used for further processing and the addition of some filters. The…
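Whatever the Swift bridging ends up looking like, the byte layout being read is fixed by the I420 (YUV 4:2:0 planar) format itself: a full-resolution luma plane followed by two quarter-resolution chroma planes. The arithmetic below is format fact, not libWebRTC API, and is what you need to slice a frame buffer into its planes.

```python
def i420_plane_sizes(width, height):
    """Byte sizes of the three planes of an I420 (YUV 4:2:0 planar)
    frame, the layout RTCI420Frame exposes: a full-resolution Y (luma)
    plane, then quarter-resolution U and V (chroma) planes."""
    y = width * height
    uv = (width // 2) * (height // 2)   # chroma subsampled 2x2
    return y, uv, uv

def i420_total_bytes(width, height):
    """Total buffer size: width * height * 3 / 2 bytes per frame."""
    y, u, v = i420_plane_sizes(width, height)
    return y + u + v
```

For a VGA frame (640x480) this gives a 307,200-byte Y plane and two 76,800-byte chroma planes, 460,800 bytes in total; the offsets of U and V within a contiguous buffer follow directly from these sizes (subject to any per-row stride padding the implementation adds).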

Comparing Media Source Extensions (MSE) with WebRTC

南楼画角 submitted 2021-02-07 07:15:29

Question: What are the fundamental differences between Media Source Extensions and WebRTC? If I may project my own understanding for a moment: WebRTC includes RTCPeerConnection, which handles getting streams from Media Streams and passing them into a protocol for streaming to connected peers of the application. It seems that under the hood WebRTC abstracts away a lot of the bigger issues, like codecs and transcoding. Would this be a correct assessment? Where do Media Source Extensions fit into things? I…

Restrict S3 object access to requests from a specific domain

痴心易碎 submitted 2021-02-06 09:22:09

Question: I have video files in S3 and a simple player that loads the files via an src attribute. I want the videos to be viewable only through my site and not directly via the S3 URL (which might be visible in the source code of the page or accessible via right-clicking). Looking through the AWS docs, it seems the only way I can do this over HTTP is to append a signature and expiration date to a query string, but this isn't sufficient. Other access restrictions refer to AWS users. How do I get around this, or…
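The signature-plus-expiration scheme the AWS docs describe is an expiring HMAC over the resource path, minted server-side per request so the raw S3 URL is never useful on its own. The sketch below is a deliberately simplified illustration of that idea, not AWS Signature Version 4 (the real presigned-URL algorithm is considerably more involved); the secret, paths, and parameter names are all hypothetical.

```python
import hashlib
import hmac
from urllib.parse import urlencode

SECRET = b"server-side-secret"   # hypothetical key; never sent to clients

def sign_url(path, expires_at, secret=SECRET):
    """Append an expiry timestamp and an HMAC-SHA256 signature to a
    resource path. Simplified illustration of expiring signed links;
    real S3 presigned URLs use AWS Signature Version 4."""
    msg = f"{path}:{expires_at}".encode()
    sig = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return f"{path}?{urlencode({'expires': expires_at, 'sig': sig})}"

def verify(path, expires_at, sig, now, secret=SECRET):
    """Reject expired links and links whose signature doesn't match
    (i.e. whose path or expiry was tampered with)."""
    if now > expires_at:
        return False
    msg = f"{path}:{expires_at}".encode()
    expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

Because the link expires, copying it out of the page source only works briefly; a determined user can still fetch the file while the link is valid, which is why this controls access rather than preventing downloading outright.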

Http Media Streaming Server

老子叫甜甜 submitted 2021-02-05 20:16:45

Question: I have developed a video streaming application with the RED5 media server (RTMP). Instead of RTMP, I need to stream live video through HTTP. Is there any open-source HTTP media server? Is there any open-source server that supports both RTMP and HTTP? Thanks in advance. Answer 1: Primarily, HTTP and RTMP are different protocols. You won't serve RTMP inside HTTP (although you can do this as a tunneling solution). Several approaches to HTTP streaming exist, such as HLS, DASH, Smooth Streaming, and progressive download. If you need…
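What makes HLS an HTTP-native replacement for RTMP is that it is just files: a text playlist plus media segments, all served by any web server. The sketch below renders a minimal HLS media playlist (.m3u8) to show how little is involved; the segment names are placeholders, and the tags used (#EXTM3U, #EXT-X-TARGETDURATION, #EXTINF, #EXT-X-ENDLIST) are standard HLS playlist tags.

```python
def hls_playlist(segments, target_duration=10, vod=True):
    """Render a minimal HLS media playlist. `segments` is a list of
    (uri, duration_seconds) pairs; the playlist and the segments it
    names are served over plain HTTP, which is the whole point of
    HLS as an RTMP replacement."""
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:3",
        f"#EXT-X-TARGETDURATION:{target_duration}",  # max segment length
        "#EXT-X-MEDIA-SEQUENCE:0",
    ]
    for uri, duration in segments:
        lines.append(f"#EXTINF:{duration:.3f},")     # this segment's length
        lines.append(uri)
    if vod:
        lines.append("#EXT-X-ENDLIST")  # marks the playlist complete (VOD)
    return "\n".join(lines) + "\n"
```

For live streaming the server rewrites this playlist as new segments arrive and omits #EXT-X-ENDLIST, so the player keeps polling for updates; that polling loop is what replaces RTMP's persistent connection.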
