H.264

Publish webcam feed to Flash Media Server

家住魔仙堡 submitted on 2019-12-09 04:31:27
I have a fairly high-end webcam (Sony SNC-RZ25N) that I need to rebroadcast using Flash Media Server. I can get the picture as MPEG-4 (not H.264), so I need to transcode to H.264 and publish at multiple bitrates to FMS. The only solution I have come up with so far is to transcode the stream using ffmpeg, also use ffmpeg to down-convert the stream (for the multiple bitrates), and then publish all of these transcoded streams to FMS via custom Java code (using Red5). Surely there is a better way. Flash Media Live Encoder is not going to work: the camera is on the network, not …

H.264 stream header

微笑、不失礼 submitted on 2019-12-09 00:50:30
Question: I have a corrupted video stream with this header / these parameters at the beginning: 00 00 00 01 67 64 00 1E AC D9 40 B0 33 FB C0 44 00 00 03 00 04 00 00 03 00 C8 3C 58 B6 58 00 00 00 01 68 EB EC B2 2C. I'm trying to figure out the actual values, but all I have guessed so far is: 67 – AVC / H264; 64 00 – High Profile; 1E – Level 30 (in decimal). Does anybody know what the other bytes stand for? At least, how do I calculate the video dimensions (width x height)? I thought they should be decimal numbers, but apparently …
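Those remaining bytes can be decoded with an Exp-Golomb bit reader. Below is a minimal Python sketch (Python rather than the asker's language, for brevity); the field names follow the H.264 spec's seq_parameter_set_rbsp() syntax, scaling lists and some pic-order-count variants are skipped, and the cropping math assumes 4:2:0 chroma. It strips the 00 00 03 emulation-prevention bytes and reads the profile, level and dimensions out of exactly this SPS:

```python
def strip_emulation_prevention(data):
    """Drop the 00 00 03 emulation-prevention bytes to recover the raw RBSP."""
    out, i = bytearray(), 0
    while i < len(data):
        if i + 2 < len(data) and data[i] == data[i + 1] == 0 and data[i + 2] == 3:
            out += data[i:i + 2]
            i += 3
        else:
            out.append(data[i])
            i += 1
    return bytes(out)


class BitReader:
    def __init__(self, data):
        self.data, self.pos = data, 0

    def u(self, n):                       # n-bit unsigned integer
        v = 0
        for _ in range(n):
            v = (v << 1) | (self.data[self.pos >> 3] >> (7 - (self.pos & 7)) & 1)
            self.pos += 1
        return v

    def ue(self):                         # unsigned Exp-Golomb code
        zeros = 0
        while self.u(1) == 0:
            zeros += 1
        return (1 << zeros) - 1 + self.u(zeros)

    def se(self):                         # signed Exp-Golomb code
        k = self.ue()
        return (k + 1) // 2 if k & 1 else -(k // 2)


def parse_sps(nal):
    """Parse profile, level and coded dimensions from an SPS NAL unit."""
    r = BitReader(strip_emulation_prevention(nal[1:]))   # skip the 0x67 header
    profile_idc = r.u(8)
    r.u(8)                                # constraint flags + reserved zero bits
    level_idc = r.u(8)
    r.ue()                                # seq_parameter_set_id
    if profile_idc in (100, 110, 122, 244, 44, 83, 86, 118, 128):
        chroma_format_idc = r.ue()
        if chroma_format_idc == 3:
            r.u(1)                        # separate_colour_plane_flag
        r.ue(); r.ue(); r.u(1)            # bit depths, qpprime_y_zero flag
        if r.u(1):                        # seq_scaling_matrix_present_flag
            raise NotImplementedError("scaling lists not handled in this sketch")
    r.ue()                                # log2_max_frame_num_minus4
    poc_type = r.ue()
    if poc_type == 0:
        r.ue()                            # log2_max_pic_order_cnt_lsb_minus4
    elif poc_type == 1:
        r.u(1); r.se(); r.se()            # delta_pic_order_always_zero + offsets
        for _ in range(r.ue()):
            r.se()
    r.ue()                                # max_num_ref_frames
    r.u(1)                                # gaps_in_frame_num_value_allowed_flag
    width_in_mbs = r.ue() + 1
    height_in_map_units = r.ue() + 1
    frame_mbs_only = r.u(1)
    if not frame_mbs_only:
        r.u(1)                            # mb_adaptive_frame_field_flag
    r.u(1)                                # direct_8x8_inference_flag
    crop_l = crop_r = crop_t = crop_b = 0
    if r.u(1):                            # frame_cropping_flag
        crop_l, crop_r, crop_t, crop_b = r.ue(), r.ue(), r.ue(), r.ue()
    # Crop units below assume 4:2:0 (chroma_format_idc == 1), the common case.
    width = width_in_mbs * 16 - 2 * (crop_l + crop_r)
    height = ((2 - frame_mbs_only) * height_in_map_units * 16
              - 2 * (2 - frame_mbs_only) * (crop_t + crop_b))
    return {"profile_idc": profile_idc, "level_idc": level_idc,
            "width": width, "height": height}


# The SPS from the question, start code removed:
sps = bytes.fromhex("6764001EACD940B033FBC044000003000400000300C83C58B658")
print(parse_sps(sps))
# → {'profile_idc': 100, 'level_idc': 30, 'width': 704, 'height': 396}
```

So the answer to "why not decimal": the dimensions are not stored as plain numbers at all, but as Exp-Golomb-coded macroblock counts (here 44×16 = 704 wide, 25×16 = 400 high, cropped by 4 rows to 396).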

How to play raw NAL units in Android exoplayer?

人盡茶涼 submitted on 2019-12-08 18:50:20
Question: I know that ExoPlayer has support for RTSP, but I need C++ code that works with players on lots of OSs, so I need to parse the RTP packets into NAL units in C++ before passing them to ExoPlayer. I found a way to decode RTP packets using live555 and extract their NAL units. According to ExoPlayer's documentation: Components common to all ExoPlayer implementations are: a MediaSource that defines the media to be played, loads the media, and from which the loaded media can be read. A MediaSource is injected …
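For reference, the depacketizing step described here (the part live555 handles internally) is small enough to sketch. The following is a simplified illustration, written in Python rather than C++ for brevity: RFC 3550 header parsing plus the two most common RFC 6184 H.264 payload modes (single NAL unit and FU-A fragmentation); STAP-A aggregation and sequence-number reordering are deliberately left out.

```python
import struct

def parse_rtp(packet):
    """Parse a minimal RTP header (RFC 3550); return (header_fields, payload)."""
    if len(packet) < 12:
        raise ValueError("packet too short for an RTP header")
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", packet[:12])
    offset = 12 + 4 * (b0 & 0x0F)         # skip the CSRC list if present
    return (dict(version=b0 >> 6, marker=(b1 >> 7) & 1,
                 payload_type=b1 & 0x7F, seq=seq, timestamp=ts, ssrc=ssrc),
            packet[offset:])

def payload_to_nal(payload, fu_buffer):
    """Turn one H.264 RTP payload (RFC 6184) into a complete NAL unit, or None.

    Handles single-NAL-unit mode and FU-A fragments; fu_buffer is a bytearray
    reused across calls to reassemble fragments of one NAL unit.
    """
    nal_type = payload[0] & 0x1F
    if 1 <= nal_type <= 23:               # single NAL unit packet
        return bytes(payload)
    if nal_type == 28:                    # FU-A fragment
        fu_header = payload[1]
        start, end = fu_header >> 7, (fu_header >> 6) & 1
        if start:                         # rebuild the original NAL header byte
            fu_buffer.clear()
            fu_buffer.append((payload[0] & 0xE0) | (fu_header & 0x1F))
        fu_buffer += payload[2:]
        return bytes(fu_buffer) if end else None
    return None                           # STAP/MTAP aggregation not handled

# Hypothetical packet: 12-byte header (V=2, PT=96) followed by an IDR slice.
pkt = struct.pack("!BBHII", 0x80, 96, 1, 90000, 0x1234) + bytes([0x65, 0x88, 0x84])
hdr, payload = parse_rtp(pkt)
print(hdr["payload_type"], payload_to_nal(payload, bytearray()).hex())
# → 96 658884
```

The resulting NAL units (start-code- or length-prefixed, depending on the sink) are what would then be fed to the player's media source.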

OpenCV IP camera application crashes [h264 @ 0xxxxx] missing picture in access unit

ⅰ亾dé卋堺 submitted on 2019-12-08 16:23:41
Question: I have an OpenCV application in C++. It captures a video stream and saves it to video files using the simple constructs from OpenCV. It works perfectly with my webcam, but it crashes after about ten seconds when I run it to capture the stream from an IP camera. My compile command is: g++ -O3 IP_Camera_linux.cpp -o IP_Camera `pkg-config --cflags --libs opencv` My stream from the IP cam is accessed like this: const string Stream = "rtsp://admin:xxxx@192.168.0.101/"; It does run perfectly, shows …

How to deal with cv::VideoCapture decode errors?

旧街凉风 submitted on 2019-12-08 15:35:15
Question: I'm streaming H.264 content from an IP camera using VideoCapture from OpenCV (compiled with ffmpeg support). So far things work OK, but every once in a while I get decoding errors (from ffmpeg, I presume):
[h264 @ 0x103006400] mb_type 137 in I slice too large at 26 10
[h264 @ 0x103006400] error while decoding MB 26 10
[h264 @ 0x103006400] negative number of zero coeffs at 25 5
[h264 @ 0x103006400] error while decoding MB 25 5
[h264 @ 0x103006400] cbp too large (421) at 35 13
[h264 @ …
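Those messages are emitted to stderr by ffmpeg's h264 decoder, not surfaced through any OpenCV API. One pragmatic option (an assumption about your setup, not a built-in cv::VideoCapture feature) is to redirect stderr and watch for decode errors, dropping or flagging frames when errors spike. A small Python sketch of the parsing half, using the exact messages quoted above:

```python
import re

# Matches ffmpeg h264 decoder diagnostics such as:
#   [h264 @ 0x103006400] error while decoding MB 26 10
H264_LOG = re.compile(
    r"\[h264 @ 0x[0-9a-f]+\]\s+(?P<msg>.*?)"
    r"(?:\s+(?:at|MB)\s+(?P<mb_x>\d+)\s+(?P<mb_y>\d+))?$"
)

def classify(line):
    """Return (message, (mb_x, mb_y) or None) for a decoder log line, else None."""
    m = H264_LOG.match(line)
    if m is None:
        return None
    mb = (int(m["mb_x"]), int(m["mb_y"])) if m["mb_x"] else None
    return m["msg"], mb

print(classify("[h264 @ 0x103006400] cbp too large (421) at 35 13"))
# → ('cbp too large (421)', (35, 13))
```

Counting classified lines per captured frame gives a crude "corruption score" for deciding whether to keep the frame; the cleaner long-term fix is usually transport-level (e.g. forcing RTSP over TCP so packets are not lost in the first place).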

How to publish a self-made stream with ffmpeg and C++ to an RTMP server?

不羁岁月 submitted on 2019-12-08 14:22:32
Have a nice day, people! I am writing an application for Windows that will capture the screen and send the stream to a Wowza server over RTMP (for broadcasting). My application uses ffmpeg and Qt. I capture the screen with the WinAPI, convert the buffer to YUV444 (because it's simplest), and encode frames as described in the file decoding_encoding.c (from the FFmpeg examples):
///////////////////////////
// Encoder initialization
///////////////////////////
avcodec_register_all();
codec = avcodec_find_encoder(AV_CODEC_ID_H264);
c = avcodec_alloc_context3(codec);
c->width = scr_width;
c->height = scr_height;
c- …

GStreamer - stream H.264 video from a Logitech C920 over TCP

|▌冷眼眸甩不掉的悲伤 submitted on 2019-12-08 10:14:06
Question: I am trying to stream video from a Logitech C920, which outputs H.264 directly. The sending side is a Raspberry Pi and the receiving side is a Windows 7 PC. Over UDP this works flawlessly in GStreamer:
Sender:
gst-launch-1.0 -v v4l2src device=/dev/video0 ! \
video/x-h264,width=1280,height=720,framerate=30/1 ! h264parse ! rtph264pay \
pt=127 config-interval=4 ! udpsink host=$myip port=$myport
Receiver:
gst-launch-1.0 -e -v udpsrc port=5001 ! ^
application/x-rtp, payload=96 ! ^
rtpjitterbuffer ! ^ …

What library is best for an H.264 video stream from an RTSP server?

≡放荡痞女 submitted on 2019-12-08 07:35:33
Question: Does anyone know of an efficient, feature-rich, C#/.NET-supported library for capturing H.264-encoded video streamed from an RTSP server? I'm developing a security application that needs to buffer video for a set amount of time (e.g. 30 seconds) and then, when prompted (via an external trigger), record for n seconds after, so that both what led up to the event and what happened afterwards are captured. So far I've found the LEADTOOLS Multimedia SDK (which can buffer real-time streams with pause/play/fast …

x264 rate control modes

独自空忆成欢 submitted on 2019-12-08 07:27:26
Question: Recently I have been reading the x264 source code; mostly I am concerned with the RC (rate control) part. I am confused about the parameters --bitrate and --vbv-maxrate. When bitrate is set, CBR mode is used at the frame level. If you want to enable MB-level RC, the parameters bitrate, vbv-maxrate and vbv-bufsize should all be set. But I don't know the relationship between bitrate and vbv-maxrate. What is the criterion for the real encoding result when bitrate and vbv-maxrate are both set? And what is the recommended …
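The usual reading (a general rate-control convention, not a quote from the x264 sources): --bitrate sets the long-term average the encoder aims for, while --vbv-maxrate and --vbv-bufsize bound short-term peaks through a decoder-buffer (leaky bucket) model; strict CBR is the special case vbv-maxrate == bitrate. A toy Python sketch with hypothetical numbers, showing how a stream can hit its average exactly yet still violate the VBV cap:

```python
def vbv_check(frame_bits, fps, vbv_maxrate, vbv_bufsize):
    """Toy VBV (leaky-bucket) model: the buffer refills at vbv_maxrate bits/s,
    each decoded frame drains its coded size, and a negative fill level means
    the stream violated the cap. Returns the minimum fill level seen."""
    fill = vbv_bufsize                    # start with a full buffer
    per_frame_in = vbv_maxrate / fps      # bits arriving per frame interval
    min_fill = fill
    for bits in frame_bits:
        fill -= bits                      # the decoder removes one frame
        min_fill = min(min_fill, fill)
        fill = min(fill + per_frame_in, vbv_bufsize)  # refill, capped
    return min_fill

# Two one-second streams at 25 fps, checked against a 3 Mbit/s cap with a
# 3 Mbit buffer: steady 40 kbit frames (~1 Mbit/s) pass easily, but a burst
# of five 800 kbit frames drains the buffer below zero even though a large
# enough averaging window could still call the stream "1-and-a-bit Mbit/s".
fps = 25
steady = [40_000] * 25
burst = [800_000] * 5 + [40_000] * 20
print(vbv_check(steady, fps, 3_000_000, 3_000_000) >= 0)   # → True
print(vbv_check(burst, fps, 3_000_000, 3_000_000) >= 0)    # → False
```

So the two knobs answer different questions: bitrate is "how big is the file per second of video, on average", vbv-maxrate/vbv-bufsize are "can a decoder with this buffer and this input rate play it without stalling".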

Read H264 SPS & PPS NAL bytes using libavformat APIs

拟墨画扇 submitted on 2019-12-08 06:48:05
Question: How do I read the H.264 SPS & PPS NAL bytes using the libavformat APIs? I tried reading video data into an AVPacket structure using the av_read_frame(input_avFormatContext, &avPkt) API from a .mp4 video file (codec: h264). I dumped avPkt->data to a file, but the first frame read is an IDR frame. A file generated with "ffmpeg -i video.mp4 video.h264" will contain the SPS & PPS at the start, before the IDR. I want to extract the raw .h264 video from the .mp4 file and dump it in SPS, PPS, IDR, P1, P2... order. I want to …
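Background on why the dump looks wrong: MP4 stores each sample as length-prefixed NAL units ("avcc" format), with the SPS/PPS kept separately in the stream's extradata (the avcC box), while a raw .h264 file uses Annex B start codes with SPS/PPS in-band. ffmpeg's h264_mp4toannexb bitstream filter performs exactly this conversion. A hedged Python sketch of the core transformation (synthetic data; a real program would take sps/pps from the codec extradata):

```python
import struct

START_CODE = b"\x00\x00\x00\x01"

def avcc_to_annexb(sample, length_size=4):
    """Convert one MP4-style sample (length-prefixed NAL units, as found in
    AVPacket.data for h264-in-mp4) into Annex B (start-code-prefixed) form."""
    out, i = bytearray(), 0
    while i + length_size <= len(sample):
        (n,) = struct.unpack(">I", sample[i:i + length_size].rjust(4, b"\x00"))
        i += length_size
        out += START_CODE + sample[i:i + n]
        i += n
    return bytes(out)

def dump_raw_h264(samples, sps, pps):
    """Yield an Annex B elementary stream: SPS, PPS, then every sample.
    In a real demuxer, sps/pps come from the stream's extradata (the avcC
    box); here they are simply passed in."""
    yield START_CODE + sps
    yield START_CODE + pps
    for sample in samples:
        yield avcc_to_annexb(sample)

# Hypothetical data: dummy SPS/PPS plus one 4-byte-length-prefixed IDR slice.
sps, pps = b"\x67\x64\x00\x1e", b"\x68\xeb\xec\xb2"
sample = struct.pack(">I", 3) + b"\x65\x11\x22"
stream = b"".join(dump_raw_h264([sample], sps, pps))
print(stream.hex())
# → 000000016764001e0000000168ebecb200000001651122
```

On the command line, `ffmpeg -i video.mp4 -c:v copy -bsf:v h264_mp4toannexb video.h264` applies the same transformation without re-encoding, which is the usual answer to this question.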