H.264

FFmpeg conversion FROM UYVY422 TO YUV420P

Submitted by 爱⌒轻易说出口 on 2019-12-07 06:52:18
Question: I have raw video in UYVY422 format and I want to convert it to YUV420p. I'm executing this command: ffmpeg -y -r 25.0 -f rawvideo -s 1920x1080 -pix_fmt uyvy422 -i input.avi -pix_fmt yuv420p -f avi -r 25 -s 1920x1080 output.avi and my output video seems to float (the right side of the video appears at the left edge and drifts from left to right). Has anyone got any idea what I am doing wrong? I tried setting the output video to raw format, but it didn't work... Answer 1: ffmpeg -y -r 25.0
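The drifting picture described above is the classic symptom of a frame-size or stride mismatch: if each frame is read a few bytes off, the image appears to slide sideways. A minimal sketch of a corrected command line, assuming the input really is headerless 1920x1080 UYVY422 at 25 fps (input.raw is a placeholder name):

```shell
# Convert headerless raw UYVY422 frames to YUV420p in an AVI container.
# If the source is actually an .avi file (as the question's input.avi suggests),
# it has a container header and -f rawvideo must be dropped so ffmpeg demuxes it;
# reading a container as raw bytes is one common cause of the sideways drift.
ffmpeg -y -f rawvideo -pix_fmt uyvy422 -s 1920x1080 -r 25 -i input.raw \
       -pix_fmt yuv420p -r 25 output.avi
```

If the drift persists with a truly raw input, double-check that the capture dimensions match exactly; a source that is really 1920x1088 or has row padding will produce the same effect.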

Best way to implement HTML5 video

Submitted by 时光怂恿深爱的人放手 on 2019-12-07 06:11:14
Question: I understand that HTML5 video is way more complicated than its proponents would like us to believe. Safari uses the proprietary H.264 codec, whereas Firefox, Chrome and Opera all support the open-source Theora. Internet Explorer doesn't support either, so it needs a fallback, such as .mov or Flash. I found a superb step-by-step guide for HTML5 video on all these browsers, but now I can't find it anywhere. Very annoying :( What's the best way to implement HTML5 video so
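The codec split described above is usually handled by encoding the same source twice and listing both files as `<source>` entries in the video element (with a Flash object as the last fallback). A hedged sketch of the two encodes with ffmpeg; source.mov and the quality settings are placeholders:

```shell
# H.264/AAC in MP4 for Safari (and IE9+); +faststart moves the index to the
# front of the file so playback can begin before the download finishes.
ffmpeg -i source.mov -c:v libx264 -c:a aac -movflags +faststart video.mp4

# Theora/Vorbis in Ogg for Firefox, Chrome and Opera.
ffmpeg -i source.mov -c:v libtheora -q:v 7 -c:a libvorbis video.ogv
```

The browser then picks the first `<source>` whose type it can play, so ordering the MP4 first avoids Theora being chosen by browsers that support both.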

Android Widevine HLS/DRM support

Submitted by ﹥>﹥吖頭↗ on 2019-12-07 05:23:47
Question: It will soon be 2 years since Google acquired Widevine, the company that provides DRM support for protecting e.g. HLS H.264/AAC streams. According to http://www.widevine.com/ not only Android, but also iPhone/iPad and game consoles like the Wii or PS3 are supported. Does anybody have experience with the Android Widevine DRM? Regards, STeN Answer 1: You must be certified by Google to work with the Widevine APIs. The certification is called CWIP and requires paying a substantial sum and going

What library is best for an H264 video stream streamed from an RTSP server?

Submitted by 落花浮王杯 on 2019-12-07 02:55:33
Does anyone know of an efficient, feature-rich, C# .NET-supported library for capturing H264-encoded video streamed from an RTSP server? I'm developing a security application that needs to buffer video for a set amount of time (e.g. 30 seconds) and then, when prompted via an external trigger, record for n seconds after, so that both what led up to the event and what happened after is captured. So far I've found the LeadTools Multimedia SDK (which can buffer real-time streams with pause/play/fast-forward/etc. functionality), but its libraries and documentation for C# are lacking; with most of the
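Whatever library is chosen, it helps to verify independently that the camera's RTSP feed is well-formed before debugging the .NET side. A quick sanity check with the ffmpeg command-line tool, remuxing 30 seconds of the H.264 stream without re-encoding (the URL is a placeholder):

```shell
# Pull the RTSP feed over TCP (more reliable through firewalls than UDP)
# and stream-copy 30 seconds into an MP4; no transcoding takes place.
ffmpeg -rtsp_transport tcp -i rtsp://camera/stream -t 30 -c copy capture.mp4
```

If this produces a clean file, the stream itself is fine and any corruption seen in the application points at the capture library rather than the camera.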

Decode H264 RTSP with ffmpeg and a separate AVCodecContext

Submitted by 我是研究僧i on 2019-12-07 02:10:47
Question: I need some help with decoding an RTSP video stream. I get it from an AXIS IP camera and use the ffmpeg library for it. It is necessary to create the AVCodecContext separately, not from AVFormatContext->streams[...]->codec, so I create an AVCodec and AVCodecContext and try to init them. AVCodec *codec=avcodec_find_decoder(codec_id); if(!codec) { qDebug()<<"FFMPEG failed to create codec"<<codec_id; return false; //--> } AVCodecContext *context=avcodec_alloc_context3(codec); if(!context) { qDebug()<<"FFMPEG

Media Foundation H264 decoder not working properly

Submitted by a 夏天 on 2019-12-07 02:09:27
I'm creating an application for video conferencing using Media Foundation, and I'm having an issue decoding the H264 video frames I receive over the network. The design: currently my network source queues a token on every sample request unless a stored sample is available. If a sample arrives over the network and no token is available, the sample is stored in a linked list; otherwise it is queued with the MEMediaSample event. I also have the decoder set to low latency. My issue: when running the topology using my network source I immediately see the first frame rendered to the screen. I

Slow H264 1080P@60fps Decoding on Android Lollipop 5.0.2

Submitted by 我是研究僧i on 2019-12-06 22:17:37
I'm developing a Java RTP streaming app for a company project, which should be capable of joining a multicast server and receiving RTP packets. I then use an H264 depacketizer to recreate a complete frame from the NAL FUs (appending data until the End bit and Marker bit are set). I want to decode and display the raw H264 video byte stream on Android, and I'm therefore currently using the MediaCodec classes with a hardware decoder configured. The application is up and running on Jelly Bean (API 17). The various resolutions I need to decode are: 480P at 30/60 FPS 720P/I at 30/60 FPS
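When a depacketizer like the one above feeds MediaCodec, slow or corrupt decoding is often caused by a malformed Annex B stream rather than by the decoder itself. One way to rule that out is to dump the reassembled bytes to a file and check it on a desktop with ffmpeg's tools (dump.h264 is a placeholder filename):

```shell
# Play the raw Annex B H.264 elementary stream directly; if ffplay renders it
# cleanly at speed, the depacketizer output is well-formed.
ffplay -f h264 -i dump.h264

# Inspect frame types and timing to confirm SPS/PPS appear before the first IDR.
ffprobe -f h264 -show_frames dump.h264
</imports>
```

If the desktop decode is fine, attention shifts to the Android side: buffer timestamps passed to queueInputBuffer and the device's hardware decoder limits (many API 17 devices could not sustain 1080p at 60 fps).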

How to extract motion vectors from H.264 AVC CMBlockBufferRef after VTCompressionSessionEncodeFrame

Submitted by 一世执手 on 2019-12-06 13:51:53
Question: I'm trying to read and understand the CMBlockBufferRef representation of an H.264 AVC 1/30 s frame. The buffer and the encapsulating CMSampleBufferRef are created using a VTCompressionSessionRef. https://gist.github.com/petershine/de5e3d8487f4cfca0a1d The H.264 data is represented as an AVC memory buffer, a CMBlockBufferRef, from the compressed sample. Without fully decompressing again, I'm trying to extract motion vectors or predictions from this CMBlockBufferRef. I believe that for the fastest performance,
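This is not an answer for CMBlockBufferRef itself (motion vectors are entropy-coded inside the slice data, so some decoding is unavoidable), but as a point of comparison, ffmpeg's decoder can export per-macroblock motion vectors on the desktop, which is useful for inspecting what a VideoToolbox encoder actually produced (input.mp4 is a placeholder):

```shell
# +export_mvs asks the H.264 decoder to attach motion-vector side data to each
# frame; the codecview filter then draws the vectors for P and B frames.
ffmpeg -flags2 +export_mvs -i input.mp4 -vf codecview=mv=pf+bf+bb mv_overlay.mp4
```

Note this requires decoding the frames, which is exactly the cost the question hopes to avoid; extracting vectors without entropy decoding the slices is not possible in H.264.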

Using ffmpeg to combine small mp4 chunks?

Submitted by 我是研究僧i on 2019-12-06 11:23:41
I'm trying to convert batches of PNG images into a single MP4 x264 video using ffmpeg. The conversion, for reasons I won't go into, converts groups of frames into short MP4 chunks, and then I want to take those chunks and merge them into the final video at a specific fps (in this case 30 fps). My understanding of ffmpeg and the x264 options is too limited, and while I can produce the individual MP4 chunks from the source PNG frames without trouble, the final merge always ends up duplicating and/or dropping frames, especially with very short chunks (< 4 frames). The conversion from png to mp4 uses
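The duplicated/dropped frames described above usually come from re-encoding at merge time with mismatched timestamps. A sketch of the usual lossless approach, assuming every chunk is encoded with identical parameters (filenames are placeholders):

```shell
# Encode each chunk: -framerate on the INPUT sets the timestamps of the PNG
# sequence; forcing -r on the output instead is a common cause of dup/drop.
ffmpeg -framerate 30 -i frame_%04d.png -c:v libx264 -pix_fmt yuv420p chunk_000.mp4

# Build the concat demuxer's list file, then merge with stream copy so no
# frame is re-timed or re-encoded.
printf "file '%s'\n" chunk_*.mp4 > list.txt
ffmpeg -f concat -safe 0 -i list.txt -c copy merged.mp4
```

Because `-c copy` never touches the frames, the merge preserves each chunk's frame count exactly, even for chunks shorter than 4 frames.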

How are access units aligned within PES packets in Apple's HLS?

Submitted by 早过忘川 on 2019-12-06 10:53:40
Question: Does Apple specify this? How many access units should one put in a PES packet payload? Also, I'm wondering which start code prefixes (if any) are present in PES packets. I assume that the one preceding the first NAL unit within an access unit is useless and shouldn't be included. Right? I'd like to know how it's done specifically in HLS - not necessarily in any other MPEG-2 TS application. Answer 1: I'd like to know how it's done specifically in HLS - not necessarily any other MPEG-2 TS application. HLS is a
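One practical way to answer questions like this is to segment a file into HLS with ffmpeg and inspect the transport stream it produces. Note that MPEG-2 TS carries H.264 in Annex B form, so the start code before the first NAL unit of each access unit is required, not optional (input.mp4 is a placeholder):

```shell
# Segment into HLS; h264_mp4toannexb converts MP4's length-prefixed NAL units
# into the Annex B start-code form that the TS muxer requires.
ffmpeg -i input.mp4 -c copy -bsf:v h264_mp4toannexb -f hls -hls_time 6 out.m3u8

# Inspect the PES packetization of the first generated segment.
ffprobe -show_packets out0.ts
```

Comparing the packet boundaries against the PTS values shows how the muxer maps access units onto PES packets in practice.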