libav

Video decoder on CUDA with ffmpeg

有些话、适合烂在心里 submitted on 2019-12-04 19:52:08
I'm starting to implement a custom video decoder that uses the CUDA hardware decoder to generate YUV frames that are then re-encoded. How can I fill the "CUVIDPICPARAMS" struct? Is it possible? My algorithm is as follows. To get video stream packets I use the ffmpeg dev libraries avcodec, avformat... My steps: 1) Open the input file: avformat_open_input(&ff_formatContext,in_filename,nullptr,nullptr); 2) Get the video stream properties: avformat_find_stream_info(ff_formatContext,nullptr); 3) Get the video stream: ff_video_stream=ff_formatContext->streams[i]; 4) Get the CUDA device and initialize it: cuDeviceGet(&cu_device,0); CUcontext cu_vid_ctx; 5
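A minimal sketch of steps 1-3 above (open the input, probe stream properties, find the video stream), assuming a single video stream; everything except the libav calls the question already names is a placeholder:

    #include <libavformat/avformat.h>

    /* Steps 1-3: open the container, read stream info, pick the video stream. */
    static int open_video_stream(const char *in_filename, AVFormatContext **fmt_ctx, int *video_index)
    {
        *fmt_ctx = NULL;
        if (avformat_open_input(fmt_ctx, in_filename, NULL, NULL) < 0)   /* 1) open input file */
            return -1;
        if (avformat_find_stream_info(*fmt_ctx, NULL) < 0)               /* 2) get stream properties */
            return -1;
        /* 3) locate the video stream instead of looping over streams[i] by hand */
        *video_index = av_find_best_stream(*fmt_ctx, AVMEDIA_TYPE_VIDEO, -1, -1, NULL, 0);
        return *video_index >= 0 ? 0 : -1;
    }

Step 4 onward (cuDeviceGet, the CUDA context, and eventually CUVIDPICPARAMS) belongs to the CUDA/NVDEC side and is not filled in by these libavformat calls.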

Where can I find modern tutorials for libav, ffmpeg, etc? [closed]

谁都会走 submitted on 2019-12-04 16:07:36
Question (closed as off-topic; it is not currently accepting answers): I want to make a quick program in C that will open a video, save each frame as a ppm, and dump motion vectors. All the tutorials I can find are from almost ten years ago and call deprecated or non-existent functions. Are there any good online resources, websites, videos, or textbooks that cover a modern approach to doing these types of things?

keyframe is not a keyframe? AV_PKT_FLAG_KEY does not decode to AV_PICTURE_TYPE_I

谁都会走 submitted on 2019-12-04 14:14:23
Question: After decoding a packet with AV_PKT_FLAG_KEY set in its flags, I was expecting to get I-frames, but instead I got P-frames. After a call to: avcodec_decode_video2(codecCtx, frame, &frameFinished, &packet); // mpeg2 video I print out the following as a sanity check: printf("packet flags: %d picture type: %c\n", packet.flags, av_get_picture_type_char(frame->pict_type)); This returns the output: packet flags: 1 picture type: P when I was expecting: packet flags: 1 picture type: I where '1' == AV_PKT_FLAG_KEY.
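One detail that matters here: the decoder may delay and reorder its output, so the frame produced by a given call is not necessarily the picture decoded from the packet just passed in. A sketch using the send/receive API that inspects each output frame on its own terms (dec_ctx, pkt, and frame are assumed to be allocated and the decoder opened elsewhere):

    #include <stdio.h>
    #include <libavcodec/avcodec.h>
    #include <libavutil/avutil.h>

    /* Feed one packet and drain whatever frames are ready; output frames may
     * lag behind input packets, so a key packet does not map 1:1 onto the
     * next frame returned. */
    static void report_picture_types(AVCodecContext *dec_ctx, const AVPacket *pkt, AVFrame *frame)
    {
        if (avcodec_send_packet(dec_ctx, pkt) < 0)
            return;
        while (avcodec_receive_frame(dec_ctx, frame) >= 0) {
            printf("picture type: %c key_frame: %d\n",
                   av_get_picture_type_char(frame->pict_type), frame->key_frame);
            av_frame_unref(frame);
        }
    }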

Decode MP3, then increase the audio volume, and then encode the new audio

眉间皱痕 submitted on 2019-12-04 11:11:48
Question: I want to first decode an MP3 audio file, then increase the volume of the audio, and then encode it again into a new MP3 file. I want to use libavformat or libavcodec for this. Can you help me with how I can do this? Any example? Answer 1: You can use the "-filter" parameter with the "volume" option to set a multiplier for the audio. More info: http://ffmpeg.org/ffmpeg-filters.html#volume Since you are dealing only with MP3 files (which have only one audio track), you can use the "-af" parameter,
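For reference, the command line the answer describes would look roughly like the following; the file names are placeholders and the multiplier is arbitrary (2.0 doubles the level):

    ffmpeg -i input.mp3 -af "volume=2.0" output.mp3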

Libav build for Android [closed]

眉间皱痕 submitted on 2019-12-04 03:12:33
(Closed as unclear; it cannot be reasonably answered in its current form.) Has anyone succeeded in compiling Libav for Android? I'm currently looking for documentation. Thanks! - FFmpeg compiled with the Android NDK (go to this link): Android's built-in codec support is quite limited, so we need FFmpeg. Android provides the NDK, and since FFmpeg is written in C, building it with the NDK is convenient.

Fedora 21 JavaFX not creating MediaPlayer

时间秒杀一切 submitted on 2019-12-04 03:01:38
I recently upgraded to Fedora 21. I really like it; however, the JavaFX MediaPlayer doesn't work. As per the JavaFX System Requirements site, for a Linux distro to create a MediaPlayer I need libavcodec53 and libavformat53. I couldn't find either of these packages in the Fedora repositories (or anything about them with a Google search for Fedora 21; I also checked Fedora 20). However, I managed to get them installed from ATrpms onto my system successfully, and still no luck. I also installed ffmpeg, ffmpeg-devel, and ffmpeg-libs, and also transcode, and it still throws this exception.

Set RTSP/UDP buffer size in FFmpeg/LibAV

半腔热情 submitted on 2019-12-03 20:25:11
Question: Note: I'm aware that ffmpeg and libav are different libraries; this is a problem common to both. Disclaimer: this duplicates an SO question that is marked as answered but didn't actually give a proper solution. An insufficient UDP buffer size causes broken streams for several high-resolution video streams. In LibAV/FFmpeg it's possible to set the UDP buffer size for UDP URLs (udp://...) by appending options (buffer_size) to them. However, for RTSP URLs this is not supported. These are the only solutions I've
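A minimal sketch of the udp:// form the question refers to, with buffer_size (in bytes) appended to the URL handed to avformat_open_input; the address and size are placeholder values:

    #include <libavformat/avformat.h>

    static int open_udp_with_buffer(AVFormatContext **fmt_ctx)
    {
        /* buffer_size is given in bytes; 2 MiB is an arbitrary example value. */
        const char *url = "udp://239.0.0.1:1234?buffer_size=2097152";
        *fmt_ctx = NULL;
        return avformat_open_input(fmt_ctx, url, NULL, NULL);
    }

As the question notes, an equivalent per-URL option for rtsp:// inputs is the missing piece.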

What is the difference between AV_SAMPLE_FMT_S16P and AV_SAMPLE_FMT_S16?

家住魔仙堡 submitted on 2019-12-03 19:05:26
Question: What happens when you do a conversion from AV_SAMPLE_FMT_S16P to AV_SAMPLE_FMT_S16? How is the AVFrame structure going to contain the planar and non-planar data? Answer 1: AV_SAMPLE_FMT_S16P is planar signed 16-bit audio, i.e. 2 bytes for each sample, which is the same as AV_SAMPLE_FMT_S16. The only difference is that in AV_SAMPLE_FMT_S16 the samples of each channel are interleaved, i.e. if you have two-channel audio then the sample buffer will look like c1 c2 c1 c2 c1 c2 c1 c2..., where c1 is a sample for channel 1 and c2 a sample for channel 2.
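A small sketch of how the two layouts are typically read back from a decoded stereo AVFrame; it illustrates the general planar-vs-packed convention rather than any particular converter, and uses the older frame->channels field for brevity:

    #include <stdint.h>
    #include <libavutil/frame.h>

    /* Return sample i of channel ch from a decoded 16-bit stereo frame. */
    static int16_t get_sample(const AVFrame *frame, int ch, int i)
    {
        if (frame->format == AV_SAMPLE_FMT_S16P) {
            /* Planar: one buffer per channel, each channel's samples contiguous. */
            const int16_t *plane = (const int16_t *)frame->extended_data[ch];
            return plane[i];
        } else { /* AV_SAMPLE_FMT_S16 */
            /* Packed: a single buffer with channels interleaved: c1 c2 c1 c2 ... */
            const int16_t *packed = (const int16_t *)frame->extended_data[0];
            return packed[i * frame->channels + ch];
        }
    }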

Where can I find modern tutorials for libav, ffmpeg, etc? [closed]

大城市里の小女人 submitted on 2019-12-03 10:12:34
I want to make a quick program in C that will open a video, save each frame as a ppm, and dump motion vectors. All the tutorials I can find are from almost ten years ago and call deprecated or non-existent functions. Are there any good online resources, websites, videos, or textbooks that cover a modern approach to doing these types of things? Randall Cook: I have been working with ffmpeg and libav for several years and have also found no decent recent API-level tutorials. Sometimes I just have to dive into the source to figure out what is going on and how to use it. Also, reading the source
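For orientation, a minimal sketch of the current decode loop that replaced the deprecated avcodec_decode_video2(); it assumes the format and codec contexts are already opened, and leaves out the PPM writing and motion-vector export:

    #include <libavcodec/avcodec.h>
    #include <libavformat/avformat.h>

    /* Read packets from one video stream and decode them with the
     * send/receive API. */
    static int decode_all(AVFormatContext *fmt_ctx, AVCodecContext *dec_ctx, int video_index)
    {
        AVPacket *pkt = av_packet_alloc();
        AVFrame *frame = av_frame_alloc();

        while (av_read_frame(fmt_ctx, pkt) >= 0) {
            if (pkt->stream_index == video_index && avcodec_send_packet(dec_ctx, pkt) >= 0) {
                while (avcodec_receive_frame(dec_ctx, frame) >= 0) {
                    /* frame holds one decoded picture: write it out as PPM,
                     * inspect side data, etc., here. */
                    av_frame_unref(frame);
                }
            }
            av_packet_unref(pkt);
        }
        av_frame_free(&frame);
        av_packet_free(&pkt);
        return 0;
    }

When the decoder is opened with motion-vector export enabled (the "export_mvs" flag), the vectors appear as AV_FRAME_DATA_MOTION_VECTORS side data on each decoded frame.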

What's wrong with my use of timestamps/timebases for frame seeking/reading using libav (ffmpeg)?

纵然是瞬间 submitted on 2019-12-03 06:23:08
So I want to grab a frame from a video at a specific time using libav, for use as a thumbnail. What I'm using is the following code. It compiles and works fine (as far as retrieving a picture at all goes), yet I'm having a hard time getting it to retrieve the right picture. I simply can't get my head around the far-from-clear logic behind libav's apparent use of multiple time bases per video, specifically figuring out which functions expect or return which kind of time base. The docs were of basically no help whatsoever, unfortunately. SO to the rescue? #define ABORT(x) do {fprintf(stderr, x);
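For reference, a sketch of the usual time-base bookkeeping (not the asker's code): the timestamp handed to av_seek_frame() is expressed in the chosen stream's time_base, so a target given in seconds has to be rescaled first; decoded frames then report their position in that same stream time_base:

    #include <libavformat/avformat.h>
    #include <libavutil/mathematics.h>

    /* Seek near `seconds` in the given stream; the target timestamp must be
     * expressed in that stream's time_base. */
    static int seek_to_seconds(AVFormatContext *fmt_ctx, int stream_index, double seconds)
    {
        AVRational stream_tb = fmt_ctx->streams[stream_index]->time_base;
        /* seconds -> AV_TIME_BASE units -> stream time_base units */
        int64_t target = av_rescale_q((int64_t)(seconds * AV_TIME_BASE),
                                      AV_TIME_BASE_Q, stream_tb);
        return av_seek_frame(fmt_ctx, stream_index, target, AVSEEK_FLAG_BACKWARD);
    }

After the seek, one decodes forward and compares each frame's best_effort_timestamp (also in the stream time_base) against the target to pick the closest picture.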