video-processing

Exception: “ERROR com.xuggle.ferry.JNILibraryLoader - Could not load library” occurs

别来无恙 submitted on 2019-12-06 03:37:28
I read about the xuggle-xuggler library on Stack Overflow and added the following jars, in order, to the Java classpath (Order and Export tab) as mentioned in the post: slf4j-api-1.6.4.jar, commons-cli-1.1.jar, logback-core-1.0.0.jar, logback-classic-1.0.0.jar, xuggle-utils-1.20.688.jar, xuggle-xuggler-5.2.jar. But unfortunately I am still getting the exception below: 14:14:00.941 [main] ERROR com.xuggle.ferry.JNILibraryLoader - Could not load library: xuggle; version: 5; Visit http://www.xuggle.com/xuggler/faq/ to find common solutions to this problem Exception in thread "main" java.lang
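In practice this error usually means the native (JNI) part of Xuggler failed to load, not that a jar is missing from the classpath. A minimal sketch to isolate the problem (the class name XugglerCheck is hypothetical): force the native library to load on its own and read the full stack trace.

    // XugglerCheck.java: forces the native xuggle library to load;
    // run it with the exact same classpath to see the underlying UnsatisfiedLinkError
    import com.xuggle.xuggler.IContainer;

    public class XugglerCheck {
        public static void main(String[] args) {
            IContainer container = IContainer.make(); // fails here if the native lib can't load
            System.out.println("Xuggler native library loaded OK");
            container.delete();
        }
    }

If this fails too, the usual suspects are a 32/64-bit mismatch between the JVM and the Xuggler build, or a broken native install.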

Color tracking using Emgu CV

烂漫一生 submitted on 2019-12-06 02:54:50
I am trying to make a colored-object tracker that uses a binary image and a blob detector to follow the target, sort of like this: https://www.youtube.com/watch?v=9qky6g8NRmI . However, I cannot figure out how the ThresholdBinary() method works, or whether it is even the right one. Here is a relevant bit of the code: cam._SmoothGaussian(3); blobDetector.Update(cam); Image<Bgr,byte> binaryImage = cam.ThresholdBinary(new Bgr(145,0,145),new Bgr(0,0,0)); Image<Gray,byte> binaryImageGray = binaryImage.Convert<Gray,byte>(); blobTracker.Process(cam, binaryImageGray); foreach (MCvBlob blob in blobTracker) {
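For isolating a colour, ThresholdBinary (which thresholds each BGR channel independently) is usually not the right tool; InRange, which keeps pixels between a lower and an upper colour bound and returns a grayscale mask, is the more common choice. A sketch under that assumption (the bound values are hypothetical and would need tuning for the target colour):

    // build a binary mask of pixels near the target colour, then track blobs on it
    Image<Gray, byte> mask = cam.InRange(new Bgr(120, 0, 120),    // lower BGR bound
                                         new Bgr(170, 60, 170));  // upper BGR bound
    mask = mask.SmoothGaussian(3);        // suppress single-pixel noise in the mask
    blobTracker.Process(cam, mask);       // same call as in the question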

Deinterlacing in ffmpeg

半世苍凉 submitted on 2019-12-06 02:38:22
I've followed the tutorial here to load video files into a C program, but the frames aren't deinterlaced. From what I've seen, the ffmpeg executable supports a -deinterlace switch. How do I do this in code? What library/functions should I read about? You have to manually call avpicture_deinterlace to deinterlace each decoded frame. The function definition can be found here. It will basically look like this (using the variables from the first page of the tutorial): avcodec_decode_video(pCodecCtx, pFrame, &frameFinished, packet.data, packet.size); if(frameFinished) { avpicture_deinterlace(
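For reference, avpicture_deinterlace takes a destination picture, a source picture, the pixel format, and the frame dimensions; deinterlacing in place (dst == src) works for the common YUV formats. A sketch of the completed call, using the tutorial's variable names (note that recent FFmpeg versions removed avpicture_deinterlace in favour of the yadif filter):

    avcodec_decode_video(pCodecCtx, pFrame, &frameFinished,
                         packet.data, packet.size);
    if (frameFinished) {
        /* deinterlace the decoded frame in place */
        avpicture_deinterlace((AVPicture *)pFrame, (AVPicture *)pFrame,
                              pCodecCtx->pix_fmt,
                              pCodecCtx->width, pCodecCtx->height);
        /* then hand pFrame to sws_scale / display as in the tutorial */
    }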

How to capture and process live activity from another application in Python?

笑着哭i submitted on 2019-12-05 21:41:22
I'm a computer science student, and as a personal project I'm interested in building software that can watch Super Nintendo games being run in a local emulator and produce useful information about them: things like current health, current score, etc. (anything legible on the screen). The emulator runs in windowed form (I'm using SNES9x), so I wouldn't need to capture every pixel on the screen, and I'd only have to capture about 30 fps. I've looked into some libraries like FFMPEG and OpenCV, but so far what I've seen leads me to believe I have to have pre-recorded renderings of the
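No pre-recorded video is needed: you can grab the emulator's window region straight off the screen in a loop. A sketch of that idea in Python, assuming the mss screen-capture library and OpenCV are installed, and that the SNES9x window sits at a known (hypothetical) position:

    import numpy as np
    import cv2
    from mss import mss

    # hypothetical window coordinates; locate the emulator window however you like
    region = {"left": 100, "top": 100, "width": 512, "height": 448}

    with mss() as sct:
        while True:
            frame = np.array(sct.grab(region))            # BGRA screenshot of the region
            gray = cv2.cvtColor(frame, cv2.COLOR_BGRA2GRAY)
            # ...template-match the health bar / OCR the score here...
            cv2.imshow("capture", gray)
            if cv2.waitKey(33) & 0xFF == ord("q"):        # ~30 fps polling loop
                break
    cv2.destroyAllWindows()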

FFmpeg concatenation, no Audio in Final Output

房东的猫 submitted on 2019-12-05 19:15:53
I have the following command working in ffmpeg, which adds 1 second of black frames to the beginning of the video. However, I lose the audio from the original video in the output. How can I adjust the command so the original audio stays in the final output, or, better yet, so there is 1 second of "blank" audio at the beginning to match the new video? ffmpeg -i originalvideo -f lavfi -i color=c=black:s=1920x1080:r=25:sar=1/1 -filter_complex "[0:v] setpts=PTS-STARTPTS [main]; [1:v] trim=end=1,setpts=PTS-STARTPTS [pre]; [pre][main] concat=n=2:v=1:a=0 [out]" -map "[out]"
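One common fix is to synthesise one second of silence with the anullsrc source and concatenate audio alongside video (concat with a=1). A sketch, assuming the original audio is stereo at 44.1 kHz (adjust channel_layout and sample_rate to match your input):

    ffmpeg -i originalvideo \
           -f lavfi -i color=c=black:s=1920x1080:r=25:sar=1/1 \
           -f lavfi -i anullsrc=channel_layout=stereo:sample_rate=44100 \
           -filter_complex "[1:v] trim=end=1,setpts=PTS-STARTPTS [prev]; \
                            [2:a] atrim=end=1,asetpts=PTS-STARTPTS [prea]; \
                            [0:v] setpts=PTS-STARTPTS [mainv]; \
                            [prev][prea][mainv][0:a] concat=n=2:v=1:a=1 [v][a]" \
           -map "[v]" -map "[a]" output.mp4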

ffmpeg: make a copy from a decoded frame (AVFrame)

最后都变了- submitted on 2019-12-05 18:59:35
I want to make a backup frame (AVFrame) from a specific frame (let's say pic). So I have written AVFrame* bkf = avcodec_alloc_frame(); memcpy(bkf,pic,sizeof(AVFrame)); bkf->extended_data = pic->extended_data; bkf->linesize[0] = pic->linesize[0]; memcpy(bkf->data, pic->data, sizeof(pic->data)); bkf->reordered_opaque = pic->reordered_opaque; bkf->sample_rate = pic->sample_rate; bkf->channel_layout = pic->channel_layout; bkf->pkt_pts = pic->pkt_pts; bkf->pkt_pos = pic->pkt_pos; bkf->width = pic->width; bkf->format = pic->format; to copy pic to bkf. But after running, I saw a lot of distortion.
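The distortion is expected: memcpy of the struct only copies the pointers in data and extended_data, so bkf still aliases buffers that the decoder reuses for subsequent frames. A deep copy needs freshly allocated buffers. A sketch using the same-era API as the question (newer FFmpeg can do all of this with a single av_frame_clone(pic)):

    /* allocate fresh picture buffers for bkf, then copy the pixel data into them */
    AVFrame *bkf = avcodec_alloc_frame();
    avpicture_alloc((AVPicture *)bkf, (enum PixelFormat)pic->format,
                    pic->width, pic->height);
    av_picture_copy((AVPicture *)bkf, (const AVPicture *)pic,
                    (enum PixelFormat)pic->format, pic->width, pic->height);
    bkf->width   = pic->width;
    bkf->height  = pic->height;
    bkf->format  = pic->format;
    bkf->pkt_pts = pic->pkt_pts;   /* plus whatever other metadata you need */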

How to get video from camera Intent and save it to a directory?

百般思念 submitted on 2019-12-05 18:15:30
Is it possible to have code similar to the following that does the same for video? if (resultCode == Activity.RESULT_CANCELED) { // camera mode was canceled. } else if (resultCode == Activity.RESULT_OK) { // Took a picture, use the downsized camera image provided by default Bitmap cameraPic = (Bitmap) data.getExtras().get("data"); if (cameraPic != null) { try { savePic(cameraPic); } catch (Exception e) { Log.e(DEBUG_TAG, "saveAvatar() with camera image failed.", e); } } What I am trying to do is to be able to take a video using the Camera Intent and save that video or a copy of that video to
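For video the pattern is similar, but MediaStore.ACTION_VIDEO_CAPTURE returns a content Uri rather than a Bitmap, and EXTRA_OUTPUT lets you choose where the file lands. A sketch (REQUEST_VIDEO and the file name are hypothetical; on Android N and later, wrap the Uri with FileProvider instead of Uri.fromFile):

    File videoFile = new File(getExternalFilesDir(Environment.DIRECTORY_MOVIES), "capture.mp4");
    Intent intent = new Intent(MediaStore.ACTION_VIDEO_CAPTURE);
    intent.putExtra(MediaStore.EXTRA_OUTPUT, Uri.fromFile(videoFile));
    startActivityForResult(intent, REQUEST_VIDEO);

    // later, in onActivityResult(int requestCode, int resultCode, Intent data):
    if (requestCode == REQUEST_VIDEO && resultCode == Activity.RESULT_OK) {
        Uri videoUri = (data != null) ? data.getData() : null; // often null when EXTRA_OUTPUT is set
        // the recording is already at videoFile; fall back to copying from videoUri otherwise
    }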

Pause & resume video capture using AVCaptureMovieFileOutput and AVCaptureVideoDataOutput in iOS

爷,独闯天下 submitted on 2019-12-05 16:53:06
Question: I have to implement functionality to repeatedly pause and resume video capture in a single session, but have each new segment (the segments captured after each pause) added to the same video file, with AVFoundation. Currently, every time I press "stop" and then "record" again, it just saves a new video file to my iPhone's Documents directory and starts capturing to a new file. I need to be able to press the "record/stop" button repeatedly, only capture video & audio when record is active... then when
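The usual approach is to switch from AVCaptureMovieFileOutput to AVCaptureVideoDataOutput feeding an AVAssetWriter: drop sample buffers while paused, and shift the timestamps of everything after a pause by the accumulated gap. A sketch of that timestamp-offset logic in Swift (paused, resuming, timeOffset, lastSampleTime, and assetWriterInput are assumed properties you maintain yourself):

    func captureOutput(_ output: AVCaptureOutput,
                       didOutput sampleBuffer: CMSampleBuffer,
                       from connection: AVCaptureConnection) {
        let pts = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        if paused {
            lastSampleTime = pts                     // remember where we stopped
            return                                   // drop frames while paused
        }
        if resuming {                                // first frame after a pause:
            timeOffset = CMTimeAdd(timeOffset, CMTimeSubtract(pts, lastSampleTime))
            resuming = false
        }
        // rewrite the buffer's timing so the writer sees one continuous timeline
        var timing = CMSampleTimingInfo(duration: CMSampleBufferGetDuration(sampleBuffer),
                                        presentationTimeStamp: CMTimeSubtract(pts, timeOffset),
                                        decodeTimeStamp: .invalid)
        var shifted: CMSampleBuffer?
        CMSampleBufferCreateCopyWithNewTiming(allocator: kCFAllocatorDefault,
                                              sampleBuffer: sampleBuffer,
                                              sampleTimingEntryCount: 1,
                                              sampleTimingArray: &timing,
                                              sampleBufferOut: &shifted)
        if let shifted = shifted, assetWriterInput.isReadyForMoreMediaData {
            _ = assetWriterInput.append(shifted)
        }
    }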

Problem with VideoCapture in OpenCV 2.3

半腔热情 submitted on 2019-12-05 16:52:10
I have a problem using the VideoCapture class to open an MPEG video file. The code compiles properly; however, at run time it cannot open the file and gives me the following warning message: warning: Error opening file (../../modules/highgui/src/cap_ffmpeg_impl.hpp:477) I have this problem only when I build my code in debug mode. In release mode the code works correctly. The code also works correctly in the C style using CvCapture and cvCaptureFromAVI (in both release and debug mode); however, I'd like to develop my code in a more C++ style. (I am using OpenCV 2.3 in Visual
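In a Visual Studio setup, this symptom (works in release, fails only in debug) very often means the debug configuration links the release OpenCV libraries, whose mismatched runtimes break the ffmpeg reader. A sketch of the usual fix, assuming OpenCV 2.3's lib naming (also make sure the opencv_ffmpeg DLL is on the PATH):

    // link the "d"-suffixed libs in debug builds, the plain ones in release
    #ifdef _DEBUG
    #pragma comment(lib, "opencv_core230d.lib")
    #pragma comment(lib, "opencv_highgui230d.lib")
    #else
    #pragma comment(lib, "opencv_core230.lib")
    #pragma comment(lib, "opencv_highgui230.lib")
    #endif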

iOS Determine Number of Frames in Video

落花浮王杯 submitted on 2019-12-05 15:33:36
If I have an MPMoviePlayerController in Swift: let mp = MPMoviePlayerController(contentURL: url) Is there a way I can get the number of frames within the video located at url? If not, is there some other way to determine the frame count? Rhythmic Fistman: I don't think MPMoviePlayerController can help you. Use an AVAssetReader and count the number of CMSampleBuffers it returns to you. You can configure it to not even decode the frames, effectively parsing the file, so it should be fast and memory efficient. Something like var asset = AVURLAsset(URL: url, options: nil) var
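Filling out that truncated snippet in current Swift syntax (a sketch; the original answer used Swift 1-era spellings), the passthrough trick is to pass nil outputSettings so the reader never decodes:

    import AVFoundation

    func frameCount(of url: URL) throws -> Int {
        let asset = AVURLAsset(url: url)
        guard let track = asset.tracks(withMediaType: .video).first else { return 0 }
        let reader = try AVAssetReader(asset: asset)
        let output = AVAssetReaderTrackOutput(track: track, outputSettings: nil) // nil = no decoding
        reader.add(output)
        reader.startReading()
        var count = 0
        while output.copyNextSampleBuffer() != nil { count += 1 }  // one sample buffer per frame
        return count
    }

For a quick estimate instead of an exact count, the track's nominalFrameRate multiplied by the asset's duration is often enough.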