video-processing

HEVC: NAL unit trace file for each NAL unit produced by the encoder

别说谁变了你拦得住时间么 submitted on 2019-12-25 02:59:05
Question: I'm using HM 14.0 as a reference. Is there a way to get specific information about NAL units, such as (a) type, (b) num_bytes, (c) frame_no, (d) decode_time, (e) priority, (f) timestamp? The first two I can get through annexBbytecount, but what about the rest?

Answer 1: The reference codec comes with a built-in tracer which is quite powerful! Enable it like this: diff --git a/source/Lib/TLibCommon/TComRom.h b/source/Lib/TLibCommon/TComRom.h index 5a59809..1930809 100644 --- a/source/Lib/TLibCommon
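Independent of the HM tracer, the type and byte count of each NAL unit can also be read straight from the Annex B bitstream, since the HEVC NAL header carries nal_unit_type in bits 1–6 of its first byte. Below is a minimal sketch of that idea (frame_no, decode_time, priority, and timestamp need decoder state and are not recoverable this way):

```python
def parse_annexb_nal_units(data: bytes):
    """Split an Annex B HEVC bitstream into NAL units.

    Returns a list of (nal_unit_type, num_bytes) tuples. The HEVC NAL
    header stores nal_unit_type in bits 1-6 of the first header byte
    (bit 0 is the forbidden_zero_bit).
    """
    starts = []
    i = 0
    while i < len(data) - 2:
        if data[i] == 0 and data[i + 1] == 0 and data[i + 2] == 1:
            starts.append(i + 3)   # payload begins after the start code
            i += 3
        else:
            i += 1
    units = []
    for k, s in enumerate(starts):
        end = len(data) if k + 1 == len(starts) else starts[k + 1] - 3
        # Heuristic: a 4-byte start code (00 00 00 01) leaves a stray
        # zero before the next 3-byte code; trim it from this unit.
        if end > s and data[end - 1] == 0:
            end -= 1
        nal_type = (data[s] >> 1) & 0x3F
        units.append((nal_type, end - s))
    return units
```

For example, a VPS (type 32) followed by an SPS (type 33) would come back as `[(32, payload_len), (33, payload_len)]`, which covers the first two fields asked about without patching TComRom.h.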

Counting points that intersect a line with OpenCV and Python

ぐ巨炮叔叔 submitted on 2019-12-25 01:48:16
Question: I am working on vehicle counting with OpenCV and Python. I have already completed these steps: 1. Detect moving vehicles with BackgroundSubtractorMOG2. 2. Draw a rectangle on each one, then compute its centroid. 3. Draw a line (to indicate the count) and count when a centroid crosses/intersects that line. I want to count in step 3, but in my code it sometimes increments and sometimes doesn't. Here is the line code: cv2.line(frame,(0,170),(300,170),(200,200,0),2) and here the centroid: if w > 20 and h > 25: cv2.rectangle(frame,
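The intermittent counting usually happens when the centroid is tested for exact equality with the line's y coordinate: a fast vehicle can jump from one side of y = 170 to the other between frames without ever landing on it. Checking for a sign change of (cy - line_y) between consecutive frames is more robust. A minimal sketch of that check (pure logic, no cv2; per-object ID tracking, not shown, is still needed to count each vehicle once):

```python
LINE_Y = 170  # same y as cv2.line(frame, (0, 170), (300, 170), ...)

def crossed_line(prev_cy, cy, line_y=LINE_Y):
    """True when a centroid moved from one side of the line to the other.

    Comparing cy == line_y misses fast objects that skip over the line
    between frames; a sign change of (cy - line_y) does not. An object
    that lands exactly on the line can trigger on two consecutive
    frames, so each tracked ID should be counted at most once.
    """
    if prev_cy is None:
        return False
    return (prev_cy - line_y) * (cy - line_y) <= 0 and prev_cy != cy

count = 0
prev = None
for cy in [150, 160, 182, 195]:   # centroid y per frame; jumps over 170
    if crossed_line(prev, cy):
        count += 1
    prev = cy
```

With the sample trajectory above, the jump from 160 to 182 is counted exactly once even though no frame has cy == 170.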

How can I use gstreamer & smpte to concatenate 2 video files with gst-launch?

江枫思渺然 submitted on 2019-12-24 20:26:27
Question: I have 2 video files (vid1.mov and vid2.mov); both have the same frame size and frame rate. I want one final video which shows vid1.mov and then vid2.mov, one after the other. I also want a transition from one video to the other (rather than an abrupt change), and have discovered the smpte plugin for GStreamer, which does what I want. Using gst-launch on the Ubuntu Linux command line, how can I merge the 2 videos together with a transition? (Assume I want to use the
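A minimal, untested sketch of using the smpte element from gst-launch is below (element and property names as documented for GStreamer 1.x: `type` selects the wipe pattern, `duration` is in nanoseconds). Note the caveat: smpte mixes its two sink branches while both play from the start, so this wipes from vid1.mov into vid2.mov over the transition window; a true back-to-back concatenation with the transition only at the seam generally needs a composition element (e.g. gnonlin/nle) rather than plain gst-launch:

```shell
# Wipe from vid1.mov to vid2.mov over 2 seconds (duration is in ns).
# smpte needs two decoded video branches feeding the same element.
gst-launch-1.0 smpte name=t type=1 duration=2000000000 ! videoconvert ! autovideosink \
    filesrc location=vid1.mov ! decodebin ! videoconvert ! t. \
    filesrc location=vid2.mov ! decodebin ! videoconvert ! t.
```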

How to make my video in landscape mode using ffmpeg

纵饮孤独 submitted on 2019-12-24 18:15:12
Question: I have four video chats. Somehow I have managed to cut the videos into pieces, store them in an array, then stack and finally concat them into the video in the YouTube link below. The size I used in the portrait view is 640*480, but I need to show them in landscape. Suggest me any ideas. Landscape view: https://youtu.be/u8tmL2-CdK0 Portrait view: https://youtu.be/lO-Q3I9X8OA These are my inputs: Input #0, matroska,webm, from 'PA473fbf06ed1f952f95c88b9cf22ed0ba_pre.mkv': Metadata: encoder :
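One common way to get a landscape canvas from four 640*480 feeds is FFmpeg's hstack filter, which places them side by side in a single 2560*480 frame (a 2x2 xstack grid is the other usual layout). A hedged sketch; the input names a.mkv ... d.mkv are placeholders for the four chat recordings, not the original files:

```shell
# Side-by-side: four 640x480 inputs -> one 2560x480 landscape frame.
ffmpeg -i a.mkv -i b.mkv -i c.mkv -i d.mkv \
    -filter_complex "[0:v][1:v][2:v][3:v]hstack=inputs=4[v]" \
    -map "[v]" -c:v libx264 out_landscape.mp4
```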

RTSP frame grabbing creates smeared, pixelated, and corrupted images

六眼飞鱼酱① submitted on 2019-12-24 14:39:12
Question: I am trying to capture a single frame per second from an RTSP stream with the following command: ffmpeg -i rtsp://XXX -q:v 1 -vf fps=fps=1 -strftime 1 ZZZZ\%H_%M_%S.jpg But some of the frames are smeared, pixelated, and corrupted; this effect increases drastically if the RTSP resolution is increased (if the resolution is decreased, for example to 720p, most of the frames are OK). I have to say that playing the same RTSP stream in VLC or ffplay is flawless. How can I fix this to grab better quality? Thanks in
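Smearing and block artifacts in ffmpeg RTSP grabs usually come from RTP packet loss over UDP, which gets worse at higher bitrates/resolutions; players such as VLC often fall back to TCP, which would explain why they look clean. A hedged variant of the command from the question that forces TCP interleaving:

```shell
# Force RTSP over TCP so packet loss doesn't corrupt reference frames.
ffmpeg -rtsp_transport tcp -i rtsp://XXX -q:v 1 -vf fps=fps=1 -strftime 1 ZZZZ\%H_%M_%S.jpg
```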

File Format for Saving Video with alpha channel in iOS

孤者浪人 submitted on 2019-12-24 12:42:09
Question: I am using AVFoundation to create a video and have added an effect to clip the video so there is a clear background. What file format should I save this as to preserve the transparency in my iOS app?

Answer 1: AVAnimator is a library with which you can display video with an alpha channel on iOS; it is, however, not free to use for commercial products. I don't think it's natively possible.

Source: https://stackoverflow.com/questions/28258575/file-format-for-saving-video-with-alpha-channel-in-ios

Filter on playing video on GLSurfaceView at runtime

你。 submitted on 2019-12-24 07:18:55
Question: I used https://github.com/krazykira/VidEffects to apply a filter to a playing video, but I want to change the filter on a button click at runtime without any glitch in the playing video. According to "Applying Effects on Video being Played" I should use mVideoView.init(mMediaPlayer, new filter) whenever I want to change the filter, but there is no effect on the playing video. Can someone help me? I am not experienced in using GLSurfaceView. Here is my Java class: public class MainActivity extends Activity {

Tensorflow Object Detection API with GPU on Windows and real-time detection

风格不统一 submitted on 2019-12-24 07:01:48
Question: I am testing the new TensorFlow Object Detection API in Python, and I succeeded in installing it on Windows using Docker. However, my trained model (Faster R-CNN ResNet-101 COCO) takes up to 15 seconds to make a prediction (with very good accuracy, though), probably because I only use TensorFlow on CPU. My three questions are: Considering the latency, where is the problem? I heard Faster R-CNN was a good model for low-latency visual detection; is it because of the CPU-only execution? With such
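Before swapping models or hardware, it helps to confirm where those 15 seconds actually go, since the first call on a freshly loaded graph often includes one-off initialization cost. A minimal timing harness is sketched below; `fake_predict` is a stand-in for a `session.run` call on the detection graph, which is not reproduced here:

```python
import time

def time_inference(predict, frames, warmup=1):
    """Average per-frame latency of `predict`, excluding warm-up runs.

    The first call often pays one-off graph/initialization cost, so the
    first `warmup` frames are run but excluded from the average.
    """
    for f in frames[:warmup]:
        predict(f)
    start = time.perf_counter()
    for f in frames[warmup:]:
        predict(f)
    elapsed = time.perf_counter() - start
    return elapsed / max(1, len(frames) - warmup)

# Stand-in for detection_graph session.run on one frame.
def fake_predict(frame):
    return sum(frame)

avg = time_inference(fake_predict, [[1, 2, 3]] * 5)
```

If the steady-state average stays in the seconds range on CPU, that is consistent with Faster R-CNN's cost; the usual real-time fallback is a lighter detector such as SSD MobileNet.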

ffmpeg sepia effect on video

落爺英雄遲暮 submitted on 2019-12-24 03:53:18
Question: How can I apply a simple sepia effect to a video using FFmpeg? I am looking for a single-line FFmpeg command which I will use on Android. I have learnt colorchannelmixer=.393:.769:.189:0:.349:.686:.168:0:.272:.534:.131 from the official FFmpeg docs, but I am unable to apply it properly. Thank you.

Answer 1: You just need to chain the filters appropriately. But in your approach, using the eq filter may make it difficult to implement the sepia matrix with FFmpeg, as it has an associated matrix. Instead I
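For reference, the colorchannelmixer string quoted in the question (the standard sepia matrix) can be applied directly as a one-liner; input.mp4 and output.mp4 are placeholder names:

```shell
# Apply the sepia matrix via colorchannelmixer; copy the audio as-is.
ffmpeg -i input.mp4 -vf "colorchannelmixer=.393:.769:.189:0:.349:.686:.168:0:.272:.534:.131" -c:a copy output.mp4
```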