video-processing

How to convert sRGB to NV12 format using NumPy?

久未见 submitted on 2020-01-03 20:54:15
Question: The NV12 format defines a specific color-channel ordering of the YUV color space with 4:2:0 subsampling. NV12 is mostly used in video encoding/decoding pipelines. libyuv's description of NV12: "NV12 is a biplanar format with a full sized Y plane followed by a single chroma plane with weaved U and V values. NV21 is the same but with weaved V and U values." The 12 in NV12 refers to 12 bits per pixel. NV12 has a half-width and half-height chroma channel, and is therefore 4:2:0 subsampling. In the context of…
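The excerpt cuts off before any code, so here is a minimal NumPy sketch of the conversion it describes. It assumes BT.601 full-range coefficients and an even-sized frame; the function name and coefficient choice are assumptions, not from the original post:

```python
import numpy as np

def rgb_to_nv12(rgb):
    """Sketch: convert an HxWx3 uint8 RGB image to an NV12 byte buffer.

    Assumes BT.601 full-range coefficients and even width/height.
    """
    rgb = rgb.astype(np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    u = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    v = 0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    # 4:2:0 subsampling: average each 2x2 block of U and of V.
    u2 = u.reshape(u.shape[0] // 2, 2, u.shape[1] // 2, 2).mean(axis=(1, 3))
    v2 = v.reshape(v.shape[0] // 2, 2, v.shape[1] // 2, 2).mean(axis=(1, 3))
    # NV12 layout: full-sized Y plane, then interleaved U/V pairs.
    uv = np.empty((u2.shape[0], u2.shape[1] * 2), dtype=np.float32)
    uv[:, 0::2] = u2
    uv[:, 1::2] = v2
    y8 = np.round(np.clip(y, 0, 255)).astype(np.uint8)
    uv8 = np.round(np.clip(uv, 0, 255)).astype(np.uint8)
    return np.concatenate([y8.ravel(), uv8.ravel()])
```

For an H×W frame the result has H×W luma bytes followed by H×W/2 chroma bytes, i.e. 12 bits per pixel, matching the libyuv description above.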


Why does video resolution change when streaming from Android via WebRTC

China☆狼群 submitted on 2020-01-03 07:27:19
Question: I'm trying to stream at 640x480 from Chrome on Android using WebRTC. The video starts off at that resolution, but then drops to 320x240. Here are the getUserMedia parameters that are sent: "getUserMedia": [ { "origin": "http://webrtc.example.com:3001", "pid": 30062, "rid": 15, "video": "mandatory: {minWidth:640, maxWidth:640, minHeight:480, maxHeight:480}" } My question is: why does the resolution fall? When I try it from Chrome on my Mac, that does not happen. I would like to make…

Do you know of a NTSC decoder API? [closed]

强颜欢笑 submitted on 2020-01-02 15:05:33
Question: I'm looking for an API that I can use to decode a digital sample of an analog signal encoded according to the NTSC standard (http://en.wikipedia.org/wiki/NTSC). I'm willing to consider both free and commercial options. If I have to, I'll roll the code myself, but I imagine that this code has been written tens or…

Android video remove chroma key background

99封情书 submitted on 2020-01-02 08:56:17
Question: I have checked this question; it is very similar. I want to record a video with the Android camera and then remove the chroma-key background with a library. At first I thought I should use the Android NDK, to escape the SDK memory limitation and use the whole memory; the video is short, a few seconds, so maybe it can handle it. But I would prefer an SDK implementation with android:largeHeap="true", because of mismatched .so file architectures…
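The post stops before any implementation. As a format-level illustration only, here is the chroma-key masking arithmetic in NumPy; the key color and tolerance are assumed values, and on Android the same math would run in native code or a shader rather than Python:

```python
import numpy as np

def chroma_key_mask(rgb, key=(0, 255, 0), tol=80):
    """Boolean mask of pixels within `tol` (per channel) of the key color.

    A minimal sketch of the masking step only; `key` and `tol` are
    illustrative defaults, not values from the original post.
    """
    diff = np.abs(rgb.astype(np.int16) - np.array(key, dtype=np.int16))
    return (diff <= tol).all(axis=-1)
```

Pixels where the mask is True would then be replaced or made transparent, frame by frame.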

FFmpeg concatenation, no Audio in Final Output

雨燕双飞 submitted on 2020-01-02 07:15:32
Question: I have the following command working in ffmpeg, which adds 1 second of black frames to the beginning of the video. However, I lose the audio from the original video in the output. How can I adjust the command so that the original audio stays in the final output, or better yet, so there is 1 second of "blank" audio at the beginning to match the new video? ffmpeg -i originalvideo -f lavfi -i color=c=black:s=1920x1080:r=25:sar=1/1 -filter_complex "[0:v] setpts=PTS-STARTPTS…
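The command in the excerpt is truncated, so the exact fix can't be read off; one common approach is to generate a matching second of silence with anullsrc and concatenate both audio and video streams together. A sketch of that invocation, assembled as a Python argument list (the filenames, sample rate, and channel layout are assumptions):

```python
# Prepend 1 s of black video AND 1 s of silent audio, so the concat
# filter carries the original audio track through to the output.
cmd = [
    "ffmpeg",
    "-i", "originalvideo.mp4",
    "-f", "lavfi", "-t", "1",
    "-i", "color=c=black:s=1920x1080:r=25:sar=1/1",
    "-f", "lavfi", "-t", "1",
    "-i", "anullsrc=channel_layout=stereo:sample_rate=44100",
    # Each segment contributes one video and one audio stream, in order:
    # segment 1 = black + silence, segment 2 = the original file.
    "-filter_complex",
    "[1:v][2:a][0:v][0:a]concat=n=2:v=1:a=1[v][a]",
    "-map", "[v]", "-map", "[a]",
    "output.mp4",
]
print(" ".join(cmd))
```

The key point is that concat with a=1 needs an audio stream for every segment; without the anullsrc input there is nothing to pair with the black leader, and the audio is dropped.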

Cannot play certain videos

心不动则不痛 submitted on 2020-01-02 03:50:29
Question: I'm trying to play movies on an Android device from our server. It is not a media server, just a regular Apache server. We use the same API to access the videos on the iPhone, and it works fine. On the Android device, certain videos work and others do not. They were all created the same way, except that the majority of the ones that don't work are composed of still images and audio. We have tried re-encoding them with Videora and tried hinting them with MP4Box. All of the videos play perfectly…

How to convert video to spatio-temporal volumes in python

北城余情 submitted on 2020-01-01 20:01:23
Question: I am doing my project in video analytics, and I have to densely sample a video. Sampling here means converting the video into spatio-temporal video volumes. I am using Python. How can I do that in Python? Is that option available in OpenCV or any other package? The input video sequence and desired output are shown. Answer 1: Read the video file using cap = cv2.VideoCapture(fileName), then go over each frame: while cap.isOpened(): ret, frame = cap.read(). If you want you can just insert every frame to…
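The answer's loop is cut off before the sampling step it leads up to. A sketch of dense spatio-temporal sampling in plain NumPy, operating on an already-decoded (T, H, W) grayscale frame stack; the volume sizes and stride are illustrative choices, not from the original thread:

```python
import numpy as np

def dense_volumes(frames, t=4, h=8, w=8, stride=4):
    """Densely sample t x h x w spatio-temporal volumes.

    `frames` has shape (T, H, W); returns an array of shape (N, t, h, w),
    one entry per volume on a regular grid with the given stride.
    """
    T, H, W = frames.shape
    vols = []
    for ti in range(0, T - t + 1, stride):
        for yi in range(0, H - h + 1, stride):
            for xi in range(0, W - w + 1, stride):
                vols.append(frames[ti:ti + t, yi:yi + h, xi:xi + w])
    return np.stack(vols)
```

Frames collected from the cv2.VideoCapture loop in the answer (converted to grayscale and stacked with np.stack) can be fed straight into this function.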

OpenCV 2.4 in python - Video processing

主宰稳场 submitted on 2019-12-31 09:05:23
Question: The project: add a running date/time stamp to each and every frame of a video (the output of a digital video camera; my father asked me how he can permanently add a timestamp, to millisecond resolution, to the video). A friend pointed me to OpenCV (Emgu CV, actually), and because of my preferences I tried my luck with OpenCV in Python. The documentation is lame, and I even had a hard time (it took me about 5 hours) just to install the package. Sources: OpenCV 2.4 (Willow Garage): http:/
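The excerpt ends before any code. The bookkeeping for a millisecond-resolution running stamp can be sketched in plain Python; the start-time parameter is an assumption (the post does not say where the recording time comes from), and drawing the string onto each frame would then use cv2.putText inside the capture loop:

```python
import datetime

def frame_timestamp(start, frame_idx, fps):
    """Millisecond-resolution timestamp string for frame `frame_idx`.

    `start` is the assumed recording start time; each frame advances the
    clock by 1/fps seconds.
    """
    t = start + datetime.timedelta(seconds=frame_idx / fps)
    return t.strftime("%Y-%m-%d %H:%M:%S.") + f"{t.microsecond // 1000:03d}"
```

In the processing loop, the returned string would be overlaid on the frame (e.g. with cv2.putText) before writing it back out with a VideoWriter.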