video-processing

Video cannot be processed error notification while uploading a video [closed]

淺唱寂寞╮ submitted on 2019-12-05 08:01:09
Question (closed 6 years ago as too localized): When I try to upload a video in various formats (including .mp4 and .3gp files) to Facebook from my Android application, I get a notification in my

Opengl es 2.0 draw bitmap overlay on video

走远了吗. submitted on 2019-12-05 06:55:41
I am trying to draw a bitmap as an overlay on every frame of a video. I found an example of how to decode and encode a video, and it is working. This example has a TextureRenderer class with a drawFrame function that I need to modify in order to add the bitmap. I am a newbie to OpenGL, but I learned that I need to create a texture from the bitmap and bind it. I tried that in the following code, but it is throwing an exception. /* * Copyright (C) 2013 The Android Open Source Project * * Licensed under the Apache License, Version 2.0 (the "License"); * you may not use this file except in compliance

Android App to Stream IP Camera using P2P mode over Mobile port?

僤鯓⒐⒋嵵緔 submitted on 2019-12-05 06:03:21
I am trying to stream video from an IP camera in an Android app. I have successfully played the video using the following approaches: RTSP with MediaPlayer and a SurfaceView; RTSP with VideoView; redirecting to VLC for streaming; and redirecting to the native video player for streaming. All these cases work, but with a lag of 7-8 seconds on average, even over the local network. I have seen apps that stream without any lag even on a remote network, using the mobile port (18600). This port is normally associated with P2P mode on a camera. (This assumption is purely based on my

How to merge an Audio and Video files in Android

跟風遠走 submitted on 2019-12-05 05:34:06
I have a small video clip and an audio file. The problem is how to write code to merge them into a single file. I have never written code for multimedia applications on Android and don't know if the merging is possible with the Android media framework. Is there any third-party library to do that? Can we write merging code in Java and call it from Android? Please guide me through this. Thanks. You can try INDE Media for Mobile; tutorials are here: https://software.intel.com/en-us/articles/intel-inde-media-pack-for-android-tutorials It has a sample demonstrating how to substitute the audio track in an mp4
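Outside of a Java library, a common way to do this kind of merge is to mux the two streams with the ffmpeg command-line tool using stream copy (no re-encoding). The sketch below only builds the command; the file names and the presence of ffmpeg on the PATH are assumptions for illustration:

```python
import subprocess

def build_mux_command(video_path, audio_path, out_path):
    """Build an ffmpeg command that muxes a video file and an audio file
    into one container without re-encoding (stream copy)."""
    return [
        "ffmpeg",
        "-i", video_path,   # first input: the video clip
        "-i", audio_path,   # second input: the audio file
        "-c", "copy",       # copy both streams, no re-encode
        "-map", "0:v:0",    # take the video stream from input 0
        "-map", "1:a:0",    # take the audio stream from input 1
        "-shortest",        # stop at the shorter of the two inputs
        out_path,
    ]

cmd = build_mux_command("clip.mp4", "voice.aac", "merged.mp4")
# subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
```

Stream copy keeps this fast and lossless, but it only works when the container supports both codecs; otherwise the audio would need a re-encode instead of `-c copy`.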

Detecting people crossing a line with OpenCV

匆匆过客 submitted on 2019-12-05 05:12:28
Question: I want to count the number of people crossing a line from either side. I have a camera placed on the ceiling, shooting the floor where the line is (so the camera sees just the tops of people's heads, which makes this more of an object-detection than a people-detection problem). Is there any sample solution for this problem, or for similar problems, that I can learn from? Edit 1: More than one person may be crossing the line at any moment. Answer 1: If nothing else but humans are subject to cross the line then you
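Whatever the detector, the counting step usually reduces to tracking each head as a centroid over time and registering a crossing whenever consecutive positions straddle the line. A minimal sketch of that step, assuming per-person centroid tracks are already available (e.g. from background subtraction and blob tracking in OpenCV); the simulated tracks are illustrative only:

```python
def count_line_crossings(tracks, line_y):
    """Count people crossing a horizontal line, in each direction.

    tracks: dict mapping a track id to a list of (x, y) centroids over time
    line_y: y coordinate of the counting line in image space
    Returns (crossed_downward, crossed_upward).
    """
    down = up = 0
    for _, centroids in tracks.items():
        for (x0, y0), (x1, y1) in zip(centroids, centroids[1:]):
            if y0 < line_y <= y1:       # moved from above the line to below
                down += 1
            elif y1 < line_y <= y0:     # moved from below the line to above
                up += 1
    return down, up

# Two simulated head tracks: one walks down across y=100, one walks up.
tracks = {
    1: [(50, 80), (52, 95), (53, 110)],
    2: [(200, 120), (198, 104), (197, 90)],
}
print(count_line_crossings(tracks, 100))  # -> (1, 1)
```

Because each track is handled independently, several people crossing simultaneously (the Edit 1 case) are counted correctly as long as the tracker keeps their identities separate.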

Local enhancing of license plate in video sequence

若如初见. submitted on 2019-12-05 03:06:09
Question: My goal is to create an enhanced image with a more readable license plate number from a given sequence of images of driving cars with indistinguishable license plates, such as the sequence below. As you can see, the plate number is, for the most part, indistinguishable. I am looking into enhancement techniques such as multi-frame super-resolution (as researched in this paper: http://users.soe.ucsc.edu/~milanfar/publications/journal/SRfinal.pdf). I have some experience
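The simplest multi-frame fusion step, once the plate regions have been registered (aligned) across frames, is temporal averaging to suppress noise. Full super-resolution as in the cited paper also upsamples and deconvolves, which this sketch omits; the synthetic data below is purely illustrative:

```python
import numpy as np

def average_aligned_frames(frames):
    """Average a stack of already-aligned grayscale frames to suppress noise.
    frames: list of 2-D uint8 arrays of identical shape."""
    stack = np.stack([f.astype(np.float64) for f in frames])
    return np.clip(stack.mean(axis=0).round(), 0, 255).astype(np.uint8)

# Synthetic example: a flat 128-gray patch corrupted by Gaussian noise.
rng = np.random.default_rng(0)
truth = np.full((8, 8), 128, dtype=np.uint8)
noisy = [np.clip(truth + rng.normal(0, 20, truth.shape), 0, 255).astype(np.uint8)
         for _ in range(16)]
enhanced = average_aligned_frames(noisy)
# noise standard deviation drops roughly by sqrt(16) = 4x
```

Averaging N frames reduces zero-mean noise by roughly a factor of sqrt(N), which is often enough to make a plate legible; the hard part, as the paper discusses, is the sub-pixel registration that has to happen first.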

How to Batch Multiple Videoframes before run Tensorflow Inference Session

ぃ、小莉子 submitted on 2019-12-05 02:15:17
Question: I made a project that basically uses Google's Object Detection API with TensorFlow. All I am doing is inference with a pre-trained model: real-time object detection where the input is the video stream of a webcam or something similar, read with OpenCV. Right now I get pretty decent performance results, but I want to further increase the FPS. What I experience is that TensorFlow uses all of my memory during inference, but GPU usage is not maxed out at all (around 40% with an NVIDIA
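One way to raise GPU utilization is to feed several frames per session run: in the versions of the detection API I have seen, the `image_tensor` input accepts a leading batch dimension, so frames can be stacked before inference. A minimal batching sketch (NumPy only, with the model call itself omitted; the frame size and batch size are illustrative):

```python
import numpy as np

def batch_frames(frame_iter, batch_size):
    """Group incoming frames into batches so that one inference call
    processes several frames at a time instead of one."""
    batch = []
    for frame in frame_iter:
        batch.append(frame)
        if len(batch) == batch_size:
            yield np.stack(batch)   # shape: (batch_size, H, W, 3)
            batch = []
    if batch:                        # flush a final partial batch
        yield np.stack(batch)

# Ten dummy 300x300 RGB frames, batched four at a time.
frames = (np.zeros((300, 300, 3), dtype=np.uint8) for _ in range(10))
shapes = [b.shape for b in batch_frames(frames, 4)]
# shapes == [(4, 300, 300, 3), (4, 300, 300, 3), (2, 300, 300, 3)]
```

The trade-off is latency: batching helps throughput for recorded video, but for a live webcam it delays results by up to `batch_size` frames.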

h264 packetization mode for FUA

一个人想着一个人 submitted on 2019-12-05 01:16:55
Question: We have run into a couple of interop issues where the video mode required by some endpoints on the market is slightly different: they only understand the H.264 FU-A packetization mode, i.e. the FU-A NAL unit type (while others do not play the video on receiving an FU-A NAL type payload). Does anyone know what this FU-A type of packetization mode is? How is it different from packetization modes 0, 1, and 2 as defined in RFC 3984? If the video encoder/decoder supports it, how can it be
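For context: FU-A (fragmentation unit A, NAL unit type 28) is not a separate packetization mode but the mechanism non-interleaved mode (packetization-mode=1) uses to split a NAL unit larger than the MTU across several RTP payloads. A sketch of that fragmentation logic, following RFC 6184 §5.8 (the successor to RFC 3984); the MTU value and NAL bytes below are illustrative:

```python
def fu_a_fragments(nal, mtu):
    """Split one H.264 NAL unit into RTP FU-A payloads (RFC 6184 5.8).
    nal: bytes of the NAL unit, header byte first. mtu: max payload size."""
    if len(nal) <= mtu:
        return [nal]                   # small NALs travel as single NAL unit packets
    indicator = (nal[0] & 0xE0) | 28   # keep F + NRI bits, set type 28 = FU-A
    nal_type = nal[0] & 0x1F           # original type goes into the FU header
    payload, chunk = nal[1:], mtu - 2  # 2 bytes reserved for indicator + FU header
    pieces = [payload[i:i + chunk] for i in range(0, len(payload), chunk)]
    packets = []
    for i, piece in enumerate(pieces):
        start = 0x80 if i == 0 else 0               # S bit on first fragment
        end = 0x40 if i == len(pieces) - 1 else 0   # E bit on last fragment
        packets.append(bytes([indicator, start | end | nal_type]) + piece)
    return packets

# A 3001-byte IDR slice NAL (header 0x65) split for a 1400-byte payload limit:
pkts = fu_a_fragments(bytes([0x65]) + bytes(3000), mtu=1400)
# 3 fragments; the first has the S bit set, the last has the E bit set
```

The receiver reverses this: it strips the two FU bytes, reassembles the fragments between the S and E bits, and restores the original NAL header from the indicator's F/NRI bits plus the FU header's type field.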

OpenCV doesn't report accurate frame rate/count

雨燕双飞 submitted on 2019-12-04 23:45:06
I have a 33-second video that I'm trying to process with OpenCV. My goal is to determine what instant in time (relative to the start of the video) each frame corresponds to. I'm doing this in order to compare frames from videos of the same scene that were recorded at different frame rates. What's working: the FPS is correctly reported as 59.75. This is consistent with what ffprobe reports, so I'm happy to believe that's correct. The problems I'm having are: CAP_PROP_POS_MSEC returns incorrect values. By the end of the video it's up to 557924 ms (over 9 min). For a 33 s video,
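When CAP_PROP_POS_MSEC is unreliable, a common workaround is to derive the timestamp from the frame index and the trusted FPS instead. This assumes a constant frame rate, so it breaks down for variable-frame-rate files; the frame numbers below are illustrative:

```python
def frame_timestamp_ms(frame_index, fps):
    """Timestamp of a frame relative to the start of the video,
    assuming a constant frame rate.

    frame_index: 0-based index of the frame (e.g. a running counter
                 incremented on each successful VideoCapture.read()).
    fps: frames per second as reported by CAP_PROP_FPS / ffprobe.
    """
    return frame_index * 1000.0 / fps

# For the 59.75 fps clip above, frame 1993 lands near the 33.36 s mark:
t = frame_timestamp_ms(1993, 59.75)
```

This also sidesteps CAP_PROP_POS_FRAMES quirks: counting frames yourself as you read them gives a consistent index even when the container's own position properties drift.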

Get Video and Audio buffer separately while recording video using front camera

|▌冷眼眸甩不掉的悲伤 submitted on 2019-12-04 23:03:12
Question: I dug a lot on SO and some nice blog posts, but it seems I have a unique requirement: reading the video and audio buffers separately, for further processing, while recording is going on. My use case: when the user starts video recording, I need to continuously process the video frames using ML-Face-Detection-Kit, and also continuously process the audio frames to make sure the user is speaking and to detect the noise level as well. For this, I think I need both video and audio in a