Recording video on Android using JavaCV (Updated 2014 02 17)

难免孤独 2021-02-04 19:27

I'm trying to record a video on Android using the JavaCV lib. I need to record the video at 640x360.

I have installed everything as described in the README.txt file and I…

3 Answers
  •  眼角桃花
    2021-02-04 19:43

    @Fabio Seeing that your code is from this open-source Android Touch-To-Record library, which I have also used, here is my modified version of the onPreviewFrame method inside the CameraPreview class. It transposes and resizes each captured frame, because the captured video played sideways (the app was locked to portrait) and had a greenish output.

    I defined "yuvIplImage" as follows in my setCameraParams() method.

    IplImage yuvIplImage = IplImage.create(mPreviewSize.height, mPreviewSize.width, opencv_core.IPL_DEPTH_8U, 2);
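    For context on the buffer this image has to hold: Android's Camera preview callback delivers NV21 frames, which consist of a full-resolution Y plane followed by an interleaved V/U plane at half resolution, for a total of width × height × 3/2 bytes. A small sketch (the helper name is mine, not from the library):

    ```java
    // Size in bytes of one NV21 preview frame as delivered to
    // Camera.PreviewCallback.onPreviewFrame(byte[] data, Camera camera):
    // Y plane = width * height bytes, interleaved VU plane = width * height / 2 bytes.
    static int nv21BufferSize(int width, int height) {
        return width * height * 3 / 2;
    }
    ```

    For a 1280x720 preview (the answerer's case) this is 1,382,400 bytes; for the 640x360 target it is 345,600 bytes.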
    

    Also initialize your videoRecorder object as follows, passing the width as the height and vice versa.

    // Call initVideoRecorder() like this to initialize the videoRecorder object (an FFmpegFrameRecorder).
    initVideoRecorder(strVideoPath, mPreview.getPreviewSize().height, mPreview.getPreviewSize().width, recorderParameters);
    
    //method implementation
    public void initVideoRecorder(String videoPath, int width, int height, RecorderParameters recorderParameters)
    {
        Log.e(TAG, "initVideoRecorder");
    
        videoRecorder = new FFmpegFrameRecorder(videoPath, width, height, 1);
        videoRecorder.setFormat(recorderParameters.getVideoOutputFormat());
        videoRecorder.setSampleRate(recorderParameters.getAudioSamplingRate());
        videoRecorder.setFrameRate(recorderParameters.getVideoFrameRate());
        videoRecorder.setVideoCodec(recorderParameters.getVideoCodec());
        videoRecorder.setVideoQuality(recorderParameters.getVideoQuality());
        videoRecorder.setAudioQuality(recorderParameters.getVideoQuality()); // note: the original library code reuses the video quality setting for audio
        videoRecorder.setAudioCodec(recorderParameters.getAudioCodec());
        videoRecorder.setVideoBitrate(1000000);
        videoRecorder.setAudioBitrate(64000);
    }
    

    This is my onPreviewFrame() method:

    @Override
    public void onPreviewFrame(byte[] data, Camera camera)
    {
    
        long frameTimeStamp = 0L;
    
        if(FragmentCamera.mAudioTimestamp == 0L && FragmentCamera.firstTime > 0L)
        {
            frameTimeStamp = 1000L * (System.currentTimeMillis() - FragmentCamera.firstTime);
        }
        else if(FragmentCamera.mLastAudioTimestamp == FragmentCamera.mAudioTimestamp)
        {
            frameTimeStamp = FragmentCamera.mAudioTimestamp + FragmentCamera.frameTime;
        }
        else
        {
            long l2 = (System.nanoTime() - FragmentCamera.mAudioTimeRecorded) / 1000L;
            frameTimeStamp = l2 + FragmentCamera.mAudioTimestamp;
            FragmentCamera.mLastAudioTimestamp = FragmentCamera.mAudioTimestamp;
        }
    
        synchronized(FragmentCamera.mVideoRecordLock)
        {
            if(FragmentCamera.recording && FragmentCamera.rec && lastSavedframe != null && lastSavedframe.getFrameBytesData() != null && yuvIplImage != null)
            {
                FragmentCamera.mVideoTimestamp += FragmentCamera.frameTime;
    
                if(lastSavedframe.getTimeStamp() > FragmentCamera.mVideoTimestamp)
                {
                    FragmentCamera.mVideoTimestamp = lastSavedframe.getTimeStamp();
                }
    
                try
                {
                    yuvIplImage.getByteBuffer().put(lastSavedframe.getFrameBytesData());
    
                    IplImage bgrImage = IplImage.create(mPreviewSize.width, mPreviewSize.height, opencv_core.IPL_DEPTH_8U, 4);// In my case, mPreviewSize.width = 1280 and mPreviewSize.height = 720
                    IplImage transposed = IplImage.create(mPreviewSize.height, mPreviewSize.width, yuvIplImage.depth(), 4);
                    IplImage squared = IplImage.create(mPreviewSize.height, mPreviewSize.height, yuvIplImage.depth(), 4);
    
                    int[] _temp = new int[mPreviewSize.width * mPreviewSize.height];
    
                    Util.YUV_NV21_TO_BGR(_temp, data, mPreviewSize.width,  mPreviewSize.height);
    
                    bgrImage.getIntBuffer().put(_temp);
    
                    opencv_core.cvTranspose(bgrImage, transposed);
                    opencv_core.cvFlip(transposed, transposed, 1);
    
                    opencv_core.cvSetImageROI(transposed, opencv_core.cvRect(0, 0, mPreviewSize.height, mPreviewSize.height));
                    opencv_core.cvCopy(transposed, squared, null);
                    opencv_core.cvResetImageROI(transposed);
    
                    videoRecorder.setTimestamp(lastSavedframe.getTimeStamp());
                    videoRecorder.record(squared);
                }
                catch(com.googlecode.javacv.FrameRecorder.Exception e)
                {
                    e.printStackTrace();
                }
            }
    
            lastSavedframe = new SavedFrames(data, frameTimeStamp);
        }
    }
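    The three timestamp branches at the top of onPreviewFrame() can be distilled into a pure helper to make the logic easier to follow. This is my own sketch of the same arithmetic, not part of the library; all timestamps are in microseconds, and the FragmentCamera fields become parameters:

    ```java
    // Sketch: video frame timestamp selection, mirroring the three branches above.
    // audioTimestamp / lastAudioTimestamp: last timestamps from the audio thread (us)
    // firstTimeMillis: wall-clock time when recording started (ms)
    // frameTime: nominal duration of one video frame (us)
    // audioTimeRecordedNanos: System.nanoTime() when audioTimestamp was taken
    static long frameTimestampMicros(long audioTimestamp, long lastAudioTimestamp,
                                     long firstTimeMillis, long frameTime,
                                     long audioTimeRecordedNanos,
                                     long nowMillis, long nowNanos) {
        if (audioTimestamp == 0L && firstTimeMillis > 0L) {
            // No audio sample yet: derive the timestamp from elapsed wall-clock time.
            return 1000L * (nowMillis - firstTimeMillis);
        } else if (lastAudioTimestamp == audioTimestamp) {
            // Audio clock has not advanced: extrapolate by one frame interval.
            return audioTimestamp + frameTime;
        } else {
            // Audio advanced: offset from the moment the audio timestamp was recorded.
            return audioTimestamp + (nowNanos - audioTimeRecordedNanos) / 1000L;
        }
    }
    ```

    Keeping the video timestamps anchored to the audio clock this way is what prevents audio/video drift when the camera delivers frames at an uneven rate.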
    

    This code uses a method, "YUV_NV21_TO_BGR", which I found in this link

    Basically, this method resolves what I call "the Green Devil problem on Android", just like yours. I had the same issue and wasted almost 3-4 days. Before adding the "YUV_NV21_TO_BGR" method, when I just transposed the YuvIplImage, or applied a combination of transpose and flip (with or without resizing), the resulting video had a greenish output. This "YUV_NV21_TO_BGR" method saved the day. Thanks to @David Han from the Google Groups thread above.
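The actual Util.YUV_NV21_TO_BGR implementation comes from the linked thread, but a conversion of this kind typically follows Android's classic decodeYUV420SP fixed-point arithmetic. This is my approximation of what it does, not the library's exact code; in particular, the real method may pack the channels in a different order for the 4-channel IplImage:

```java
// Hedged sketch of an NV21 -> packed 32-bit color conversion, in the style of
// Android's well-known decodeYUV420SP routine (fixed-point BT.601 math).
static void yuvNv21ToBgr(int[] out, byte[] yuv, int width, int height) {
    final int frameSize = width * height;
    for (int j = 0; j < height; j++) {
        for (int i = 0; i < width; i++) {
            int y = (0xFF & yuv[j * width + i]) - 16;
            if (y < 0) y = 0;
            // NV21 stores interleaved V,U pairs at half resolution after the Y plane.
            int uvp = frameSize + (j >> 1) * width + (i & ~1);
            int v = (0xFF & yuv[uvp]) - 128;
            int u = (0xFF & yuv[uvp + 1]) - 128;
            int y1192 = 1192 * y;
            int r = y1192 + 1634 * v;
            int g = y1192 - 833 * v - 400 * u;
            int b = y1192 + 2066 * u;
            // Clamp to the 18-bit fixed-point range before shifting down.
            if (r < 0) r = 0; else if (r > 262143) r = 262143;
            if (g < 0) g = 0; else if (g > 262143) g = 262143;
            if (b < 0) b = 0; else if (b > 262143) b = 262143;
            // Pack into an opaque 32-bit pixel; channel order is an assumption here.
            out[j * width + i] = 0xFF000000
                    | ((r << 6) & 0x00FF0000)
                    | ((g >> 2) & 0x0000FF00)
                    | ((b >> 10) & 0x000000FF);
        }
    }
}
```

The greenish output happens when raw YUV bytes are written into an image that downstream code treats as RGB/BGR; doing an explicit colorspace conversion like the above, before the transpose and flip, is what fixes it.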
