Using native functions in Android with OpenCV


I want to use OpenCV with Android via native functions. However, I am a little confused about how to pass bitmaps as parameters and how to return an edited bitmap (or Mat).

2 Answers
  • 2020-12-25 09:47

    You should start by reading the default OpenCV for Android samples (the native ones).

    Of course, you cannot use cv::Mat as a parameter, because it is a C++ class, not a Java one. However, if I'm not mistaken, you can call C++ class methods from Java source (that is also part of JNI).
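
    A common pattern with the OpenCV Java bindings is to pass the Mat's native address (the long returned by Mat.getNativeObjAddr()) into the native function and cast it back to a cv::Mat there. Here is a minimal sketch, assuming those bindings are available; the method name processMat and the class path are made up for illustration:

    #include <jni.h>
    #include <opencv2/core/core.hpp>

    // Java side (assumed declaration and call, shown only for context):
    //   public native void processMat(long matAddr);
    //   processMat(myMat.getNativeObjAddr());
    extern "C"
    JNIEXPORT void JNICALL Java_com_example_MyActivity_processMat(JNIEnv* env, jobject thiz, jlong matAddr)
    {
        // Reinterpret the long passed from Java as a pointer to the underlying C++ Mat
        cv::Mat& img = *(cv::Mat*)matAddr;

        // Edit the image in place; the Java-side Mat wraps the same object and sees the result
        cv::bitwise_not(img, img);
    }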

    In your situation you have to pass a pointer to the image data (uchar* or int* in C++, which corresponds to byte[] or int[] in Java). For example, you can get the pixels from an Android Bitmap with its getPixels method, and in C++ you can use the Mat constructor that takes a pointer to user-allocated data:

    // constructor for matrix headers pointing to user-allocated data
    Mat(int _rows, int _cols, int _type, void* _data, size_t _step=AUTO_STEP);
    Mat(Size _size, int _type, void* _data, size_t _step=AUTO_STEP);
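
    For instance, here is a minimal sketch of a native function that receives the int[] filled by Bitmap.getPixels on the Java side and wraps it in a Mat with that constructor; the function name invertPixels and the class path are made up for illustration:

    #include <jni.h>
    #include <opencv2/core/core.hpp>

    extern "C"
    JNIEXPORT void JNICALL Java_com_example_MyActivity_invertPixels(JNIEnv* env, jobject thiz, jint width, jint height, jintArray pixels)
    {
        jint* _pixels = env->GetIntArrayElements(pixels, 0);

        // Wrap the Java int[] (one ARGB int per pixel) as a 4-channel Mat header;
        // no pixel data is copied, the Mat points straight at the Java array
        cv::Mat img(height, width, CV_8UC4, (unsigned char*)_pixels);

        // Edit in place; the result ends up back in the Java array and can then
        // be written to a Bitmap with setPixels
        cv::bitwise_not(img, img);

        // Mode 0 copies the buffer back (if JNI gave us a copy) and releases it
        env->ReleaseIntArrayElements(pixels, _pixels, 0);
    }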
    

    Hope it helps.

  • 2020-12-25 09:51

    This is the OpenCV tutorial code for Android. I remember it took me a while to understand the JNI naming convention: the native function Java_org_opencv_samples_tutorial3_Sample3View_FindFeatures implements the FindFeatures method of the Sample3View class in the org.opencv.samples.tutorial3 package. Look at the JNI code first:

    #include <jni.h>
    #include <opencv2/core/core.hpp>
    #include <opencv2/imgproc/imgproc.hpp>
    #include <opencv2/features2d/features2d.hpp>
    #include <vector>
    
    using namespace std;
    using namespace cv;
    
    extern "C" {
    JNIEXPORT void JNICALL Java_org_opencv_samples_tutorial3_Sample3View_FindFeatures(JNIEnv* env, jobject thiz, jint width, jint height, jbyteArray yuv, jintArray bgra)
    {
        jbyte* _yuv  = env->GetByteArrayElements(yuv, 0);
        jint*  _bgra = env->GetIntArrayElements(bgra, 0);
    
        Mat myuv(height + height/2, width, CV_8UC1, (unsigned char *)_yuv);
        Mat mbgra(height, width, CV_8UC4, (unsigned char *)_bgra);
        Mat mgray(height, width, CV_8UC1, (unsigned char *)_yuv);
    
        // Pay attention to the BGRA byte order:
        // ARGB stored in a Java int array becomes BGRA at the native level
        cvtColor(myuv, mbgra, CV_YUV420sp2BGR, 4);

        vector<KeyPoint> v;

        FastFeatureDetector detector(50);
        detector.detect(mgray, v);
        for( size_t i = 0; i < v.size(); i++ )
            circle(mbgra, Point(v[i].pt.x, v[i].pt.y), 10, Scalar(0,0,255,255));

        env->ReleaseIntArrayElements(bgra, _bgra, 0);
        env->ReleaseByteArrayElements(yuv, _yuv, 0);
    }
    }
    

    and then the Java code:

    package org.opencv.samples.tutorial3;
    
    import android.content.Context;
    import android.graphics.Bitmap;
    
    class Sample3View extends SampleViewBase {
    
        public Sample3View(Context context) {
            super(context);
        }
    
        @Override
        protected Bitmap processFrame(byte[] data) {
            int frameSize = getFrameWidth() * getFrameHeight();
            int[] rgba = new int[frameSize];
    
            FindFeatures(getFrameWidth(), getFrameHeight(), data, rgba);
    
            Bitmap bmp = Bitmap.createBitmap(getFrameWidth(), getFrameHeight(), Bitmap.Config.ARGB_8888);
            bmp.setPixels(rgba, 0/* offset */, getFrameWidth() /* stride */, 0, 0, getFrameWidth(), getFrameHeight());
            return bmp;
        }
    
        public native void FindFeatures(int width, int height, byte yuv[], int[] rgba);
    
        static {
            System.loadLibrary("native_sample");
        }
    }
    