Question:
I'm confused by the OpenCV Mat element types. This is from the docs:
There is a limited fixed set of primitive data types the library can operate on. That is, array elements should have one of the following types:

- 8-bit unsigned integer (uchar)
- 8-bit signed integer (schar)
- 16-bit unsigned integer (ushort)
- 16-bit signed integer (short)
- 32-bit signed integer (int)
- 32-bit floating-point number (float)
- 64-bit floating-point number (double)
- ...

For these basic types, the following enumeration is applied:

enum { CV_8U=0, CV_8S=1, CV_16U=2, CV_16S=3, CV_32S=4, CV_32F=5, CV_64F=6 };
It's known that the C++ standard doesn't define the size of the basic types in bytes, so how can they make such assumptions? And what type should I expect from, let's say, CV_32S: is it int32_t or int?
Answer 1:
Developing from Miki's answer,
In OpenCV 3 the definition has moved to modules/core/include/opencv2/core/traits.hpp, where you can find:
    /** @brief A helper class for cv::DataType

    The class is specialized for each fundamental numerical data type supported by OpenCV.
    It provides DataDepth<T>::value constant.
    */
    template<typename _Tp> class DataDepth
    {
    public:
        enum
        {
            value = DataType<_Tp>::depth,
            fmt   = DataType<_Tp>::fmt
        };
    };

    template<int _depth> class TypeDepth
    {
        enum { depth = CV_USRTYPE1 };
        typedef void value_type;
    };

    template<> class TypeDepth<CV_8U>  { enum { depth = CV_8U };  typedef uchar  value_type; };
    template<> class TypeDepth<CV_8S>  { enum { depth = CV_8S };  typedef schar  value_type; };
    template<> class TypeDepth<CV_16U> { enum { depth = CV_16U }; typedef ushort value_type; };
    template<> class TypeDepth<CV_16S> { enum { depth = CV_16S }; typedef short  value_type; };
    template<> class TypeDepth<CV_32S> { enum { depth = CV_32S }; typedef int    value_type; };
    template<> class TypeDepth<CV_32F> { enum { depth = CV_32F }; typedef float  value_type; };
    template<> class TypeDepth<CV_64F> { enum { depth = CV_64F }; typedef double value_type; };
With most compilers you should be fine using the C++ exact-width data types. You won't have problems with the single-byte types (CV_8U -> uint8_t and CV_8S -> int8_t), as they are unambiguously defined in C++. The same goes for float (32 bit) and double (64 bit). However, for the other data types, to be completely sure you use the correct type (for example when using the at method), you should write something like:
    typedef TypeDepth<CV_32S>::value_type access_type;
    myMat.at<access_type>(y, x) = 0;
As a side note, I am surprised they decided to take such an ambiguous approach, instead of simply using exact data types.
Therefore, regarding your last question:
What type should I expect from, let's say, CV_32S?
I believe the most precise answer, in OpenCV 3, is:
TypeDepth<CV_32S>::value_type
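As a quick sanity check of that mapping (a minimal sketch, assuming an ordinary OpenCV build), you can also approach it from the C++ type side: a matrix declared with element type int reports the CV_32S depth.

    #include <opencv2/core.hpp>  // <opencv2/core/core.hpp> on OpenCV 2.x
    #include <iostream>

    int main()
    {
        // A matrix declared with the C++ element type int...
        cv::Mat_<int> m(3, 3);
        // ...reports CV_32S as its depth and CV_32SC1 as its full type.
        std::cout << (m.depth() == CV_32S)   << "\n";  // 1
        std::cout << (m.type()  == CV_32SC1) << "\n";  // 1
        return 0;
    }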
Answer 2:
In core.hpp you can find the following:
    /*!
      A helper class for cv::DataType

      The class is specialized for each fundamental numerical data type supported by OpenCV.
      It provides DataDepth<T>::value constant.
    */
    template<typename _Tp> class DataDepth {};

    template<> class DataDepth<bool>     { public: enum { value = CV_8U,  fmt=(int)'u' }; };
    template<> class DataDepth<uchar>    { public: enum { value = CV_8U,  fmt=(int)'u' }; };
    template<> class DataDepth<schar>    { public: enum { value = CV_8S,  fmt=(int)'c' }; };
    template<> class DataDepth<char>     { public: enum { value = CV_8S,  fmt=(int)'c' }; };
    template<> class DataDepth<ushort>   { public: enum { value = CV_16U, fmt=(int)'w' }; };
    template<> class DataDepth<short>    { public: enum { value = CV_16S, fmt=(int)'s' }; };
    template<> class DataDepth<int>      { public: enum { value = CV_32S, fmt=(int)'i' }; };
    // this is temporary solution to support 32-bit unsigned integers
    template<> class DataDepth<unsigned> { public: enum { value = CV_32S, fmt=(int)'i' }; };
    template<> class DataDepth<float>    { public: enum { value = CV_32F, fmt=(int)'f' }; };
    template<> class DataDepth<double>   { public: enum { value = CV_64F, fmt=(int)'d' }; };

    template<typename _Tp> class DataDepth<_Tp*> { public: enum { value = CV_USRTYPE1, fmt=(int)'r' }; };
You can see that CV_32S is the value for the type int, not int32_t.
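To see that mapping in action, here is a minimal sketch (assuming the traits headers quoted above) that queries DataDepth and DataType directly:

    #include <opencv2/core.hpp>  // <opencv2/core/core.hpp> on OpenCV 2.x
    #include <iostream>

    int main()
    {
        // DataDepth maps a C++ type to the depth constant: int -> CV_32S (== 4).
        std::cout << cv::DataDepth<int>::value << " " << CV_32S << "\n";   // 4 4
        // DataType additionally carries the channel count; a plain int
        // corresponds to the single-channel type CV_32SC1.
        std::cout << cv::DataType<int>::type  << " " << CV_32SC1 << "\n";  // 4 4
        return 0;
    }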
Answer 3:
While C++ doesn't define the size of an element, the question is hypothetical: for the systems OpenCV runs on, the sizes are known. Given

    cv::Mat m(32, 32, CV_32SC1, cv::Scalar(0));
    std::cout << m.depth() << std::endl;  // prints 4, i.e. CV_32S
So how can you be sure it is int?
An attempt to call
int pxVal = m.at<int>(0,0);
will (in a debug build) run the check

CV_DbgAssert( elemSize() == sizeof(int) );
where the left-hand side is derived from cv::Mat::flags -- in this example from the predefined depth of CV_32SC1 -- so the check boils down to

CV_DbgAssert( m.depth() == sizeof(int) )

or

CV_DbgAssert( 4 == sizeof(int) )
So if that check succeeds, the only thing left to worry about is endianness, and that was checked when cvconfig.h was generated (by CMake).
TL;DR, expect the types given in the header and you'll be fine.
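A minimal sketch of the same argument in code (assuming a typical platform where sizeof(int) is 4 bytes):

    #include <opencv2/core.hpp>  // <opencv2/core/core.hpp> on OpenCV 2.x
    #include <iostream>

    int main()
    {
        cv::Mat m(32, 32, CV_32SC1, cv::Scalar(0));

        // The relation the debug assertion in at<>() relies on: one CV_32SC1
        // element occupies 4 bytes, matching sizeof(int) on supported platforms.
        std::cout << m.elemSize() << " == " << sizeof(int) << "\n";

        // So this access passes the CV_DbgAssert and behaves as expected.
        m.at<int>(0, 0) = 42;
        std::cout << m.at<int>(0, 0) << "\n";
        return 0;
    }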
Answer 5:
I have found several #define macros in OpenCV's code related to CV_8UC1, CV_32SC1, etc. To make the enumeration work, OpenCV packs the depth and the channel count together into a single number (i.e., CV_8UC1, CV_16UC2, ... are all represented by their respective numbers) and splits the depth and channels apart again in the definition of CvMat (I guess Mat may have similar code in its definition). Then it uses create() to allocate space for the matrix. Since create() is inline, I can only guess that it is similar to malloc() or something.
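As a rough illustration of that packing, here is a minimal sketch built on the CV_MAKETYPE, CV_MAT_DEPTH and CV_MAT_CN macros from the public headers: the depth sits in the low bits and the channel count is encoded above it.

    #include <opencv2/core.hpp>  // <opencv2/core/core.hpp> on OpenCV 2.x
    #include <iostream>

    int main()
    {
        // CV_8UC3 is just CV_MAKETYPE(CV_8U, 3): depth in the low bits,
        // (channels - 1) shifted up by CV_CN_SHIFT.
        std::cout << (CV_MAKETYPE(CV_8U, 3) == CV_8UC3) << "\n";  // 1

        // The reverse direction: split a packed type code back apart.
        std::cout << CV_MAT_DEPTH(CV_16SC2) << "\n";  // 3  (CV_16S)
        std::cout << CV_MAT_CN(CV_16SC2)    << "\n";  // 2  channels

        // cv::Mat stores the packed value and exposes both halves.
        cv::Mat m(4, 4, CV_16SC2);
        std::cout << m.type() << " " << m.depth() << " " << m.channels() << "\n";  // 11 3 2
        return 0;
    }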
As the source code changed a lot from 2.4.9 to 3.0.0, I need to post more evidence later. Please allow me a little time to find out more and edit my answer.
Answer 6:
In short, the table you provided is correct. If you want to directly access a pixel, you cast it to the type indicated by the specifier; for example, CV_32S is a signed 32-bit integer. S always means a signed integral number (signed char, signed short, signed int), F always means a floating-point number (float, double), and U always means an unsigned integral number.
The enumeration is used only when creating or converting a Mat. It's a way of telling the Mat which type is desired; as I understand it, it is a holdover from the C interface, from before templates were used.
I use the C functionality exclusively, and in order to create an image, it would be an error to pass the following:
cvCreateImage(mySize,char, nChannels);
Instead, I pass the following:
cvCreateImage(mySize, IPL_DEPTH_8U, nChannels);
Here, IPL_DEPTH_8U is a flag used by the function. The function itself has a switch-like statement that checks the flag. The actual numeric value of the flag is usually meaningless, since it is used in conditional rather than arithmetic statements.
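For comparison, a minimal sketch of the C++ side of the same idea (the size and types here are arbitrary): the same enumeration selects the element type both when creating a cv::Mat and when converting one.

    #include <opencv2/core.hpp>  // <opencv2/core/core.hpp> on OpenCV 2.x

    int main()
    {
        // C++ counterpart of the cvCreateImage call above: the type flag
        // combines the role of IPL_DEPTH_8U with the channel count.
        cv::Mat img(cv::Size(640, 480), CV_8UC3, cv::Scalar::all(0));

        // The same enumeration drives conversions, here to a 32-bit float image.
        cv::Mat imgF;
        img.convertTo(imgF, CV_32F);

        return imgF.depth() == CV_32F ? 0 : 1;
    }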