OpenCV

Perspective Transform in OpenCV Python

♀尐吖头ヾ submitted on 2021-01-05 07:05:46
Question: I am trying to perform a perspective transform of a Sudoku puzzle. The expected transformation is coming out right only on the left side. Please help me by pointing out my mistake. Input image: Expected output image: The output I am getting: The corners of the Sudoku puzzle found using cv2.approxPolyDP() are as follows: top_left = [71,62] top_right = [59, 418] bottom_right = [443, 442] bottom_left = [438, 29] The shape of the output image is [300,300]. The corresponding output coordinates are :

Understanding openCV aruco marker detection/pose estimation in detail: subpixel accuracy

浪子不回头ぞ submitted on 2021-01-05 06:51:30
Question: I am currently studying OpenCV's 'aruco' module, focusing in particular on pose estimation for ArUco markers and AprilTags. Looking into the subpixel accuracy, I have encountered a strange behaviour, which is demonstrated by the code below: if I provide a 'perfect' calibration (e.g. cx/cy equal to the image center and distortion set to zero) and a 'perfect' marker with known edge length, cv.detectMarkers will only yield the correct value if the rotation is at 0/90/180 or 270 degrees. The

Pixelate ROI bounding box and overlay it on original image using OpenCV

蓝咒 submitted on 2021-01-05 06:00:13
Question: Let's make it straightforward. I have a private project to block or pixelate an image using a bounding box in OpenCV, something like censoring an image, inspired by this paper: https://www.researchgate.net/publication/325746502_Seamless_Nudity_Censorship_an_Image-to-Image_Translation_Approach_based_on_Adversarial_Training I have found a way to classify the censor region using Keras, but I still don't know how to use the bounding box to pixelate the classified area and overlay it on the original

Read blurry barcode in python with pyzbar

我只是一个虾纸丫 submitted on 2021-01-05 05:51:42
Question: I have been trying to read some barcodes from images using Python and pyzbar. Unfortunately, the images are taken from several feet away under several constraints and I cannot move or zoom the camera any closer. Is it possible to read barcodes this blurry using any existing Python libraries? So far I've tried some preprocessing including thresholding, sharpening, applying a vertical closing filter, and Wiener filtering, but none seem to help. I am probably asking for a miracle, but if you

Notes on OpenCV's Mat::clone() under multithreading, prompted by "a Qt program crashes after running for a while"

怎甘沉沦 submitted on 2021-01-04 08:00:20
Problem description
Step 1: image data is fetched from the camera and stored in a cv::Mat object (a global variable used to exchange data); this is invoked automatically by the camera's callback. Step 2: a timer copies that global Mat, converts it to a QImage, and displays it in the Qt UI. After running for a while the program dies with "The process was ended forcefully.", after a varying amount of time.
Analysis and fix
The Qt Creator kit used the MSVC compiler but only a GDB debugger was available (CDB is apparently required), so stepping through was impossible and the cause had to be narrowed down by elimination. The shorter the timer interval, the sooner the crash, which suggested a cross-thread access conflict. Printing thread IDs with std::this_thread::get_id() showed that the thread writing the Mat from the camera callback differs from the timer thread reading it, meaning the camera SDK fetches frames on a thread of its own. But a read and a write "shouldn't" conflict, so look at OpenCV's Mat::clone(): inline Mat Mat::clone() const { Mat m; copyTo(m); return m; } So it simply calls cv::copyTo, and note the const: the object must not be modified while it is being copied. If a copy is in progress and the camera thread has just received frame data and is about to write

OpenCV + opencv_contrib + Tesseract: setting up a Qt development environment

那年仲夏 submitted on 2021-01-04 07:35:41
1. Packages to prepare. OpenCV source: official site, GitHub. opencv_contrib source: GitHub. Tesseract source: GitHub. cmake.exe: official site. Qt: official site. Note: the opencv and opencv_contrib versions must match (for example, both 3.4.0). For building Tesseract from source, see "Win10 使用MinGW-w64编译Tesseract4.0" (building Tesseract 4.0 with MinGW-w64 on Win10).
2. Add to the PATH environment variable: C:\Qt\Qt5.9.0\5.9\mingw53_32\bin and C:\Qt\Qt5.9.0\Tools\mingw530_32\bin. This makes gcc, g++, qmake and mingw32-make usable directly from cmd, and it also lets CMake find the Qt configuration in the next step.
3. Generate the build files with CMake. If CMake reports an error about "CMAKE_SH", simply delete the CMAKE_SH entry. Configuration: CMAKE_BUILD_TYPE: Debug or Release. CMAKE_INSTALL_PREFIX: where to install. ENABLE_CXX11: enable C++11 support. WITH_QT. WITH_OPENGL. OPENCV_EXTRA_MODULES_PATH: if using the opencv_contrib modules, the path to the extracted source, e.g. F:\opencv
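The settings above can also be passed on the command line instead of through cmake-gui; a sketch, with example paths that you would adjust to your own layout:

```shell
# Illustrative out-of-source build; every path here is an example.
cd F:/opencv/build
cmake -G "MinGW Makefiles" -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX=F:/opencv/install -DENABLE_CXX11=ON -DWITH_QT=ON -DWITH_OPENGL=ON -DOPENCV_EXTRA_MODULES_PATH=F:/opencv_contrib/modules F:/opencv/sources
mingw32-make -j4
mingw32-make install
```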

opencv_createsamples: command not found

ⅰ亾dé卋堺 submitted on 2021-01-04 06:41:26
Question: Every time I run the command opencv_createsamples , I am hit with this error. I know that I need to run opencv from the src but I am not sure how to fully do so. Can somebody give me a detailed and simplified explanation of how I would go about this? Source: https://stackoverflow.com/questions/61977772/opencv-createsamples-command-not-found
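One plausible resolution, sketched under the assumption that an OpenCV 3.x source tree is available (the createsamples and traincascade apps were removed in OpenCV 4.x): build the apps from source and put the build's bin directory on PATH. All paths are illustrative.

```shell
# Assumes an OpenCV 3.x checkout at ~/opencv; createsamples is not in 4.x.
cd ~/opencv && mkdir -p build && cd build
cmake -DBUILD_opencv_apps=ON .. && make -j4 opencv_createsamples
export PATH="$PWD/bin:$PATH"   # the apps are emitted under build/bin
opencv_createsamples -h        # should now resolve
```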

Reading .exr files in OpenCV

醉酒当歌 submitted on 2021-01-04 04:18:30
Question: I have generated some depth maps using Blender and have saved the z-buffer values (32-bit) in OpenEXR format. Is there any way to access the values of a .exr file (pixel-by-pixel depth info) using OpenCV 2.4.13 and Python 2.7? There is no example anywhere to be found. All I can see in the documentation is that this file format is supported, but trying to read such a file results in an error. new=cv2.imread("D:\\Test1\\0001.exr") cv2.imshow('exr',new) print new[0,0] Error: print new[0,0] TypeError: 'NoneType'
