OpenCV

GPU with OpenCL is slower than CPU. Why?

Submitted by こ雲淡風輕ζ on 2021-01-29 09:08:18
Question: Environment: Intel i7-9750H, Intel UHD Graphics 630, Nvidia GTX 1050 (laptop), Visual Studio 2019 / C++, OpenCV 4.4, OpenCL 3.0 (Intel) / 1.2 (Nvidia). I'm trying to use OpenCL to speed up my code, but the result shows that the CPU is faster than the GPU. How can I speed up my code?

    void GetHoughLines(cv::Mat dst)
    {
        cv::ocl::setUseOpenCL(true);
        int img_w = dst.size().width;  // 5000
        int img_h = dst.size().height; // 4000
        cv::UMat tmp_dst = dst.getUMat(cv::ACCESS_READ);
        cv::UMat tmp_mat = cv::UMat(dst.size(), CV
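A common reason for results like this is that the measurement includes the one-off OpenCL kernel compilation and the host-to-device copy. Below is a minimal Python sketch, not the asker's Hough pipeline: a Gaussian blur stands in for the workload and the image is synthetic. It shows how to warm up the OpenCL path and then compare UMat against a plain NumPy array.

    import time
    import cv2
    import numpy as np

    cv2.ocl.setUseOpenCL(True)

    img = (np.random.rand(4000, 5000) * 255).astype(np.uint8)

    # Warm-up: the first OpenCL call pays for kernel compilation and the
    # host-to-device transfer, so it is excluded from the measurement.
    u = cv2.UMat(img)
    cv2.GaussianBlur(u, (7, 7), 1.5)

    t0 = time.perf_counter()
    for _ in range(10):
        blurred_gpu = cv2.GaussianBlur(u, (7, 7), 1.5)
    blurred_gpu.get()  # download forces the OpenCL queue to finish before stopping the clock
    print("UMat / OpenCL:", time.perf_counter() - t0)

    t0 = time.perf_counter()
    for _ in range(10):
        blurred_cpu = cv2.GaussianBlur(img, (7, 7), 1.5)
    print("Mat / CPU:", time.perf_counter() - t0)

If the UMat path is still slower after warming up, the operation in question may simply not have an efficient OpenCL implementation for that image size.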

Can't convert image to grayscale when using OpenCV

Submitted by 强颜欢笑 on 2021-01-29 09:04:47
Question: I have a transparent logo that I want to convert to grayscale using OpenCV. I am using the following code:

    def to_grayscale(logo):
        gray = cv2.cvtColor(logo, cv2.COLOR_RGB2GRAY)
        blur = cv2.GaussianBlur(gray, (5, 5), 0)
        canny = cv2.Canny(blur, 50, 150)  # sick
        return canny

This is the image variable:

    brand_logo = Image.open(current_dir + '/logos/' + logo_image, 'r').convert('RGBA')
    brand_logo = to_grayscale(brand_logo)

And this is the error:

    TypeError: Expected Ptr<cv::UMat> for argument 'src'
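That TypeError typically means a PIL Image object, not a NumPy array, was passed to cv2.cvtColor. A minimal sketch of one possible fix, assuming an RGBA logo and a hypothetical placeholder path 'logo.png':

    import cv2
    import numpy as np
    from PIL import Image

    def to_grayscale(logo):
        logo = np.array(logo)                      # PIL Image -> (h, w, 4) uint8 array
        gray = cv2.cvtColor(logo, cv2.COLOR_RGBA2GRAY)
        blur = cv2.GaussianBlur(gray, (5, 5), 0)
        return cv2.Canny(blur, 50, 150)

    brand_logo = Image.open('logo.png', 'r').convert('RGBA')   # 'logo.png' is a placeholder
    edges = to_grayscale(brand_logo)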

Open high-resolution images with OpenCV

Submitted by 蓝咒 on 2021-01-29 09:01:07
Question: I can't open a 24 MP picture in Python with OpenCV. Apparently it only shows the upper-left part, not the full image. The kernel also stops after running the code. Here's my code:

    import cv2
    import numpy as np

    PICTURE_PATH_NAME = "IMG.JPG"
    img = cv2.imread(PICTURE_PATH_NAME)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    cv2.imshow("Gray Image", gray)
    cv2.waitKey(0)

Answer 1: See the documentation for imshow as to how to get it to scale your image to fit the window, at https://docs.opencv.org/4.1
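As a concrete illustration of that answer, here is a minimal sketch (assuming the file loads correctly): creating the window with the WINDOW_NORMAL flag before calling imshow makes it resizable, so the full-resolution image is scaled to fit instead of being cropped to the screen.

    import cv2

    img = cv2.imread("IMG.JPG")
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

    # A resizable window scales the full 24 MP frame instead of showing only a corner
    cv2.namedWindow("Gray Image", cv2.WINDOW_NORMAL)
    cv2.resizeWindow("Gray Image", 1280, 960)
    cv2.imshow("Gray Image", gray)
    cv2.waitKey(0)
    cv2.destroyAllWindows()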

Cvlib not showing boxes, labels and confidence

Submitted by 风流意气都作罢 on 2021-01-29 08:51:34
Question: I am trying to replicate a simple object-detection example that I found on a website.

    import cv2
    import matplotlib.pyplot as plt
    import cvlib as cv
    from cvlib.object_detection import draw_bbox

    im = cv2.imread('downloads.jpeg')
    bbox, label, conf = cv.detect_common_objects(im)
    output_image = draw_bbox(im, bbox, label, conf)
    plt.imshow(output_image)
    plt.show()

All required libraries are installed and there are no errors when running the code. However, it does not show the output image with the boxes, labels
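One common reason the displayed result looks wrong is that OpenCV returns BGR images while matplotlib expects RGB; if nothing is drawn at all, it can also simply mean detect_common_objects found no objects. A minimal sketch covering both checks, assuming the same 'downloads.jpeg' input:

    import cv2
    import matplotlib.pyplot as plt
    import cvlib as cv
    from cvlib.object_detection import draw_bbox

    im = cv2.imread('downloads.jpeg')
    bbox, label, conf = cv.detect_common_objects(im)
    print(label, conf)                 # empty lists mean there are no detections to draw

    output_image = draw_bbox(im, bbox, label, conf)
    plt.imshow(cv2.cvtColor(output_image, cv2.COLOR_BGR2RGB))  # BGR -> RGB for matplotlib
    plt.axis('off')
    plt.show()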

Detect each card by its printed name and crop it using OpenCV

Submitted by 孤街醉人 on 2021-01-29 08:41:21
Question: I have an image containing an ID card, a bank card, and a signature, and I want to extract id_card.jpg, bank_card.jpg, and signature.jpg from it. The problem is that the ID card and the bank card have the same width and height, so I don't know how to tell them apart. Their colors are different, so one option might be to separate them by color, but the better idea would be to read the name printed on each card and then crop each card by name. I'm very new to this domain and I'm working on an urgent project, so I would be very grateful if someone could help me. The image looks like
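Just as a starting point, here is a minimal sketch of one generic approach (not from the question, and assuming the cards sit on a fairly plain background): threshold the image, find the external contours, and crop each large rectangle. Deciding which crop is the ID card and which is the bank card would still need a second step, for example running OCR (e.g. pytesseract) on each crop to look for the card's printed name.

    import cv2

    img = cv2.imread('document.jpg')          # placeholder file name
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    _, thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

    contours, _ = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for i, c in enumerate(contours):
        x, y, w, h = cv2.boundingRect(c)
        if w * h > 10000:                     # skip small noise blobs
            cv2.imwrite('crop_%d.jpg' % i, img[y:y + h, x:x + w])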

Why can't cv2.line draw on a 1-channel numpy array slice in place?

Submitted by 只愿长相守 on 2021-01-29 08:37:36
Question: Why can't cv2.line draw on a single-channel numpy array slice in place?

    print('cv2.__version__', cv2.__version__)

    # V1
    print('-' * 60)
    a = np.zeros((20, 20, 4), np.uint8)
    cv2.line(a[:, :, 1], (4, 4), (10, 10), color=255, thickness=1)
    print('a[:,:,1].shape', a[:, :, 1].shape)
    print('np.min(a), np.max(a)', np.min(a), np.max(a))

    # V2
    print('-' * 60)
    b = np.zeros((20, 20), np.uint8)
    cv2.line(b, (4, 4), (10, 10), color=255, thickness=1)
    print('b.shape', b.shape)
    print('np.min(b), np.max(b)', np.min(b), np.max(b))
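A likely explanation: a[:, :, 1] is a non-contiguous view of the 4-channel array, so the Python binding hands OpenCV a temporary copy and the drawn pixels never reach a itself. A minimal sketch of one workaround, drawing on a contiguous copy of the channel and writing it back:

    import cv2
    import numpy as np

    a = np.zeros((20, 20, 4), np.uint8)

    # Draw on a contiguous copy of channel 1, then assign it back into the slice
    channel = a[:, :, 1].copy()
    cv2.line(channel, (4, 4), (10, 10), color=255, thickness=1)
    a[:, :, 1] = channel

    print(np.min(a), np.max(a))   # 0 255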

Converting CV_32FC1 to CV_16UC1

Submitted by 对着背影说爱祢 on 2021-01-29 08:32:36
Question: I am trying to convert a float image that I get from a simulated depth camera to CV_16UC1. The camera publishes the depth in CV_32FC1 format. I have tried many ways, but the result was not reasonable.

    cv::Mat depth_cv(512, 512, CV_32FC1, depth);
    cv::Mat depth_converted;
    depth_cv.convertTo(depth_converted, CV_16UC1);

The result is a black image. If I use a scale factor, the image becomes white. I also tried to do it this way:

    float depthValueF [512*512];
    for (int i=0;i<resolution[1];i++){ // go
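The black/white results usually come down to the value range: depth in metres (roughly 0-5) truncates to almost nothing when cast to 16-bit integers, and an overly large scale factor saturates everything. A minimal Python sketch of the idea (the question is C++, but the arithmetic is the same; the metre-to-millimetre factor of 1000 is an assumption about the simulated camera's units):

    import cv2
    import numpy as np

    # Stand-in for the simulated camera frame: float32 depth in metres
    depth_f32 = np.random.uniform(0.3, 5.0, (512, 512)).astype(np.float32)

    # Casting directly truncates 0.3..5.0 to the integers 0..4, which displays
    # as black; scaling to millimetres first actually uses the 16-bit range.
    depth_u16 = np.clip(depth_f32 * 1000.0, 0, 65535).astype(np.uint16)

    # For display only: stretch whatever range is present to the full 16 bits
    view = cv2.normalize(depth_u16, None, 0, 65535, cv2.NORM_MINMAX)
    cv2.imwrite('depth_16u.png', view)

In C++ the equivalent of the scaling step is passing the factor to convertTo, e.g. depth_cv.convertTo(depth_converted, CV_16UC1, 1000.0);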

Change cuda::GpuMat values through custom kernel

Submitted by 為{幸葍}努か on 2021-01-29 08:12:17
Question: I am using a kernel to "loop" over a live camera stream and highlight specific color regions. These cannot always be reconstructed with a few cv::thresholds, which is why I am using a custom kernel. The current kernel is as follows:

    __global__ void customkernel(unsigned char* input, unsigned char* output,
                                 int width, int height,
                                 int colorWidthStep, int outputWidthStep)
    {
        const int xIndex = blockIdx.x * blockDim.x + threadIdx.x;
        const int yIndex = blockIdx.y * blockDim.y + threadIdx.y;
        if ((xIndex <
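The original setup drives this kernel from C++ through cv::cuda::GpuMat. Purely as a point of comparison, here is a minimal sketch of an analogous per-pixel color-highlight kernel driven from Python via CuPy's RawKernel; it assumes a packed BGR uint8 frame, and the names and the "mostly red" test are illustrative, not taken from the question. The step/index arithmetic (row stride in bytes plus 3 bytes per pixel) is the part such kernels most often get wrong.

    import cupy as cp
    import numpy as np

    highlight = cp.RawKernel(r'''
    extern "C" __global__
    void highlight(const unsigned char* input, unsigned char* output,
                   int width, int height, int step)
    {
        const int x = blockIdx.x * blockDim.x + threadIdx.x;
        const int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x >= width || y >= height) return;

        const int idx = y * step + 3 * x;            // BGR, 3 bytes per pixel
        unsigned char b = input[idx], g = input[idx + 1], r = input[idx + 2];

        // keep "mostly red" pixels, zero out everything else
        bool red = (r > 150) && (g < 100) && (b < 100);
        output[idx]     = red ? b : 0;
        output[idx + 1] = red ? g : 0;
        output[idx + 2] = red ? r : 0;
    }
    ''', 'highlight')

    h, w = 480, 640
    frame = cp.asarray(np.random.randint(0, 256, (h, w, 3), dtype=np.uint8))
    out = cp.zeros_like(frame)

    block = (16, 16)
    grid = ((w + block[0] - 1) // block[0], (h + block[1] - 1) // block[1])
    highlight(grid, block, (frame, out, np.int32(w), np.int32(h), np.int32(frame.strides[0])))

    result = cp.asnumpy(out)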

Normalizing while keeping the value of 'dst' as an empty array

Submitted by 我是研究僧i on 2021-01-29 07:55:50
Question: I was trying to normalize a simple numpy array a as follows:

    a = np.ones((3,3))
    cv2.normalize(a)

On running this, OpenCV throws an error saying TypeError: Required argument 'dst' (pos 2) not found. So I passed the dst argument, as also mentioned in the documentation. Here is how I did it:

    b = np.asarray([])
    cv2.normalize(a, b)

This call returns the normalized array, but the value of b is still empty. Why is that? On the other hand, if I try the following:

    b = np.copy(a)
    cv2.normalize(a, b)
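The behaviour follows from how the Python bindings handle output arguments: the result is always returned, and the array passed as dst is only filled in place when it already has a compatible shape and type (an empty array cannot be resized in place). A minimal sketch illustrating both cases:

    import cv2
    import numpy as np

    a = np.ones((3, 3))

    # Capture the return value; dst=None lets OpenCV allocate the output.
    b = cv2.normalize(a, None)
    print(b)              # default L2 normalization: every element becomes 1/3

    # A dst with matching shape and dtype is also filled in place.
    c = np.zeros_like(a)
    cv2.normalize(a, c)
    print(c)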

OpenCV - native Android integration

Submitted by 风格不统一 on 2021-01-29 07:50:18
Question: I have integrated this project, https://github.com/yaylas/AndroidFaceRecognizer, into my own app in Android Studio. I included OpenCV using this tutorial: https://www.youtube.com/watch?v=OTw_GIQNbD8 (this is static initialization). I created a jni folder in src/main and put these files, https://github.com/yaylas/AndroidFaceRecognizer/tree/master/jni, into it. This is the Android.mk from that folder:

    LOCAL_PATH := $(call my-dir)
    include $(CLEAR_VARS)
    OPENCV_CAMERA_MODULES:=on
    OPENCV_INSTALL