OpenCV

cv2.imdecode() returns None from image in base64, mimetype image/jpeg received via Websockets

Submitted by 天大地大妈咪最大 on 2021-01-29 17:50:25
Question: I use websockets to receive video frames. The image is base64-encoded with mimetype image/jpeg. I'm trying to convert the image to an np.ndarray. When I read an image file, the code works correctly, but when I read the image from the socket stream, the issue occurs.

    image_data = base64.b64decode(part.encoded_image)
    np_array = np.frombuffer(image_data, np.uint8)
    image = cv2.imdecode(np_array, cv2.IMREAD_UNCHANGED)

According to the docs, cv2.imdecode() returns None when the image buffer is too short or corrupted. My image is in HD.

OpenCV-(-215:Assertion failed) _src.total() > 0 in function 'cv::warpPerspective'

Submitted by 余生颓废 on 2021-01-29 15:35:57
Question: My full code:

    import cv2 as cv
    import numpy as np
    cap = cv.VideoCapture(0 + cv.CAP_DSHOW)
    imgTarget = cv.imread('photos\TargetImage.jpg')  # this is our image
    myVid = cv.VideoCapture('photos\video.mp4')
    detection = False
    frameCounter = 0
    imgVideo = cap.read()
    success, imgVideo = myVid.read()
    hT, wT, cT = imgTarget.shape  # here we get the height and width of our image
    '''while myVid.isOpened():
        success, Video = myVid.read()
        if success:
            Video = cv.resize(Video, (wT, hT))'''

How to merge nearby Lines in HoughlineP opencv

Submitted by 十年热恋 on 2021-01-29 14:58:18
Question: I am trying to detect crop rows using segmentation and HoughLines, and I am modifying a script from GitHub. A merging function was applied after the HoughLines step to merge lines that are close to each other based on distance, but I don't quite understand the reason for it. From what I can tell, multiple lines were detected for an individual crop row even after varying the HoughLines parameters, so merging the lines was a way to refine the result of the HoughLines step. def
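The merging idea can be sketched independently of the GitHub script. Assuming lines in the (rho, theta) form that cv2.HoughLines returns (HoughLinesP segments would need endpoint distances instead, so this is a simplification), lines whose rho and theta both fall within a tolerance are averaged into one, leaving a single line per crop row (merge_close_lines and its tolerances are hypothetical):

```python
import numpy as np


def merge_close_lines(lines, rho_tol=20.0, theta_tol=np.pi / 36):
    # lines: sequence of (rho, theta) pairs, e.g. line[0] for each entry
    # of a cv2.HoughLines result. Nearby lines are grouped greedily and
    # each group is replaced by its mean, collapsing the duplicate
    # detections HoughLines produces for one physical row.
    groups = []
    for rho, theta in lines:
        for group in groups:
            g_rho, g_theta = np.mean(group, axis=0)
            if abs(rho - g_rho) < rho_tol and abs(theta - g_theta) < theta_tol:
                group.append((rho, theta))
                break
        else:
            groups.append([(rho, theta)])
    return [tuple(np.mean(g, axis=0)) for g in groups]
```

This is why tuning the HoughLines threshold alone rarely helps: the accumulator genuinely has several near-identical peaks per row, so post-hoc merging is the simpler fix.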

Manage a text detector which is very sensitive to lighting conditions

Submitted by 旧巷老猫 on 2021-01-29 14:56:46
Question: I am using a text detector called CRAFT (you can check it out on GitHub) which does a good job on most images I have used, but I have noticed that the detection is very sensitive to lighting conditions. To illustrate this, see this image: Text detected with CRAFT. I am interested in detecting the code part, which is FBIU0301487. However, the character 'F' cannot be detected even with a threshold of zero, i.e. letting every bounding box be considered a valid detection.

EmguCV show Mat in Image tag C# WPF

Submitted by 半城伤御伤魂 on 2021-01-29 14:50:05
Question: Is there a way to show a Mat object inside a WPF image tag in C#?

    Mat image = CvInvoke.Imread(op.FileName, Emgu.CV.CvEnum.ImreadModes.AnyColor);

Also, is there a way to draw on the image inside the canvas/image tag, instead of in the new window that Imshow opens?

    CvInvoke.Imshow("ponaredek", slika);

And lastly, for a beginner, which is better to use: EmguCV or regular OpenCV?

Answer 1: If you want to show the image that you read in inside of your WPF app, then you could use an image source, which is found

No module named 'cv2' error in new environment

Submitted by 折月煮酒 on 2021-01-29 14:16:21
Question: I have tried to rectify this issue by using

    pip install opencv-python
    pip install opencv-contrib-python
    pip uninstall panda
    pip install panda
    conda install opencv-python

Some info: I'm currently using Python 3.6.10 and Windows 10.

    opencv-python 4.2.0.32
    numpy 1.18.1
    panda 0.3.1
    tensorflow-gpu 1.14.0

I created a new env but can't seem to import cv2 in Jupyter Notebook. My earlier environment was able to do so. When I tried to pip install opencv-python==4.1.2.30 (from the old
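The usual cause of "installed but can't import" in a new environment is that the 'pip' on PATH belongs to a different interpreter than the Jupyter kernel. Running pip through the kernel's own interpreter removes the ambiguity. A minimal sketch (pip_install_command is a hypothetical helper; it builds the command rather than running it):

```python
import sys

# sys.executable is the interpreter the current kernel actually runs;
# "<interpreter> -m pip install ..." installs into that exact environment
# instead of whichever 'pip' happens to be first on PATH.
print(sys.executable)


def pip_install_command(package: str):
    # Returns the argv to pass to subprocess.check_call (or to run in a
    # notebook via the %pip magic, which does the same binding).
    return [sys.executable, "-m", "pip", "install", package]
```

As an aside, note that the PyPI package for the pandas library is named pandas; "panda" is an unrelated package, which may explain part of the confusion above.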

What could make undistortion code not work for several chessboard images having the same dimensions?

Submitted by 柔情痞子 on 2021-01-29 14:01:42
Question: I am a beginner in OpenCV-Python. I would like to know what could make my undistortion code work for this chessboard picture and not work for this one, knowing that the two chessboards have the same dimensions. The code I am talking about is below:

    import cv2
    import numpy as np
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
    cbrow = 7
    cbcol = 9
    objp = np.zeros((cbrow * cbcol, 3), np.float32)
    objp[:, :2] = np.mgrid[0:cbcol, 0:cbrow].T.reshape(-1, 2)
    objpoints = []

In opencv using houghlines prints only one line

Submitted by 廉价感情. on 2021-01-29 13:55:58
Question: I started following some tutorials on OpenCV and, working with HoughLines, noticed that whatever image I give it, it returns only one line! I use OpenCV 4.2.0, and my code is:

    import cv2
    import numpy as np
    image = cv2.imread("sudoku.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 170, apertureSize=3)
    cv2.imshow(" lines", edges)
    cv2.waitKey()
    cv2.destroyAllWindows()
    lines = cv2.HoughLines(edges, 1, np.pi/180, 240)
    for rho, theta in lines[0]:
        a = np.cos(theta)
        b = np.sin

OpenCV Brisk algorithm working with other templates but not with this one

Submitted by 青春壹個敷衍的年華 on 2021-01-29 13:31:27
Question: I've been told that template matching doesn't ignore the background and doesn't use alpha-channel information. I decided to try the BRISK algorithm, which has scale invariance and rotation invariance. This is the code I'm using:

    public boolean runBRISK(String filename1, String filename2) {
        BRISK detectorAndExtractor = BRISK.create();
        final MatOfKeyPoint keyPointsLarge = new MatOfKeyPoint();
        final MatOfKeyPoint keyPointsSmall = new MatOfKeyPoint();
        Mat largeImage = Imgcodecs.imread(filename1,

finding edge in tilted image with Canny

Submitted by 我们两清 on 2021-01-29 13:25:02
Question: I'm trying to find the tilt angle in a series of images which look like the created example data below. There should be a clear edge visible by eye, but so far I'm struggling to extract it. Is Canny the right way to find the edge here, or is there a better way?

    import cv2 as cv
    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.ndimage.filters import gaussian_filter

    # create data
    xvals = np.arange(0, 2000)
    yvals = 10000 * np.exp((xvals -