scikit-image

ImportError: cannot import name _fblas on Mac

Submitted by 这一生的挚爱 on 2019-12-13 01:54:06
Question: I am fairly new to Python and I am trying to use scikit-image, but as soon as I try to import it, I get errors. I have gotten past a few of them, but I am stuck on this one when I try to import:

In [8]: from skimage import data, io, filter
---------------------------------------------------------------------------
ImportError                               Traceback (most recent call last)
<ipython-input-8-7863d911e191> in <module>()
----> 1 from skimage import data, io, filter
/Library/Frameworks/Python.framework/Versions/2.7
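A quick way to narrow this down, sketched under the assumption that the missing _fblas symbol comes from SciPy's BLAS bindings (which scikit-image imports indirectly): try importing it from SciPy directly. If this raises the same ImportError, the SciPy/NumPy installation is broken and reinstalling those packages for this Python usually resolves it.

```python
# Hedged check: _fblas is SciPy's f2py-generated BLAS wrapper module.
# If this import fails too, the problem is the SciPy build, not scikit-image.
from scipy.linalg import _fblas
print(_fblas.__file__)
```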

scikit-image save image to a bytestring

Submitted by 懵懂的女人 on 2019-12-12 17:18:36
Question: I'm using scikit-image to read an image: img = skimage.io.imread(filename). After doing some manipulations to img, I'd like to save it to an in-memory file (a la StringIO) to pass off to another function, but it looks like skimage.io.imsave requires a filename, not a file handle. I'd like to avoid hitting the disk (imsave followed by a read from another imaging library) if at all possible. Is there a nice way to get imsave (or some other scikit-image-friendly function) to work with StringIO?
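One workaround, sketched here under the assumption that Pillow is available (scikit-image images are plain NumPy arrays, so any encoder that writes to a file object will do): encode the array into an in-memory buffer and hand the resulting bytes to the next function.

```python
from io import BytesIO

import numpy as np
from PIL import Image
from skimage import img_as_ubyte

img = np.random.rand(64, 64, 3)          # stand-in for the manipulated skimage image
buf = BytesIO()
Image.fromarray(img_as_ubyte(img)).save(buf, format="PNG")   # encode PNG into memory
png_bytes = buf.getvalue()                # bytestring to pass along, no disk involved
```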

Fast way to convert RGB to LAB in Python

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-12 16:25:05
Question: Is there a quick way to convert RGB to LAB in Python 3 using a D50 sRGB white point? python-colormath is too slow, and skimage uses D65.

Answer 1: For now, the white reference in skimage cannot be passed as a parameter (pull request welcome), but here is a workaround:

import numpy as np
from skimage import color

color.colorconv.lab_ref_white = np.array([0.96422, 1.0, 0.82521])
lab = color.rgb2lab(image)

Answer 2: Stefan van der Walt's answer was correct at the time, but for anyone who still has the same question and finds this page: as of scikit
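For readers on a current release, a minimal sketch assuming a scikit-image version that exposes the illuminant argument of rgb2lab (added after this question was asked), which avoids patching the module-level white reference:

```python
import numpy as np
from skimage import color

rgb = np.random.rand(8, 8, 3)               # stand-in image with values in [0, 1]
lab = color.rgb2lab(rgb, illuminant="D50")  # D50 white point instead of the default D65
```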

Image read through skimage.io.imread has a suspicious shape

Submitted by 风流意气都作罢 on 2019-12-12 15:19:09
Question: I am trying to read an RGB image using skimage.io.imread, but after reading the image I found that its shape is wrong: print(img.shape) shows that the image shape is (2,). The complete code to show the problem is:

from skimage import io
img = io.imread(path/to/the/image)
print(img.shape)

I also tried to read the image using OpenCV's Python package, and the returned shape is correct (height*width*3). The skimage version used is 0.12.3; can someone explain is there anything wrong with
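One common cause, offered here as an assumption since the actual file is not shown: the file contains more than one image (for example an embedded thumbnail or a second frame), so imread returns an object array holding each frame instead of a single height x width x 3 array. Inspecting the elements shows which entry is the full-resolution frame.

```python
from skimage import io

img = io.imread("multi_frame.jpg")   # hypothetical path to the problematic file
if img.dtype == object:              # (2,) object array: one entry per embedded image
    for i, frame in enumerate(img):
        print(i, frame.shape)
    img = img[0]                     # keep the full-resolution frame
print(img.shape)
```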

Sliding window in Python for GLCM calculation

Submitted by *爱你&永不变心* on 2019-12-12 09:57:23
Question: I am trying to do texture analysis on satellite imagery using the GLCM algorithm. The scikit-image documentation is very helpful on that, but for the GLCM calculation we need a window size looping over the image, and this is too slow in Python. I found many posts on Stack Overflow about sliding windows, but the computation takes forever. I have an example shown below; it works but takes forever. I guess this must be a naive way of doing it:

image = np.pad(image, int(win/2), mode='reflect')
row, cols =
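For reference, a completed version of the naive per-pixel loop (the helper name and parameters are illustrative, not from the question; greycomatrix/greycoprops are spelled graycomatrix/graycoprops in scikit-image >= 0.19). It still computes one GLCM per pixel, so the usual speed-ups are fewer grey levels, sampling on a coarser grid, or compiling the loop with numba or Cython.

```python
import numpy as np
from skimage.feature import greycomatrix, greycoprops

def glcm_contrast_map(image, win=7, levels=32):
    """Hypothetical helper: per-pixel GLCM contrast over a win x win window."""
    img = np.uint8(image / image.max() * (levels - 1))   # quantise to `levels` grey levels
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros(image.shape, dtype=float)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            patch = padded[r:r + win, c:c + win]
            glcm = greycomatrix(patch, distances=[1], angles=[0],
                                levels=levels, symmetric=True, normed=True)
            out[r, c] = greycoprops(glcm, "contrast")[0, 0]
    return out
```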

What is a good way to get a similarity measure of two images that contain a line chart?

Submitted by 天涯浪子 on 2019-12-12 09:12:44
Question: I have tried the dHash algorithm, which is applied to each image; then a Hamming distance is calculated on both hashes, and the lower the number, the higher the similarity.

from PIL import Image
import os
import shutil
import glob
from plotData import *

def hamming_distance(s1, s2):
    # Return the Hamming distance between equal-length sequences
    if len(s1) != len(s2):
        raise ValueError("Undefined for sequences of unequal length")
    return sum(ch1 != ch2 for ch1, ch2 in zip(s1, s2))

def dhash(image, hash
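Since the dhash definition above is cut off, here is a self-contained sketch of a typical difference hash; the hash_size parameter and the exact bit layout are assumptions for illustration, not the questioner's original code.

```python
from PIL import Image

def dhash(image, hash_size=8):
    """Difference hash: compare horizontally adjacent pixels of a small grayscale copy."""
    small = image.convert("L").resize((hash_size + 1, hash_size), Image.LANCZOS)
    pixels = list(small.getdata())
    bits = []
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits.append("1" if left > right else "0")
    return "".join(bits)

# Usage with the hamming_distance function from the question:
# distance = hamming_distance(dhash(Image.open("a.png")), dhash(Image.open("b.png")))
```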

Why does Python raise a runtime error when I run numpy.percentile for equalization with scikit-image?

Submitted by 让人想犯罪 __ on 2019-12-12 06:38:05
Question: I took the equalization code from here:

import numpy as np
from skimage import morphology
from skimage import color
from skimage import io
from matplotlib import pyplot as plt
from skimage import data, img_as_float
from skimage import exposure

img = color.rgb2gray(io.imread(path))

# Contrast stretching
p2 = np.percentile(img, 2)
p98 = np.percentile(img, 98)
#img_rescale = exposure.rescale_intensity(img, in_range=(p2, p98))
img_rescale = exposure.rescale_intensity(img, out_range=(0, 255))
#
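The question is cut off before the actual traceback, but for reference here is the standard contrast-stretching pattern from the scikit-image exposure example, sketched with a synthetic image so it runs standalone; a common reason for np.percentile to warn or raise at runtime (an assumption, since the failing input is not shown) is an empty or NaN-containing array.

```python
import numpy as np
from skimage import exposure

img = np.random.rand(64, 64)                      # stand-in for color.rgb2gray(io.imread(path))
assert img.size > 0 and not np.isnan(img).any()   # np.percentile misbehaves on empty/NaN input
p2, p98 = np.percentile(img, (2, 98))
img_rescale = exposure.rescale_intensity(img, in_range=(p2, p98))
```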

Python - Clustering Hough lines

Submitted by ぃ、小莉子 on 2019-12-12 04:50:55
Question: I am working to cluster probabilistic Hough lines together using unit vectors. The clustering changes on every run, though, and is not quite right. I want to cluster the lines of [this image][2], but I am getting this clustering and it changes drastically on every run. I know the probabilistic Hough transform changes slightly on every run, but I would like to keep the merged lines fairly consistent. Is the problem with the way I am calculating the unit vector, or with DBSCAN, or is there a better way to do clustering
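One way to make angle-based clustering stable (a sketch, not the questioner's code; the eps value and the toy segments are assumptions): map each segment's angle to a point on the unit circle at twice the angle, so that a line and its reversed direction land on the same feature point, then cluster those points with DBSCAN.

```python
import numpy as np
from sklearn.cluster import DBSCAN

# Segments as ((x0, y0), (x1, y1)), e.g. from skimage.transform.probabilistic_hough_line.
lines = [((0, 0), (10, 1)), ((5, 5), (15, 6)), ((0, 0), (1, 10))]   # toy data

angles = np.array([np.arctan2(y1 - y0, x1 - x0) for (x0, y0), (x1, y1) in lines])
# Doubling the angle makes theta and theta + pi map to the same point,
# so the arbitrary direction of each segment no longer affects the clustering.
features = np.column_stack([np.cos(2 * angles), np.sin(2 * angles)])
labels = DBSCAN(eps=0.2, min_samples=1).fit_predict(features)
print(labels)   # e.g. [0 0 1]: the two near-horizontal segments share a cluster
```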

Extracting specific objects from an image

Submitted by 牧云@^-^@ on 2019-12-11 19:03:47
Question: Given a dataset of the object, I would like to extract that object from an image; the object is a leaf in my case. It is easy in situations where there is only one big leaf in front of the camera: this can be done using the edge-detected version of the picture, as suggested in this answer, since we get a fairly clear edge of what we want as output. For reference:

But how can I extract all the leaves from an image in which there are many such leaves? For example: for
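A rough starting point for the many-leaves case, sketched under the assumption that the leaves are the predominantly green regions (the thresholds and sizes below are illustrative, not tuned to the questioner's images): segment by colour, clean up with morphology, then label the connected components to get one mask per leaf.

```python
from skimage import color, measure, morphology

def extract_leaf_masks(rgb):
    """Return one boolean mask per green connected component (illustrative thresholds)."""
    hsv = color.rgb2hsv(rgb)
    green = (hsv[..., 0] > 0.15) & (hsv[..., 0] < 0.45) & (hsv[..., 1] > 0.3)
    green = morphology.remove_small_objects(green, min_size=200)   # drop speckles
    green = morphology.binary_closing(green, morphology.disk(3))   # close small gaps
    labels = measure.label(green)
    return [labels == region.label for region in measure.regionprops(labels)]
```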

KeyError: <class 'numpy.object_'> while downloading an image dataset using imread

Submitted by 岁酱吖の on 2019-12-11 15:10:43
Question: I am trying to download images from URLs using imread. After downloading about 700 images, I see KeyError: <class 'numpy.object_'>. I am really not familiar with the NumPy and Conda libraries; any help would be appreciated.

for i in range(len(classes)):
    if not os.path.exists(saved_dirs[i]):
        os.mkdir(saved_dirs[i])
    saved_dir = saved_dirs[i]
    for url in urls[i]:
        # print(url)
        img = io.imread(url)
        saved_path = os.path.join(saved_dir, url[-20:])
        io.imsave(saved_path, img)

Trigger URL: https://requestor-proxy.figure
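A defensive pattern that keeps the download loop alive (a sketch; the error itself usually means one URL returned something imread could not decode into a normal numeric array, which is an assumption since the failing response is not shown): validate each download and skip the bad ones instead of letting a single response abort the run.

```python
import numpy as np
from skimage import io

def fetch_image(url):
    """Return the decoded image, or None if the URL yields something unusable."""
    try:
        img = io.imread(url)
    except Exception as exc:                      # network errors, decode errors, KeyError, ...
        print(f"skipping {url}: {exc}")
        return None
    if not isinstance(img, np.ndarray) or img.dtype == object:
        print(f"skipping {url}: unexpected payload")
        return None
    return img

# In the question's loop:
# img = fetch_image(url)
# if img is not None:
#     io.imsave(saved_path, img)
```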