OpenCV will not load a big image (~4GB)

Submitted by 北城以北 on 2020-01-03 08:27:12

Question


I'm working on a program that is to detect colored ground control points in a rather large image. The TIFF image is some 3 to 4 GB (about 35,000 x 33,000 px). I am using Python 2 and OpenCV to do the image processing.

import cv2
img = 'ortho.tif'
I = cv2.imread(img, cv2.IMREAD_COLOR)

This part does not (always) produce an error message, but showing the image does:

cv2.imshow('image', I)

I have also tried showing the image by using matplotlib:

plt.imshow(I[:, :, ::-1])  # Hack to change BGR to RGB

Is there any limitation in OpenCV or Python regarding large images? What would you suggest to get this image loaded?

PS: The computer I do this work on is a Windows 10 "workstation" (it has enough horsepower to deal with the image).

In advance, thanks for your help :)


Answer 1:


The implementation of imread():

Mat imread( const string& filename, int flags )
{
    Mat img;
    imread_( filename, flags, LOAD_MAT, &img );
    return img;
}

This allocates the matrix for the loaded image as one contiguous array, so success depends (at least partly) on your hardware: your machine must be able to allocate a contiguous 4 GB block of RAM (if you're on a Debian distro, you can check your RAM size by running, for example, vmstat -s -SM).
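A quick back-of-envelope calculation, using the dimensions from the question, shows how much contiguous memory a single uint8 BGR array for this image would need:

```python
# Bytes needed to hold the image as one contiguous uint8 BGR array
# (3 channels), which is how cv2.imread would allocate it.
width, height, channels = 35_000, 33_000, 3  # dimensions from the question
required_bytes = width * height * channels
required_gb = required_bytes / 1024**3
print(f"{required_gb:.2f} GB")  # roughly 3.2 GB of contiguous RAM
```

And that is before counting any intermediate copies made during processing, which can easily double the footprint.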

Out of curiosity, I tried to get a contiguous memory array (a big one, but smaller than the one your 4 GB image requires) using ascontiguousarray, but I stumbled on a memory allocation problem even before that:

>>> img = numpy.zeros(shape=(35000,35000))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
MemoryError
>>>

In practice, even if you have enough RAM, it is not a good idea to manipulate the pixels of a 4 GB image all at once; you will need to split it anyway into regions of interest, smaller areas, and maybe separate channels too, depending on the operations you want to perform on the pixels.
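A minimal sketch of such region-of-interest splitting, using plain NumPy slicing (the `iter_tiles` helper and the tile size are illustrative, not part of any library):

```python
import numpy as np

def iter_tiles(img, tile=5000):
    """Yield (y, x, view) for non-overlapping tiles of a large array.
    The views are slices into the original array, so no copies are made."""
    h, w = img.shape[:2]
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            yield y, x, img[y:y + tile, x:x + tile]

# Demonstrated on a small stand-in array (the real image would be ~35000 x 33000):
img = np.zeros((10, 12), dtype=np.uint8)
tiles = list(iter_tiles(img, tile=5))
print(len(tiles))  # 2 rows x 3 columns of tiles -> 6
```

Because each tile is a view, any in-place operation on it (thresholding, marking detected control points, and so on) writes straight back into the full image.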

EDIT 1:

As I said in my comment below your answer: if you have 16 GB of RAM and you're able to read that image with scikit-image, there is no reason you cannot do the same with OpenCV.

Please give this a try:

import numpy as np  # do not forget to import NumPy
import cv2

img = cv2.imread('ortho.tif')

You forgot to import NumPy in your original code, and that is why OpenCV failed to load the image. All OpenCV array structures are converted to and from NumPy arrays, and the images you read are represented by OpenCV as arrays in memory.

EDIT 2:

OpenCV can deal with images up to 10 GB in size, but that is only true for the cv2.imwrite() function. For cv2.imread(), the maximum size of the image it can read is much smaller: this is a bug reported in September 2013 (Issue3258 #1438) which is, AFAIK, still not fixed.




Answer 2:


It turns out that scikit-image came to the rescue, which I found out from here.

The following let me load the image into a python session:

import numpy as np
from skimage.io import imread

img = imread(path_to_file)

It took about half a minute or so to load.
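One caveat worth noting: scikit-image returns images in RGB channel order, while OpenCV conventionally works in BGR. If you go on to feed the loaded array to OpenCV routines, reversing the channel axis first avoids swapped colors (sketched here on a tiny dummy array rather than the real ortho.tif):

```python
import numpy as np

# skimage.io.imread returns an RGB array; OpenCV functions expect BGR.
rgb = np.zeros((2, 2, 3), dtype=np.uint8)
rgb[..., 0] = 255  # pure red in RGB order

bgr = rgb[:, :, ::-1]  # reverse the channel axis: RGB -> BGR, no copy
print(bgr[0, 0])  # red now sits in the last (R) channel: [0, 0, 255]
```

This is the same channel-reversal trick the question already used for matplotlib, just applied in the opposite direction.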



Source: https://stackoverflow.com/questions/35666761/opencv-will-not-load-a-big-image-4gb
