I would like to be able to draw lines into numpy arrays to get off-line features for on-line handwriting recognition. This means I don't need the image at all.
I've found the val * 255 approach in the answer suboptimal, because it seems to work correctly only on a black background. If the background contains both darker and brighter regions, the result does not look quite right:
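For reference, here is roughly what that approach boils down to (a minimal sketch on a single-channel image; the exact code in the original answer may differ):

import numpy as np
from skimage.draw import line_aa

img = np.zeros((100, 100), dtype=np.uint8)   # black background
img[30:70, 40:90] = 255                      # white rectangle
rows, cols, weights = line_aa(10, 10, 90, 90)
img[rows, cols] = weights * 255              # overwrites whatever was under the line,
                                             # so low-weight edge pixels turn dark
                                             # even where the background is white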
To make it work correctly on all backgrounds, one has to take into account the colors of the pixels that are covered by the anti-aliased line.
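Concretely, each covered pixel gets alpha-blended with the line color, using its anti-aliasing weight w as the blending factor:

new_pixel = (1 - w) * background_pixel + w * line_color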
Here is a little demo that builds on the original answer:
from skimage.draw import line_aa
import imageio
import numpy as np

img = np.zeros((100, 100, 4), dtype=np.uint8)  # create an RGBA image
img[:, :, 3] = 255                             # set alpha to fully opaque
img[30:70, 40:90, 0:3] = 255                   # paint a white rectangle

rows, cols, weights = line_aa(10, 10, 90, 90)  # anti-aliased line from (10, 10) to (90, 90)
w = weights.reshape([-1, 1])                   # reshape anti-aliasing weights into a column
lineColorRgb = [255, 120, 50]                  # color of the line, orange here
img[rows, cols, 0:3] = (
    (1 - w) * img[rows, cols, 0:3] +           # keep a (1 - w) share of the background color
    w * np.array([lineColorRgb])               # and add a w share of the line color
)

imageio.imwrite('test.png', img)               # scipy.misc.imsave was removed from recent SciPy
The interesting part is
(1 - w) * img[rows, cols, 0:3] + w * np.array([lineColorRgb])
where the new color is computed from the original color of the image and the color of the line by linear interpolation, with the anti-aliasing weights as interpolation factors. Here is the result, an orange line running over the two kinds of background:
Now the pixels that surround the line in the upper half become darker, whereas the pixels in the lower half become brighter.
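To see why, plug some numbers into the interpolation: at an edge pixel with weight w = 0.5, a white background blends as 0.5 * (255, 255, 255) + 0.5 * (255, 120, 50) = (255, 187.5, 152.5), which is darker than the surrounding white, while a black background blends as 0.5 * (0, 0, 0) + 0.5 * (255, 120, 50) = (127.5, 60, 25), which is brighter than the surrounding black (both get truncated to integers when written back into the uint8 array).

If you draw many strokes, as in a whole handwriting trace, the blending step can be wrapped in a small helper. This is just an illustrative sketch; draw_line_blended is not part of skimage:

def draw_line_blended(img, r0, c0, r1, c1, color):
    # draw an anti-aliased line into an RGB(A) uint8 image,
    # blending it with whatever is already in the image
    rows, cols, weights = line_aa(r0, c0, r1, c1)
    w = weights.reshape([-1, 1])
    blended = (1 - w) * img[rows, cols, 0:3] + w * np.array([color])
    img[rows, cols, 0:3] = blended.astype(np.uint8)

# for example, two connected strokes of an orange polyline
draw_line_blended(img, 10, 10, 50, 90, [255, 120, 50])
draw_line_blended(img, 50, 90, 90, 10, [255, 120, 50])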