Question
I am using a convolution matrix in my Android app to emboss images.
I have defined the class for it as:
import android.graphics.Bitmap;
import android.graphics.Color;

public class ConvolutionMatrix {
    public static final int SIZE = 3;

    public double[][] Matrix;
    public double Factor = 1;
    public double Offset = 1;

    public ConvolutionMatrix(int size) {
        Matrix = new double[size][size];
    }

    public void setAll(double value) {
        for (int x = 0; x < SIZE; ++x) {
            for (int y = 0; y < SIZE; ++y) {
                Matrix[x][y] = value;
            }
        }
    }

    public void applyConfig(double[][] config) {
        for (int x = 0; x < SIZE; ++x) {
            for (int y = 0; y < SIZE; ++y) {
                Matrix[x][y] = config[x][y];
            }
        }
    }

    public static Bitmap computeConvolution3x3(Bitmap src,
            ConvolutionMatrix matrix) {
        int width = src.getWidth();
        int height = src.getHeight();
        Bitmap result = Bitmap.createBitmap(width, height, src.getConfig());

        int A, R, G, B;
        int sumR, sumG, sumB;
        int[][] pixels = new int[SIZE][SIZE];

        for (int y = 0; y < height - 2; ++y) {
            for (int x = 0; x < width - 2; ++x) {
                // get pixel matrix
                for (int i = 0; i < SIZE; ++i) {
                    for (int j = 0; j < SIZE; ++j) {
                        pixels[i][j] = src.getPixel(x + i, y + j);
                    }
                }
                // get alpha of center pixel
                A = Color.alpha(pixels[1][1]);
                // init color sum
                sumR = sumG = sumB = 0;
                // get sum of RGB on matrix
                for (int i = 0; i < SIZE; ++i) {
                    for (int j = 0; j < SIZE; ++j) {
                        sumR += (Color.red(pixels[i][j]) * matrix.Matrix[i][j]);
                        sumG += (Color.green(pixels[i][j]) * matrix.Matrix[i][j]);
                        sumB += (Color.blue(pixels[i][j]) * matrix.Matrix[i][j]);
                    }
                }
                // get final Red
                R = (int) (sumR / matrix.Factor + matrix.Offset);
                if (R < 0) {
                    R = 0;
                } else if (R > 255) {
                    R = 255;
                }
                // get final Green
                G = (int) (sumG / matrix.Factor + matrix.Offset);
                if (G < 0) {
                    G = 0;
                } else if (G > 255) {
                    G = 255;
                }
                // get final Blue
                B = (int) (sumB / matrix.Factor + matrix.Offset);
                if (B < 0) {
                    B = 0;
                } else if (B > 255) {
                    B = 255;
                }
                // apply new pixel
                result.setPixel(x + 1, y + 1, Color.argb(A, R, G, B));
            }
        }
        // final image
        return result;
    }
}
It gives me the proper result, but it takes too much time to compute. Is there any way to make the calculation faster and more efficient?
Answer 1:
Take a look at the Convolution Demo app, which compares a convolution implementation done in Java against one in C++. Needless to say, the C++ variant runs more than 10x faster.
So if you want speed, implement it either via the NDK or via shaders.
Answer 2:
The core of your slowdown is:
// apply new pixel
result.setPixel(x + 1, y + 1, Color.argb(A, R, G, B));
That's a significant amount of work each iteration; setting pixels one at a time is not free in the Bitmap class. It's far better to call getPixels() once, manipulate the raw pixel array directly, and then write it back with setPixels() a single time when you're done.
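A minimal sketch of that pattern: on Android you would pull the pixels out once with `Bitmap.getPixels()`, run the convolution over the `int[]`, and push the result back with `Bitmap.setPixels()`. The helper below (`convolve3x3` is an illustrative name, not from the original code) needs no Android classes at all, since ARGB ints can be unpacked with bit shifts instead of `Color.red()`/`green()`/`blue()`:

```java
// On Android, the surrounding calls would look like:
//   int[] pixels = new int[width * height];
//   src.getPixels(pixels, 0, width, 0, 0, width, height);
//   int[] out = Convolve.convolve3x3(pixels, width, height, kernel, factor, offset);
//   result.setPixels(out, 0, width, 0, 0, width, height);
public final class Convolve {
    static int clamp(int v) { return v < 0 ? 0 : (v > 255 ? 255 : v); }

    public static int[] convolve3x3(int[] in, int w, int h,
                                    double[][] k, double factor, double off) {
        int[] out = in.clone(); // border pixels keep their original value
        for (int y = 1; y < h - 1; y++) {
            for (int x = 1; x < w - 1; x++) {
                double sr = 0, sg = 0, sb = 0;
                for (int j = -1; j <= 1; j++) {
                    for (int i = -1; i <= 1; i++) {
                        int p = in[(y + j) * w + (x + i)];
                        double kv = k[j + 1][i + 1];
                        sr += ((p >> 16) & 0xFF) * kv;
                        sg += ((p >> 8) & 0xFF) * kv;
                        sb += (p & 0xFF) * kv;
                    }
                }
                int a = in[y * w + x] & 0xFF000000; // keep center pixel's alpha
                out[y * w + x] = a
                        | (clamp((int) (sr / factor + off)) << 16)
                        | (clamp((int) (sg / factor + off)) << 8)
                        | clamp((int) (sb / factor + off));
            }
        }
        return out;
    }
}
```

This replaces width × height `getPixel()`/`setPixel()` calls with two bulk copies, which is where most of the time goes.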
You could also hardcode the emboss (most of the time you're grabbing a bunch of data and multiplying it by zero with that kernel), so you can easily cheat and grab only the three pixels you care about:
private static int hardEmboss(int[] pixels, int stride, int index, int[][] matrix, int parts) {
    // ignoring the matrix: only three of its taps are nonzero
    int p1 = pixels[index];
    int p2 = pixels[index + stride + 1];
    int p3 = pixels[index + stride + stride + 2];
    int r = 2 * ((p1 >> 16) & 0xFF) - ((p2 >> 16) & 0xFF) - ((p3 >> 16) & 0xFF);
    int g = 2 * ((p1 >> 8) & 0xFF) - ((p2 >> 8) & 0xFF) - ((p3 >> 8) & 0xFF);
    int b = 2 * ((p1) & 0xFF) - ((p2) & 0xFF) - ((p3) & 0xFF);
    return 0xFF000000 | ((crimp(r) << 16) | (crimp(g) << 8) | (crimp(b)));
}
This assumes your emboss kernel is:
int[][] matrix = new int[][]{
    { 2,  0,  0},
    { 0, -1,  0},
    { 0,  0, -1}
};
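To see why the shortcut is valid, here is a small plain-Java check (no Android classes; `full` and `shortcut` are illustrative names) that the three-pixel version produces the same result as a full 3x3 convolution with that sparse kernel:

```java
public final class EmbossCheck {
    static int crimp(int v) { return v > 255 ? 255 : (v < 0 ? 0 : v); }

    // Full convolution over a 3x3 patch (row-major int[9] of ARGB pixels).
    static int full(int[] patch, int[][] k) {
        int r = 0, g = 0, b = 0;
        for (int j = 0; j < 3; j++) {
            for (int i = 0; i < 3; i++) {
                int p = patch[j * 3 + i];
                r += ((p >> 16) & 0xFF) * k[j][i];
                g += ((p >> 8) & 0xFF) * k[j][i];
                b += (p & 0xFF) * k[j][i];
            }
        }
        return 0xFF000000 | (crimp(r) << 16) | (crimp(g) << 8) | crimp(b);
    }

    // Shortcut: only the three nonzero taps (top-left, center, bottom-right).
    static int shortcut(int[] patch) {
        int p1 = patch[0], p2 = patch[4], p3 = patch[8];
        int r = 2 * ((p1 >> 16) & 0xFF) - ((p2 >> 16) & 0xFF) - ((p3 >> 16) & 0xFF);
        int g = 2 * ((p1 >> 8) & 0xFF) - ((p2 >> 8) & 0xFF) - ((p3 >> 8) & 0xFF);
        int b = 2 * (p1 & 0xFF) - (p2 & 0xFF) - (p3 & 0xFF);
        return 0xFF000000 | (crimp(r) << 16) | (crimp(g) << 8) | crimp(b);
    }
}
```

Six of the nine multiply-adds per channel vanish, so the hardcoded version does a third of the arithmetic and none of the kernel lookups.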
Also, a point most people miss: the standard convolution algorithm's habit of writing the result back to the center pixel is what forces a second buffer. If you write the result to the upper-left corner instead, you can process all the data in the same memory footprint, going left to right, top to bottom, as a scanline operation.
public static int crimp(int v) { return (v > 255) ? 255 : ((v < 0) ? 0 : v); }

public static void applyEmboss(int[] pixels, int stride) {
    // stride should equal the bitmap width here,
    // and pixels.length == bitmap.height * bitmap.width
    int pos = 0;
    try {
        while (true) {
            int p1 = pixels[pos];
            int p2 = pixels[pos + stride + 1];
            int p3 = pixels[pos + stride + stride + 2];
            int r = 2 * ((p1 >> 16) & 0xFF) - ((p2 >> 16) & 0xFF) - ((p3 >> 16) & 0xFF);
            int g = 2 * ((p1 >> 8) & 0xFF) - ((p2 >> 8) & 0xFF) - ((p3 >> 8) & 0xFF);
            int b = 2 * ((p1) & 0xFF) - ((p2) & 0xFF) - ((p3) & 0xFF);
            pixels[pos++] = 0xFF000000 | ((crimp(r) << 16) | (crimp(g) << 8) | (crimp(b)));
        }
    } catch (ArrayIndexOutOfBoundsException e) {
        // running off the end of the array terminates the loop
    }
}
The disadvantage is that the image appears to shift left and up by one pixel; if you do another scanline pass backwards, you can shift it back. All the garbage ends up as two rows and columns along the right and bottom edges (some of it filled with embossed nonsense, because the loop doesn't slow down to check for those places). If you want to cut that off, reduce the width and height by 2 when you write the pixels back, and leave the stride at the original width. Since all the good data ends up at the top-left, you don't have to fiddle with the offset at all.
Alternatively, just use RenderScript.
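The stride trick above works because `Bitmap.setPixels(pixels, offset, stride, x, y, width, height)` accepts a stride larger than the width being written. A plain-Java illustration of reading such a cropped region out of a strided array (`crop` is a hypothetical helper, not Android API):

```java
public final class StridedCrop {
    // Copy the top-left w x h block out of a pixel array whose rows
    // are laid out with the given (possibly larger) stride.
    public static int[] crop(int[] pixels, int stride, int w, int h) {
        int[] out = new int[w * h];
        for (int y = 0; y < h; y++) {
            // each source row starts at y * stride, each dest row at y * w
            System.arraycopy(pixels, y * stride, out, y * w, w);
        }
        return out;
    }
}
```

On Android you would skip the copy entirely and just call `result.setPixels(pixels, 0, width, 0, 0, width - 2, height - 2)`.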
Source: https://stackoverflow.com/questions/14291524/how-to-optimize-convolution-matrix-in-android