alpha blending

How to do alpha compositing with a list of RGBA data in numpy arrays?

Posted by 谁都会走 on 2021-01-01 07:33:48
Question: Following this formula for alpha blending two color values, I wish to apply this to n numpy arrays of RGBA image data (though the expected use case will, in practice, have a very low upper bound on the number of arrays, probably < 5). In context, this process will be constrained to arrays of identical shape. I could in theory achieve this through iteration, but expect that this would be computationally intensive and terribly inefficient. What is the most efficient way to apply a function between two …
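A vectorized answer is to express the standard "over" operator once in NumPy and fold it across the list, so the only Python-level loop runs over the handful of layers, never over pixels. A minimal sketch, assuming straight (non-premultiplied) alpha and a bottom-first layer order; the function names are mine, not from the question:

    import numpy as np
    from functools import reduce

    def over(bottom, top):
        # Composite 'top' over 'bottom'; both HxWx4 floats, straight alpha in 0..1.
        ta = top[..., 3:4]                          # kept 3-D so it broadcasts over RGB
        ba = bottom[..., 3:4]
        out_a = ta + ba * (1.0 - ta)
        safe_a = np.where(out_a == 0, 1.0, out_a)   # avoid 0/0 in fully empty pixels
        out_rgb = (top[..., :3] * ta + bottom[..., :3] * ba * (1.0 - ta)) / safe_a
        return np.dstack((out_rgb, out_a))

    def composite(layers):
        # Fold the pairwise operator across n uint8 RGBA arrays, bottom layer first.
        as_float = [layer.astype(np.float64) / 255.0 for layer in layers]
        return (reduce(over, as_float) * 255.0 + 0.5).astype(np.uint8)

Only the pairwise step is vectorized; the reduce over n layers is a length-n Python loop, which is negligible for n < 5.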

Handling alpha channel in WPF pixel shader effect

Posted by 一曲冷凌霜 on 2020-12-29 05:53:27
Question: Is there something unusual about how the alpha component is handled in a pixel shader? I have a WPF application for which my artist is giving me grayscale images to use as backgrounds, and the application colorizes those images according to the current state. So I wrote a pixel shader (using the WPF Pixel Shader Effects Library infrastructure) to use as an effect on an Image element. The shader takes a color as a parameter, which it converts to HSL so it can manipulate brightness. Then for …
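One thing that is genuinely unusual, and a plausible culprit here, is that WPF hands pixel shaders premultiplied alpha: the sampled RGB has already been multiplied by A, so HSL/brightness math on the raw channels is skewed wherever A < 1. The usual pattern is to un-premultiply before the color math and re-premultiply afterward. A sketch of that bracketing in Python/NumPy rather than HLSL, purely to show the arithmetic (the colorize callback is hypothetical):

    import numpy as np

    def unpremultiply(rgba):
        # Premultiplied -> straight alpha; rgba is an HxWx4 float array in 0..1.
        a = rgba[..., 3:4]
        out = rgba.copy()
        out[..., :3] = rgba[..., :3] / np.where(a == 0, 1.0, a)
        return out

    def premultiply(rgba):
        out = rgba.copy()
        out[..., :3] = rgba[..., :3] * rgba[..., 3:4]
        return out

    def apply_effect(rgba, colorize):
        # Bracket any per-channel color manipulation between the two conversions.
        straight = unpremultiply(rgba)
        straight[..., :3] = colorize(straight[..., :3])
        return premultiply(straight)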

Want transparent image even after blending

Posted by 情到浓时终转凉″ on 2020-01-20 09:35:07
Question: I am trying to blend two images as shown here. This is my whole code:

    #include <cv.h>
    #include <highgui.h>
    #include <iostream>
    using namespace cv;

    int main( int argc, char** argv )
    {
        double beta;
        double input;
        Mat src1, src2, dst;

        /// Ask the user to enter alpha
        std::cout << " Simple Linear Blender " << std::endl;
        std::cout << "-----------------------" << std::endl;

        src1 = imread("face.jpg");
        src2 = imread("necklace1.png");

        if( !src1.data ) { printf("Error loading src1 \n"); return -1; }
        if( !src2.data ) …
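Note that addWeighted-style linear blending uses one global weight for every pixel and ignores the PNG's alpha channel entirely; keeping the necklace transparent requires loading the fourth channel and using it as a per-pixel weight. A minimal sketch of that approach using the Python cv2 bindings (file names reused from the question; resizing the overlay to the background is my assumption):

    import cv2
    import numpy as np

    face = cv2.imread("face.jpg")                                  # 3-channel BGR
    necklace = cv2.imread("necklace1.png", cv2.IMREAD_UNCHANGED)   # 4-channel BGRA

    # Assumes the overlay should simply cover the whole background.
    necklace = cv2.resize(necklace, (face.shape[1], face.shape[0]))

    alpha = necklace[..., 3:4].astype(np.float64) / 255.0          # per-pixel weight
    blended = necklace[..., :3] * alpha + face * (1.0 - alpha)
    cv2.imwrite("result.png", (blended + 0.5).astype(np.uint8))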

OpenGL ES (iPhone) alpha blending looks weird

Posted by 拈花ヽ惹草 on 2020-01-20 02:27:26
Question: I'm writing a game for iPhone in OpenGL ES, and I'm experiencing a problem with alpha blending: I'm using glBlendFunc(Gl.GL_SRC_ALPHA, Gl.GL_ONE_MINUS_SRC_ALPHA) to achieve alpha blending and trying to compose a scene with several "layers" so I can move them separately instead of having a static image. I created a preview in Photoshop and then tried to achieve the same result on the iPhone, but a black halo is shown when I blend a texture with semi-transparent regions. I attached an image. In …
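A black halo around semi-transparent edges is the classic symptom of texture filtering mixing in the RGB (often black) of fully transparent texels. A common fix is to premultiply the texture's RGB by its alpha at load time and blend with (GL_ONE, GL_ONE_MINUS_SRC_ALPHA) instead. A sketch of the load-time step in Python/NumPy, the texture upload itself being platform-specific:

    import numpy as np

    def premultiply_texture(rgba_u8):
        # Scale RGB by alpha so filtered texels never leak hidden colors.
        pixels = rgba_u8.astype(np.float64) / 255.0
        pixels[..., :3] *= pixels[..., 3:4]
        return (pixels * 255.0 + 0.5).astype(np.uint8)

    # With premultiplied textures, blend with
    #     glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)
    # rather than (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA).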

How do you do both blending and transparency using just pure VCL on bitmapped images?

Posted by 做~自己de王妃 on 2020-01-16 04:31:38
Question: Similar to this question, I would like to blend both colors and bitmaps (PNG or BMP, but in my case I'm using a PNG) while preserving transparency. As with the linked question, I would like to (a) not use third-party libraries, (b) use VCL built-in techniques where possible, with recourse to the Win32 GDI APIs where needed, and (c) not use GDI+. With this simple code, based on the code in the linked question, I see that the color blending works, but the PNG file's transparency is not preserved …
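For reference, the "blend colors" half of this is a plain per-channel linear interpolation; the hard part in VCL is only carrying the PNG's per-pixel alpha through to the destination instead of discarding it. A sketch of the color math alone, in Python for neutrality since the VCL-specific plumbing is not shown in full here:

    def blend_colors(c1, c2, t):
        # Per-channel lerp of two (r, g, b) tuples; t=0 -> c1, t=1 -> c2.
        return tuple(round(a * (1.0 - t) + b * t) for a, b in zip(c1, c2))

    print(blend_colors((255, 0, 0), (0, 0, 255), 0.5))  # (128, 0, 128)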

How does alpha blending work, mathematically, pixel-by-pixel?

Posted by 流过昼夜 on 2020-01-10 05:54:48
Question: It seems like it's not as simple as RGB1*A1 + RGB2*A2… How are values clipped? Weighted? Etc. And is this a context-dependent question? Are there different algorithms that produce different results, or one standard implementation? I'm particularly interested in OpenGL-specific answers, but context from other environments is useful too.

Answer 1: I don't know about OpenGL, but one pixel of opacity A is usually drawn on another pixel like so: result.r = background.r * (1 - A) + foreground.r * A …
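To complete that formula for the general case: when the background is itself partially transparent, the result's alpha must be computed too, and the color terms normalized by it. A worked single-pixel version in Python, assuming straight (non-premultiplied) alpha:

    def over_pixel(fg, bg):
        # Porter-Duff "over"; fg and bg are (r, g, b, a) tuples with a in 0..1.
        fr, fgreen, fb, fa = fg
        br, bgreen, bb, ba = bg
        out_a = fa + ba * (1.0 - fa)
        if out_a == 0.0:
            return (0.0, 0.0, 0.0, 0.0)     # both inputs fully transparent
        mix = lambda f, b: (f * fa + b * ba * (1.0 - fa)) / out_a
        return (mix(fr, br), mix(fgreen, bgreen), mix(fb, bb), out_a)

Against an opaque background (ba = 1) this collapses to the answer's result = background * (1 - A) + foreground * A; no clipping is needed because the weights always sum to 1.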

How do color and textures work together?

Posted by 你离开我真会死。 on 2020-01-07 00:57:08
Question: I must be asking a very basic question. I've just learnt how to apply textures. Basically, I have a scene (a plane) and a cube on it. I apply a texture to one of the faces of the cube. The face of the cube I am trying to apply the texture to is red, but I want the texture color to override it; yet they somehow blend together, although I have not enabled blending, nor is the texture image transparent! Here's my texture (.png). And here's the rendering. Here are some relevant parts of my code ( …
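In the fixed-function pipeline this is usually not blending at all: the default texture environment is GL_MODULATE, which multiplies each texel by the current face color, so a red face tints the texture. Assuming legacy OpenGL (the question's code is not fully shown), either of the following, sketched with PyOpenGL inside a current GL context, stops the tinting:

    from OpenGL.GL import (glTexEnvi, glColor3f, GL_TEXTURE_ENV,
                           GL_TEXTURE_ENV_MODE, GL_REPLACE)

    # Option 1: have the texture unit ignore the incoming face color entirely.
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE)

    # Option 2: keep GL_MODULATE but draw the textured face in white, so the
    # multiplication leaves the texel color unchanged.
    glColor3f(1.0, 1.0, 1.0)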

LibGDX texture blending with OpenGL blending function

Posted by 扶醉桌前 on 2020-01-01 18:59:12
Question: In libGDX, I'm trying to create a shaped texture: take a fully-visible rectangle texture and mask it to obtain a shaped texture, as shown here. Here I test it on a rectangle, but I will want to use it on any shape. I have looked into this tutorial and came up with the idea to first draw the texture, and then the mask, with the blending function: batch.setBlendFunction(GL20.GL_ZERO, GL20.GL_SRC_ALPHA); GL20.GL_ZERO - because I really don't want to paint any pixels from the mask; GL20.GL_SRC_ALPHA - from …
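What glBlendFunc(GL_ZERO, GL_SRC_ALPHA) computes per pixel is result = src * 0 + dst * src_alpha: the mask contributes none of its own color, and the already-drawn texture is simply scaled by the mask's alpha. A NumPy sketch of the same arithmetic, handy for sanity-checking a mask offline before wiring it into the batch:

    import numpy as np

    def emulate_zero_src_alpha(dst_rgba, mask_rgba):
        # result = dst * mask_alpha, matching glBlendFunc(GL_ZERO, GL_SRC_ALPHA).
        mask_a = mask_rgba[..., 3:4].astype(np.float64) / 255.0
        return (dst_rgba * mask_a).astype(np.uint8)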

OpenCV (Emgu.CV) — compositing images with alpha

Posted by 早过忘川 on 2019-12-30 03:20:07
Question: I'm using Emgu.CV to perform some basic image manipulation and composition. My images are loaded as Image<Bgra,Byte>. Question #1: When I use the Image<,>.Add() method, the images are always blended together, regardless of the alpha value. Instead I'd like them to be composited one atop the other, using the included alpha channel to determine how the images should be blended. So if I call image1.Add(image2), any fully opaque pixels in image2 would completely cover the pixels from image1 …
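The behavior described is exactly the Porter-Duff "over" operator: fully opaque foreground pixels replace the background, fully transparent ones leave it alone, and partial alpha interpolates, whereas Add() saturate-adds the channels. A minimal NumPy sketch of the desired arithmetic for two HxWx4 BGRA byte arrays (exact when the background is opaque, the common case when compositing onto a base image):

    import numpy as np

    def over_bgra(fg, bg):
        # Opaque fg pixels cover bg; transparent ones show bg through.
        fa = fg[..., 3:4].astype(np.float64) / 255.0
        out_rgb = fg[..., :3] * fa + bg[..., :3] * (1.0 - fa)
        out_a = fg[..., 3:4] + bg[..., 3:4] * (1.0 - fa)
        return np.dstack((out_rgb, out_a)).astype(np.uint8)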