shader

About converting YUV(YV12) to RGB with GLSL for iOS

落爺英雄遲暮 submitted on 2019-12-11 13:32:00
Question: I'm trying to convert YUV (YV12) to RGB with a GLSL shader, in the following steps: read raw YUV (YV12) data from an image file; filter Y, Cb, and Cr out of the raw data; map each plane to a texture; send them to the fragment shader. But the resulting image is not the same as the raw data. The first image below is the raw data: screenshot of raw image link (available for download). The second image is the converted data: screenshot of convert image link (available for download). And below is my source code: - (void) readYUVFile { ... NSData* fileData = [NSData
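Two things commonly go wrong with this conversion: the plane order (in YV12 the V/Cr plane comes *before* the U/Cb plane, and swapping them produces wrong colors) and the conversion matrix itself. A minimal sketch of the BT.601 arithmetic the fragment shader would perform, written here as plain C++ for clarity (the coefficients are the standard BT.601 full-range ones; adapt if the source is BT.709 or video-range):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// BT.601 YCbCr -> RGB, the same arithmetic a fragment shader would do.
// All channels are normalized to [0,1]; Cb/Cr are re-centered around 0.5.
struct RGB { float r, g, b; };

RGB yuvToRgb(float y, float cb, float cr) {
    float u = cb - 0.5f;   // chroma offsets around neutral gray
    float v = cr - 0.5f;
    RGB c;
    c.r = std::clamp(y + 1.402f * v,              0.0f, 1.0f);
    c.g = std::clamp(y - 0.344f * u - 0.714f * v, 0.0f, 1.0f);
    c.b = std::clamp(y + 1.772f * u,              0.0f, 1.0f);
    return c;
}
```

A useful sanity check is that neutral chroma (Cb = Cr = 0.5) must yield a pure gray equal to Y; if it doesn't, the planes are being sampled in the wrong order.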

iOS Metal Shader - Texture read and write access?

那年仲夏 submitted on 2019-12-11 12:40:26
Question: I'm using a Metal shader to draw many particles onto the screen. Each particle has its own position (which can change), and often two particles have the same position. How can I check whether the texture2d I write into already has a pixel at a certain position? (I want to make sure that I only draw a particle at a position if no particle has been drawn there yet, because I get ugly flickering when many particles are drawn at the same position.) I've tried outTexture.read
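Reading back the same texture a pass is writing is generally undefined, so one common workaround is a separate per-pixel occupancy buffer: a particle is written only if it claims its cell first (in a Metal compute kernel this flag would typically be set with an atomic exchange). A CPU-side sketch of the idea, with illustrative names:

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// CPU sketch of the "occupancy buffer" idea: alongside the pixel data, keep
// one flag per pixel and draw a particle only where no particle has landed
// yet. On the GPU the flag buffer would be claimed atomically, since many
// threads may target the same cell in the same pass.
struct Canvas {
    int w, h;
    std::vector<int> occupied;        // 0 = empty, 1 = already drawn
    std::vector<uint32_t> pixels;     // RGBA color per pixel
    Canvas(int w_, int h_) : w(w_), h(h_), occupied(w_ * h_, 0), pixels(w_ * h_, 0) {}

    // Returns true if the particle was drawn, false if the cell was taken.
    bool drawParticle(int x, int y, uint32_t color) {
        int idx = y * w + x;
        if (occupied[idx]) return false;  // another particle claimed this pixel
        occupied[idx] = 1;
        pixels[idx] = color;
        return true;
    }
};
```

The buffer is cleared once per frame, so each pixel holds at most one particle per frame, which removes the same-position flicker.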

Back face culling for linestrips

邮差的信 submitted on 2019-12-11 10:58:20
Question: I have a circle in 3D space (red in the image) with normals (white). The circle is drawn as a line strip. The problem: I need to draw only those pixels whose normals are directed toward the camera (the angle between the normal and the camera vector is < 90°), using discard in the fragment shader code. Like back-face culling, but for lines. The red part of the circle is what I need to draw, and the black part is what I need to discard in the fragment shader. A good example is the 3ds Max rotation gizmo, where the back sides of the lines are hidden. So, in fragment
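The "angle < 90°" test is simply a sign check on a dot product: the fragment is kept when dot(normal, viewDir) > 0, where viewDir points from the surface point toward the camera. A small sketch of exactly that test (the GLSL `discard` corresponds to returning false here):

```cpp
#include <cassert>

// The per-fragment culling test, reproduced on the CPU. A normal faces the
// camera when the angle between it and the to-camera vector is below 90
// degrees, i.e. their dot product is positive.
struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

bool frontFacing(Vec3 normal, Vec3 toCamera) {
    return dot(normal, toCamera) > 0.0f;  // false -> discard the fragment
}
```

In the shader the per-vertex normal is interpolated into the fragment stage as a varying, and the to-camera vector is either a uniform (orthographic) or cameraPos minus the interpolated world position (perspective).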

Problems compiling shader source within osg application

岁酱吖の submitted on 2019-12-11 10:55:09
Question: I have an OSG application in which I want to texture-map a full-screen quad in the finalDrawCallback, because I need everything in my scene to be rendered before the texturing is done. This is why I have to use OpenGL calls instead of OSG calls to execute the program and shaders. Specifically, I seem to have an issue compiling both the vertex and fragment shaders. When I call glGetShaderiv(shader, GL_COMPILE_STATUS, &param), my param value doesn't change or is undefined. Which,

Please tell VertexShader Error Solution

↘锁芯ラ submitted on 2019-12-11 10:18:42
Question: I'm a student using OpenGL. I don't speak English well, so please understand. There is currently a problem with this vertex shader code: #version 400 layout (location = 0) in vec3 VertexPosition; layout (location = 1) in vec3 VertexNormal; layout (location = 2) in mat4 instance_ModelMatrix [3]; The code above runs, but when I change layout (location = 2) in mat4 instance_ModelMatrix [3]; to layout (location = 2) in mat4 instance_ModelMatrix [4]; it fails. Attribute instance_ModelMatrix is a matrix
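A likely explanation: a mat4 vertex attribute consumes 4 consecutive attribute locations (one vec4 column each), and an array multiplies that by its length. Starting at location 2, a mat4[3] occupies locations 2–13, which fits the minimum of 16 vertex attribute locations OpenGL guarantees; a mat4[4] would need locations 2–17, which overflows a 16-location limit (GL_MAX_VERTEX_ATTRIBS is exactly 16 on many GPUs) and makes the shader fail. The arithmetic, as a sketch:

```cpp
#include <cassert>

// Each mat4 vertex attribute occupies 4 consecutive attribute locations
// (one vec4 per column); an array of mat4 multiplies that by its length.
int locationsUsedByMat4Array(int arrayLen) { return 4 * arrayLen; }

// Highest attribute location consumed by a mat4 array starting at `base`.
int lastLocation(int base, int arrayLen) {
    return base + locationsUsedByMat4Array(arrayLen) - 1;
}
```

With a 16-location limit, valid locations are 0–15; checking lastLocation against 15 before declaring the attribute shows why [3] fits and [4] does not.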

Embedding cg shaders in C++ GPGPU library

笑着哭i submitted on 2019-12-11 09:52:35
Question: I'm writing a GPGPU fluid simulation that runs on C++/OpenGL/Cg. At the moment, the library requires the user to specify a path to the shaders, which it will then read from. I find it extremely annoying to have to specify that in my own projects and testing, so I want the shader contents linked in with the rest. Ideally, my .cg files would still be browsable separately, but a post-build step or preprocessor directive would include them in the source when required. To
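One common approach is to embed the shader text in the binary as a C++11 raw string literal, generated from the .cg file by a build step (so the file stays browsable on disk and the header is regenerated whenever it changes). A sketch, with an illustrative shader and variable name:

```cpp
#include <cassert>
#include <cstring>

// Illustrative generated header contents: the .cg file's text embedded as a
// raw string literal. A build step (or tools like `xxd -i`) can regenerate
// this from the source file so it never goes stale.
static const char* kFluidShaderSource = R"cg(
float4 main(float2 uv : TEXCOORD0) : COLOR {
    return float4(uv, 0.0, 1.0);
}
)cg";
```

The library then hands kFluidShaderSource to the Cg runtime where it previously read a file, and the path parameter becomes optional.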

Making a shader take the vertex array as canvas instead of the window

☆樱花仙子☆ submitted on 2019-12-11 09:29:09
Question: I'm currently developing an open-source SFML-based GUI library, but I'm struggling a bit with shaders. So far I've managed to render the shader with no problem, but instead of applying the shader to the current object, it applies it to the whole window, just clipped to the object's bounds. Here's my simple code: #include <SFML/Window.hpp> #include <SFML/Graphics.hpp> #include <iostream> const char *glsl = R"( #version 330 core uniform vec2 u_resolution; void main() { vec2 pos = gl_FragCoord.xy / u
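The cause is that gl_FragCoord is a *window-space* coordinate, so dividing it by the window resolution shades in window space, clipped to the object. To shade relative to the object, the shader needs the object's position and size (passed as extra uniforms; the names below are illustrative, not SFML API) and must normalize the fragment coordinate against them. The fix in C++ form:

```cpp
#include <cassert>
#include <cmath>

// The coordinate remapping the fragment shader needs: convert a window-space
// fragment coordinate into a 0..1 UV local to the object, given the object's
// origin and size (which would be uploaded as uniforms in GLSL).
struct Vec2 { float x, y; };

Vec2 windowToLocalUV(Vec2 fragCoord, Vec2 objPos, Vec2 objSize) {
    return { (fragCoord.x - objPos.x) / objSize.x,
             (fragCoord.y - objPos.y) / objSize.y };
}
```

With this, a gradient or pattern computed from the UV is anchored to the object and moves with it, instead of being a fixed window-space pattern the object merely reveals. (Note that gl_FragCoord's y axis is bottom-up while SFML positions are top-down, so the y term may need flipping.)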

What's wrong with this shader for a centered zooming effect in Orthographic projection?

若如初见. submitted on 2019-12-11 09:10:22
Question: I've created a basic orthographic shader that displays sprites from textures, and it works great. I've added a "zoom" factor to it that lets the sprite scale larger or smaller. Assuming the texture is anchored with its origin in the lower left, what it currently does is shrink toward that origin point, or expand from it toward the upper right. What I actually want is to shrink or expand "in place", staying centered. So, one way of achieving that would be to figure out how many pixels I
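The standard way to scale about a point other than the origin is to translate that point to the origin, scale, and translate back: p' = center + (p − center) · zoom. The center itself is then a fixed point of the transform, which is exactly the "in place" behavior wanted. A sketch of the math:

```cpp
#include <cassert>
#include <cmath>

// Scale "in place" about the sprite's center: shift the center to the
// origin, scale, shift back. The center maps to itself for any zoom.
struct Vec2 { float x, y; };

Vec2 zoomAboutCenter(Vec2 p, Vec2 center, float zoom) {
    return { center.x + (p.x - center.x) * zoom,
             center.y + (p.y - center.y) * zoom };
}
```

In the shader this is usually folded into the vertex transform as translate(center) * scale(zoom) * translate(−center), applied before the orthographic projection.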

4 float color to one float in java, and back again in openGL ES 2.0 shader

倾然丶 夕夏残阳落幕 submitted on 2019-12-11 08:49:29
Question: I am trying to send one color with every vertex that goes into the shader, but in only one float value. I think it's weird that you cannot send 4 bytes as attributes with every vertex, but since it's not possible I am going to try to pack RGBA into a single float variable. This is my code. Java code (that packs the values into one float): private float fourfColor2One(float r, float g, float b, float a) { long temp = (byte) (r * 255); float res = temp << 24; temp = (byte) (g * 255); res += temp
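The usual trap with this trick is precision: a 32-bit float has only 24 bits of integer-exact range, so four full 8-bit channels (32 bits) cannot survive the round trip, while three channels (24 bits) can. A C++ sketch of the safe three-channel version (the same shifts the Java code attempts; alpha would need to travel separately or at reduced precision):

```cpp
#include <cassert>
#include <cstdint>

// Pack three 8-bit channels into one float exactly: the packed integer is at
// most 2^24 - 1, which a float's mantissa represents without loss. Packing a
// fourth full byte would exceed float precision and corrupt the low bits.
float packRGB(uint8_t r, uint8_t g, uint8_t b) {
    uint32_t bits = (uint32_t(r) << 16) | (uint32_t(g) << 8) | uint32_t(b);
    return float(bits);
}

void unpackRGB(float packed, uint8_t& r, uint8_t& g, uint8_t& b) {
    uint32_t bits = uint32_t(packed);
    r = uint8_t((bits >> 16) & 0xFF);
    g = uint8_t((bits >> 8) & 0xFF);
    b = uint8_t(bits & 0xFF);
}
```

In the GLES 2.0 shader the unpack is done with floor/mod arithmetic on the attribute. (The simpler alternative is a vec4 attribute fed with glVertexAttribPointer using GL_UNSIGNED_BYTE and normalized = GL_TRUE, which sends exactly 4 bytes per vertex.)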

glsl shader - color blend, normal mode (like in Photoshop)

孤街浪徒 submitted on 2019-12-11 08:35:40
Question: I'm trying to create the effect of blending two colors (actually an image and a color as an overlay over the image), like Photoshop's "Color Overlay" with "Normal" blending mode. I'm using libgdx. This is what I have so far: attribute vec4 a_position; attribute vec4 a_color; attribute vec2 a_texCoord0; uniform mat4 u_projTrans; varying vec4 v_color; varying vec2 v_texCoords; void main() { v_color = a_color; v_texCoords = a_texCoord0; gl_Position = u_projTrans * a_position; } And the fragment shader: #ifdef GL_ES #define LOWP
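Photoshop's "Normal" blend mode is a plain alpha mix: the overlay color replaces the base in proportion to the overlay's alpha, which is exactly GLSL's mix(base.rgb, overlay.rgb, overlay.a). The same arithmetic as a C++ sketch:

```cpp
#include <cassert>
#include <cmath>

// "Normal" blending: linear interpolation between the base color and the
// overlay color, weighted by the overlay's alpha (GLSL's mix()).
struct Color { float r, g, b, a; };

float mixf(float x, float y, float t) { return x + (y - x) * t; }

Color normalBlend(Color base, Color overlay) {
    return { mixf(base.r, overlay.r, overlay.a),
             mixf(base.g, overlay.g, overlay.a),
             mixf(base.b, overlay.b, overlay.a),
             base.a };  // base alpha is preserved
}
```

In the fragment shader the base is the texture sample and the overlay is a uniform color, so the body reduces to one line: gl_FragColor = vec4(mix(texColor.rgb, u_overlay.rgb, u_overlay.a), texColor.a).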