opengl-3

Shadow mapping: project world-space pixel to light-space

喜欢而已 submitted on 2019-12-11 16:15:29

Question: I'm writing shadow mapping in deferred shading. Here is my depth map for the directional light (orthographic projection): Below is my full-screen quad shader to render the pixel's depth in light view space:

```glsl
#version 330

in vec2 texCoord;
out vec3 fragColor;

uniform mat4 lightViewProjMat; // lightView * lightProj
uniform sampler2D sceneTexture;
uniform sampler2D shadowMapTexture;
uniform sampler2D scenePosTexture;

void main() {
    vec4 fragPos = texture(scenePosTexture, texCoord);
    vec4 fragPosLightSpace =
```
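The shader cuts off right where the light-space transform would happen. For an orthographic light the remaining math is a matrix multiply, a perspective divide (trivial here, since w stays 1), and a remap from [-1, 1] NDC to the [0, 1] range used for shadow-map lookups. A CPU sketch of that projection, with illustrative names that are not from the question:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Column-major storage, mirroring how OpenGL consumes matrices.
using Vec4 = std::array<float, 4>;
using Mat4 = std::array<float, 16>;

// mat4 * vec4, as GLSL evaluates it.
Vec4 mul(const Mat4& m, const Vec4& v) {
    Vec4 r{0.0f, 0.0f, 0.0f, 0.0f};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            r[row] += m[col * 4 + row] * v[col];
    return r;
}

// World-space position -> shadow-map coordinates:
// clip space -> NDC (divide by w) -> [0, 1] texture/depth range.
Vec4 toShadowCoords(const Mat4& lightViewProj, const Vec4& worldPos) {
    Vec4 clip = mul(lightViewProj, worldPos);
    Vec4 ndc{clip[0] / clip[3], clip[1] / clip[3], clip[2] / clip[3], 1.0f};
    return {ndc[0] * 0.5f + 0.5f, ndc[1] * 0.5f + 0.5f,
            ndc[2] * 0.5f + 0.5f, 1.0f};
}
```

The resulting xy is the shadow-map texture coordinate and z is the depth to compare against the stored depth (minus a small bias to avoid acne).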

Problems outputting gl_PrimitiveID to custom frame buffer object (FBO)

我们两清 submitted on 2019-12-11 06:23:47

Question: I have a very basic fragment shader from which I want to output `gl_PrimitiveID` into a framebuffer object (FBO) I have defined. Below is my fragment shader:

```glsl
#version 150

uniform vec4 colorConst;

out vec4 fragColor;
out uvec4 triID;

void main(void) {
    fragColor = colorConst;
    triID.r = uint(gl_PrimitiveID);
}
```

I set up my FBO like this:

```cpp
GLuint renderbufId0;
GLuint renderbufId1;
GLuint depthbufId;
GLuint framebufId;

// generate render and frame buffer objects
glGenRenderbuffers( 1,
```

GL_CULL_FACE makes all objects disappear

我的梦境 submitted on 2019-12-11 04:02:51

Question: I am trying to create some simple polygons in OpenGL 3.3. I have two types of objects with the following properties:

Object 1: 10 vertices (listed below, in order) stored in a GL_ARRAY_BUFFER and drawn with GL_TRIANGLE_FAN:

```
v     x      y     z    w
v    0.0    0.0   1.0  1.0
v    0.0    1.0   0.1  1.0
v    0.71   0.71  0.1  1.0
v    1.0    0.0   0.1  1.0
v    0.71  -0.71  0.1  1.0
v    0.0   -1.0   0.1  1.0
v   -0.71  -0.71  0.1  1.0
v   -1.0    0.0   0.1  1.0
v   -0.71   0.71  0.1  1.0
v    0.0    1.0   0.1  1.0
```

Object 2: 4 vertices (listed below, in order) stored in a GL_ARRAY_BUFFER and use
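A classic reason GL_CULL_FACE makes everything vanish is winding order: by default OpenGL treats counter-clockwise triangles as front faces (glFrontFace(GL_CCW)) and culls back faces (glCullFace(GL_BACK)). A signed-area check, sketched here on the CPU, shows the first triangle of the fan above, (0,0), (0,1), (0.71,0.71), winds clockwise, so under the default state the whole fan would be culled:

```cpp
#include <cassert>

// 2D signed area of triangle (v0, v1, v2): positive means the vertices
// wind counter-clockwise, which is OpenGL's default front face (GL_CCW);
// negative means clockwise, which glCullFace(GL_BACK) discards.
float signedArea(float x0, float y0, float x1, float y1,
                 float x2, float y2) {
    return 0.5f * ((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0));
}
```

The usual fixes are to list the vertices in counter-clockwise order or to call glFrontFace(GL_CW).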

Why does this GLSL shader work fine with a GeForce but flickers strangely on an Intel HD 4000?

天涯浪子 submitted on 2019-12-10 23:49:18

Question: Using the OpenGL 3.3 core profile, I'm rendering a full-screen "quad" (as a single oversized triangle) via `gl.DrawArrays(gl.TRIANGLES, 0, 3)` with the following shaders.

Vertex shader:

```glsl
#version 330 core
#line 1

vec4 vx_Quad_gl_Position () {
    const float extent = 3;
    const vec2 pos[3] = vec2[](vec2(-1, -1), vec2(extent, -1), vec2(-1, extent));
    return vec4(pos[gl_VertexID], 0, 1);
}

void main () {
    gl_Position = vx_Quad_gl_Position();
}
```

Fragment shader:

```glsl
#version 330 core
#line 1

out vec3 out_Color;
```
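The triangle math itself is sound: the oversized triangle (-1,-1), (3,-1), (-1,3) covers the whole [-1, 1] NDC square, which a barycentric containment check confirms, so the flicker points at the driver rather than the geometry (Intel HD drivers of that era were often reported to mishandle constructs like indexing a `const` array with gl_VertexID; dropping the `const` qualifier is a commonly suggested workaround). A CPU sketch of the coverage check:

```cpp
#include <cassert>

// True if (px, py) lies inside the oversized full-screen triangle,
// tested via barycentric coordinates.
bool inTriangle(float px, float py) {
    const float ax = -1, ay = -1, bx = 3, by = -1, cx = -1, cy = 3;
    float d = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy);
    float u = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / d;
    float v = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / d;
    float w = 1.0f - u - v;
    return u >= 0.0f && v >= 0.0f && w >= 0.0f;
}
```

All four corners of the NDC square pass this test, so every visible fragment is covered by exactly one triangle (which is the point of the single-triangle trick: no diagonal seam).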

Using OpenGL 3.2 with Derelict3 and GLFW 3 on OSX Lion

£可爱£侵袭症+ submitted on 2019-12-10 19:23:45

Question: I'm having trouble getting OpenGL 3.2 to run on Lion (OS X 10.7.4) using Derelict3 and GLFW 3. Here's my test program:

```d
module glfw3Test;

import std.stdio, std.conv;
import derelict.glfw3.glfw3;
import derelict.opengl3.gl3;

string programName = "glfw3Test";
int width = 640;
int height = 480;
GLFWwindow window;

void main() {
    // load opengl
    DerelictGL3.load();

    // load GLFW
    DerelictGLFW3.load();

    if (!glfwInit()) {
        glfwTerminate();
        throw new Exception("Failed to create glcontext");
    }

    writefln("GLFW:
```
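On OS X, a core-profile context is only handed out when it is explicitly requested before window creation; otherwise the system falls back to a legacy 2.1 context and 3.2 features fail. Assuming GLFW 3's C API (these are GLFW 3 hint names, shown here as a configuration sketch rather than a runnable program, since context creation needs a display):

```cpp
// Request a 3.2 forward-compatible core context *before* glfwCreateWindow;
// OS X refuses to create a 3.2 context unless all four hints are set.
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
```

With Derelict, DerelictGL3.reload() also has to be called after the context is made current so the post-1.1 function pointers get bound.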

How many users could run software that uses OpenGL 3.x?

偶尔善良 submitted on 2019-12-10 18:22:14

Question: Can I expect users to be able to run software that uses OpenGL 3.x? Can Linux users on open-source graphics drivers run OpenGL 3.x? I know that Mesa3D 7.8 only supports OpenGL 2.1, and that OS X Snow Leopard supports some but not all OpenGL 3.0 features; I don't know the situation on Leopard, or on XP, Vista, and Windows 7. I'd like to start learning OpenGL, and my interest lies more in scientific and engineering applications than in games. I know I'll be

Mesa + Linux : gl.h does not contain modern OpenGL

大兔子大兔子 submitted on 2019-12-10 18:13:52

Question: This is the environment I currently use: Eclipse Luna with C++11 on Linux Mint "Rebecca". When I try to use modern OpenGL, such as VAOs or VBOs, I get compiler errors saying the functions cannot be resolved. For example:

```cpp
GLuint VaoID; // GLuint is working
glGenVertexArrays(1, &VaoID);
```

or:

```cpp
GLuint VboID;
glGenBuffers(1, &VboID);
glBindBuffer(GL_ARRAY_BUFFER, VboID);
glBufferData(GL_ARRAY_BUFFER, vbo_size, data, usage);
```

I checked GL/gl.h and GL/glext.h and noticed that I have only got OpenGL 1.x
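Mesa's GL/gl.h only declares the legacy 1.x entry points; everything newer is exported as an extension and has to be resolved at runtime. The usual fix is an extension-loader library rather than editing headers. With GLEW, for example (an assumption here, not something the question uses; it must be installed and linked with -lGLEW -lGL), the shape of the fix is:

```cpp
#include <GL/glew.h>  // must come before any other GL header

// After a GL context has been created and made current:
//   glewExperimental = GL_TRUE;            // needed for core profiles
//   if (glewInit() != GLEW_OK) { /* handle the error */ }
// From then on glGenVertexArrays, glGenBuffers, etc. are resolved.
```

glad or glLoadGen work the same way; the key point is that the modern functions are function pointers filled in at runtime, not symbols from gl.h.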

How is explicit multisampling different from regular multisampling in OpenGL

廉价感情. submitted on 2019-12-10 16:47:15

Question: I was reading this tutorial on MSAA in deferred shading from 28byteslater.com. It says that with explicit multisampling we can access a particular sample. Couldn't we do the same from a regular texture bound to GL_TEXTURE_2D_MULTISAMPLE, for instance? This is the shader code I used earlier for accessing individual samples (without using explicit multisampling):

```glsl
uniform sampler2DMS Diffuse;

ivec2 Texcoord = ivec2(textureSize(Diffuse) * In.Texcoord);
vec4 colorFirstSample =
```
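The snippet cuts off just before the fetch, which would read sample i as texelFetch(Diffuse, Texcoord, i); sampler2DMS with texelFetch is itself the per-sample ("explicit") access path from ARB_texture_multisample, so the question's earlier code appears to already be doing explicit multisampling. A manual resolve then just averages those samples; the per-texel math can be sketched on the CPU as:

```cpp
#include <cassert>
#include <vector>

// CPU analogue of resolving one texel of a multisample texture: average
// the per-sample values, as a shader would by looping
// texelFetch(Diffuse, Texcoord, i) over i = 0 .. numSamples-1.
float resolveSamples(const std::vector<float>& samples) {
    float sum = 0.0f;
    for (float s : samples) sum += s;
    return sum / static_cast<float>(samples.size());
}
```

For deferred shading the interesting option is skipping the average and shading only the samples that actually differ at geometry edges.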

Why do I get an sRGB framebuffer only when I'm setting a non-zero alpha size in SDL2?

匆匆过客 submitted on 2019-12-10 14:26:25

Question: I'm trying to render the typical OpenGL color triangle in a gamma-correct way, following this guide and consulting the SDL2 documentation on how to enable sRGB support on the default framebuffer. This is the code I've written, which draws the triangle:

```cpp
#include <SDL.h>
// Header file generated with glLoadGen
#include "gl_core_3_3.h"
#include <cstdlib>

void sdl_loop(SDL_Window* window);

static const char* const vertexSource = R"(
    #version 330
    in vec2 position;
    in vec3 color;
    out vec3 vs
```
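For context, an sRGB-capable framebuffer with GL_FRAMEBUFFER_SRGB enabled applies the standard linear-to-sRGB encoding on every write, which is what makes the output gamma correct. The transfer function (taken from the sRGB specification, not from the question's code) is:

```cpp
#include <cassert>
#include <cmath>

// Linear -> sRGB encoding, as applied on write by GL_FRAMEBUFFER_SRGB:
// a linear segment near black, then a 2.4-exponent power curve.
float linearToSrgb(float c) {
    return (c <= 0.0031308f)
               ? 12.92f * c
               : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}
```

As for the alpha-size dependence: a plausible explanation is that it changes which pixel formats the driver enumerates during context creation, and only some of those formats are flagged as sRGB-capable, but that is driver behavior rather than anything the sRGB spec requires.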

Using different texture types in same texture unit at the same time in shader

╄→гoц情女王★ submitted on 2019-12-10 13:12:04

Question: I came across a nasty problem in my program when I tried to use the same texture unit (number 0) for different texture types (i.e. a normal 2D texture and a cube map) in my shader. It appears that GL issues a 0x502 (GL_INVALID_OPERATION) error after the first glDrawArrays call. In my application code I load the textures into different texture targets:

```cpp
void setup_textures() {
    unsigned int width, height;
    int components;
    unsigned int format;
    float param[8];
    vector<unsigned char> pngData;
    GLenum
```
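The 0x502 is expected here: the GL spec makes a draw call invalid when two active samplers of different types (e.g. a sampler2D and a samplerCube) reference the same texture unit, even though binding a 2D texture and a cube map to one unit's different targets is itself legal. That draw-time rule can be sketched as a small validation function (names and structure are illustrative, not GL API):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// One active sampler uniform: which texture unit it points at and its
// GLSL type, e.g. "sampler2D" or "samplerCube".
struct SamplerBinding {
    std::string name;  // uniform name (illustrative)
    int unit;          // value set via glUniform1i
    std::string type;  // GLSL sampler type
};

// Mirrors the draw-time rule: two active samplers of different types on
// the same unit make the draw fail with GL_INVALID_OPERATION (0x502).
bool samplersValid(const std::vector<SamplerBinding>& samplers) {
    std::map<int, std::string> unitType;
    for (const auto& s : samplers) {
        auto it = unitType.find(s.unit);
        if (it != unitType.end() && it->second != s.type) return false;
        unitType.emplace(s.unit, s.type);
    }
    return true;
}
```

The fix is simply to give each sampler its own unit: bind the cube map with glActiveTexture(GL_TEXTURE1) and set its sampler uniform to 1.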