shader

LibGDX: how to use a shader in 3D

Submitted by 点点圈 on 2019-11-29 00:36:42
I have reached a point in my WIP game where I want to make it more eye-appealing. Currently I add some AmbientLight and a DirectionalLight to an Environment and render my scene with it. But now I want to add a custom Shader to it. So I have been looking for some tutorials, and for some reason almost every tutorial uses a different "version" of using a Shader in their game: giving the ModelBatch a String or FileHandle vertex/fragment shader; creating a ShaderProgram with a vertex and fragment Shader; creating a new DefaultShader with this vertex and fragment Shader; creating a class, which

glGetAttribLocation returns -1 when retrieving existing shader attribute

Submitted by 这一生的挚爱 on 2019-11-28 23:34:16
I'm trying to pass attributes to my vertex shader, but for some reason it keeps giving me -1 for the third attribute location I ask OpenGL to retrieve via glGetAttribLocation(). Currently it keeps giving me -1 for the texCoord attribute, and if I switch texAttrib and colAttrib around (switching the lines in code) it gives me -1 for the color attribute instead of the texture one, and I have no idea why. Since -1 is being passed to glVertexAttribPointer I get the 1281 OpenGL error: GL_INVALID_VALUE. My vertex shader: #version 150 in vec3 position; in vec3 color; in vec2 texcoord; out
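
A minimal sketch of the usual explanation for this symptom, assuming a plain OpenGL/GLEW setup: glGetAttribLocation returns -1 for any attribute the GLSL linker has optimized away (for example a texcoord that never contributes to the fragment shader's output) or whose name is misspelled, and that -1 must not be forwarded to glVertexAttribPointer. The helper name setupAttribute and the program handle are placeholders, not code from the question:

    #include <GL/glew.h>
    #include <cstdio>

    // Look up an attribute and set up its array, skipping it if it is not active.
    void setupAttribute(GLuint shaderProgram, const char* name,
                        GLint size, GLsizei stride, const void* offset)
    {
        GLint location = glGetAttribLocation(shaderProgram, name);
        if (location == -1) {
            // Do not pass -1 on to glVertexAttribPointer (that is what raises
            // GL_INVALID_VALUE): either the name is misspelled or the attribute
            // was optimized out because it does not affect the shader's output.
            std::fprintf(stderr, "attribute '%s' is not active\n", name);
            return;
        }
        glEnableVertexAttribArray(static_cast<GLuint>(location));
        glVertexAttribPointer(static_cast<GLuint>(location), size, GL_FLOAT,
                              GL_FALSE, stride, offset);
    }

Alternatively, binding the locations yourself (glBindAttribLocation before linking, or layout(location = N) where GL 3.3 is available) removes the dependence on what the compiler decides to keep.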

Can anyone explain what this GLSL fragment shader is doing?

Submitted by 匆匆过客 on 2019-11-28 21:58:57
I realise this is a math-centric question, but... if you look at this webpage (and have a good graphics card) http://mrdoob.github.com/three.js/examples/webgl_shader.html and look at the source, you'll notice a scary-looking fragment shader. I'm not looking for a detailed explanation, just an idea of the sort of thing that's happening, or a pointer to information on what exactly is happening here. I'm not after a guide to GLSL, but info on the maths. I realise this might be better suited to the Math StackExchange site but thought I'd try here first... <script id="fragmentShader" type="x-shader

Shader position vec4 or vec3

Submitted by 梦想的初衷 on 2019-11-28 21:27:20
I have read some tutorials about GLSL. In some of them the position attribute is a vec4, in others a vec3. I know that the matrix operations need a vec4, but is it worth sending an additional element? Isn't it better to send a vec3 and later cast it in the shader with vec4(position, 1.0)? Less data in memory, so will it be faster? Or should we pack an extra element to avoid the cast? Any tips on which is better? layout(location = 0) in vec4 position; MVP*position; or layout(location = 0) in vec3 position; MVP*vec4(position,1.0); For vertex attributes, this will not matter. The 4th component is automatically expanded
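
A small sketch of the point made in the answer, assuming a plain OpenGL setup: the buffer can store three floats per vertex even when the shader declares in vec4 position, because components the attribute array does not supply default to (0, 0, 0, 1). The buffer handle and helper name are illustrative only:

    #include <GL/glew.h>

    void setupPositionAttribute(GLuint vbo)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);   // vbo holds tightly packed vec3 positions
        glEnableVertexAttribArray(0);         // matches layout(location = 0) in vec4 position;
        // size = 3: only x, y and z come from the buffer; w defaults to 1.0
        glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), nullptr);
    }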

Using shaders with OSG

Submitted by 隐身守侯 on 2019-11-28 20:39:47
Contents: 1. Overview 2. Fixed-pipeline shading 3. Textured shading 4. References 1. Overview In the past, when learning the rendering pipeline in OpenGL, you would program step by step: preparing the data, transferring it to buffers, then the vertex shader and the fragment shader. OSG is a high-level wrapper around OpenGL, and when you use shaders you no longer see these steps, which takes some getting used to. Here I have put together the two simplest examples. 2. Fixed-pipeline shading The simplest OSG example displays the bundled data file glider.osg: #include <iostream> #include <Windows.h> #include <osgViewer/Viewer> #include <osgDB/ReadFile> using namespace std; int main() { osg::ref_ptr<osg::Group> root= new osg::Group(); string osgPath = "D:/Work/OSGBuild/OpenSceneGraph-Data/glider.osg"; osg::Node * node = osgDB::readNodeFile(osgPath); root->addChild(node); osgViewer::Viewer viewer; viewer.setSceneData(root); viewer.setUpViewInWindow(100,
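
A minimal sketch of where the article is heading, assuming standard OSG usage: a GLSL program is attached to a node through its StateSet, which is the OSG wrapper around the OpenGL state that is otherwise hidden. The shader sources and the attachShader helper are illustrative placeholders, not code from the article:

    #include <osg/Program>
    #include <osg/Shader>
    #include <osg/StateSet>
    #include <osg/Node>

    static const char* vertSource =
        "void main() { gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; }";
    static const char* fragSource =
        "void main() { gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0); }";

    void attachShader(osg::Node* node)
    {
        osg::ref_ptr<osg::Program> program = new osg::Program;
        program->addShader(new osg::Shader(osg::Shader::VERTEX, vertSource));
        program->addShader(new osg::Shader(osg::Shader::FRAGMENT, fragSource));
        // The program rides along with the node's StateSet; OSG binds it when
        // the node is drawn.
        node->getOrCreateStateSet()->setAttributeAndModes(program.get(),
                                                          osg::StateAttribute::ON);
    }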

How much performance overhead do conditionals and unused samplers/textures add to SM2/3 pixel shaders?

Submitted by 痴心易碎 on 2019-11-28 18:56:25
We have one pixel shader in HLSL which is used for slightly different things in a few places, and as such it has several conditional blocks, meaning that complex functionality is omitted in some cases. This also means we pass textures as sampler parameters which may not always be used. I have no idea how much of a performance hit these two things add, but especially since we support SM2.0 on integrated graphics chips, inefficiencies are an issue. So, does passing in a texture and not using it add any extra overhead? And does using an if simply add a couple of instructions, or can it

What can cause glDrawArrays to generate a GL_INVALID_OPERATION error?

Submitted by 纵然是瞬间 on 2019-11-28 18:08:32
I've been attempting to write a two-pass GPU implementation of the Marching Cubes algorithm, similar to the one detailed in the first chapter of GPU Gems 3, using OpenGL and GLSL. However, the call to glDrawArrays in my first pass consistently fails with GL_INVALID_OPERATION. I've looked up all the documentation I can find and found these conditions under which glDrawArrays can throw that error: GL_INVALID_OPERATION is generated if a non-zero buffer object name is bound to an enabled array or to the GL_DRAW_INDIRECT_BUFFER binding and the buffer object's data store is currently mapped. GL
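
A small diagnostic sketch, assuming a core-profile context and a linked program object; it checks two frequent causes of GL_INVALID_OPERATION on glDrawArrays (no vertex array object bound, and a program that fails validation against the current state). It is not necessarily the cause in this particular case:

    #include <GL/glew.h>
    #include <cstdio>

    void checkDrawState(GLuint program)
    {
        GLint vao = 0;
        glGetIntegerv(GL_VERTEX_ARRAY_BINDING, &vao);
        if (vao == 0)
            std::fprintf(stderr, "no VAO bound: core-profile draw calls need one\n");

        glValidateProgram(program);
        GLint ok = GL_FALSE;
        glGetProgramiv(program, GL_VALIDATE_STATUS, &ok);
        if (ok != GL_TRUE) {
            char log[1024];
            glGetProgramInfoLog(program, sizeof(log), nullptr, log);
            std::fprintf(stderr, "program validation failed: %s\n", log);
        }
    }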

Draw the depth value in OpenGL using shaders

Submitted by 会有一股神秘感。 on 2019-11-28 17:58:15
I want to draw the depth buffer in the fragment shader, and I do this: Vertex shader: varying vec4 position_; gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex; position_ = gl_ModelViewProjectionMatrix * gl_Vertex; Fragment shader: float depth = ((position_.z / position_.w) + 1.0) * 0.5; gl_FragColor = vec4(depth, depth, depth, 1.0); But all I get is white; what am I doing wrong? In what space do you want to draw the depth? If you want to draw the window-space depth, you can do this: gl_FragColor = vec4(gl_FragCoord.z); However, this will not be particularly useful, since most of the
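
A worked illustration of why the picture comes out white, assuming a standard perspective projection: window-space depth is non-linear, so most of the visible scene maps to values very close to 1.0. The remapping a shader would apply to get a readable image can be sketched on the CPU (nearPlane and farPlane are assumed projection parameters, not values from the question):

    #include <cstdio>

    float linearizeDepth(float windowDepth, float nearPlane, float farPlane)
    {
        float ndcZ = windowDepth * 2.0f - 1.0f;              // [0,1] -> [-1,1]
        float eyeZ = (2.0f * nearPlane * farPlane) /
                     (farPlane + nearPlane - ndcZ * (farPlane - nearPlane));
        return (eyeZ - nearPlane) / (farPlane - nearPlane);  // back to [0,1], now linear
    }

    int main()
    {
        // With near = 0.1 and far = 100, a depth-buffer value of 0.99 is still
        // only about 9% of the way into the view volume.
        std::printf("%f\n", linearizeDepth(0.99f, 0.1f, 100.0f));
        return 0;
    }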

Computer Organization Explained Simply: GPU (Part 2) - Why does deep learning need GPUs? (Lecture 31)

Submitted by 柔情痞子 on 2019-11-28 17:47:39
I. Introduction In the previous lecture, I walked you through how three-dimensional graphics are rendered inside a computer. That rendering process is divided into vertex processing, primitive processing, rasterization, fragment processing, and finally the pixel operations. This whole chain of steps is also called the graphics pipeline, or rendering pipeline. Because the number of pixels that has to be rendered in real time is enormous, graphics accelerator cards stepped onto the stage of history. With cards like 3dFx's Voodoo or NVidia's TNT, the CPU no longer had to handle the per-pixel primitive processing, rasterization, and fragment processing, and 3D games grew up from this era. You can see this in the picture of how the polygon models of the Tomb Raider games changed; that change, from 1996 to 2016, came from twenty years of progress in graphics cards. II. The birth of shaders and the programmable graphics processor 1. No matter how fast your graphics card is, if the CPU can't keep up, the 3D picture still suffers You may have noticed that the rendering pipelines of the Voodoo and TNT cards have no "vertex processing" step. At the time, the work of applying linear transformations to the polygon vertices and converting them into the screen's coordinate system was still done by the CPU. So the better the CPU, the more polygons it could support, and the more lifelike the corresponding polygon models naturally looked; the polygon performance of 3D games was therefore limited by the CPU's performance. No matter how fast your graphics card was, if the CPU couldn't keep up, the 3D picture still suffered. 2. The GeForce 256 released by NVidia in 1999 So the GeForce 256 card that NVidia released in 1999 moved the computing power for vertex processing

Efficiency of branching in shaders

Submitted by 时光怂恿深爱的人放手 on 2019-11-28 17:12:48
I understand that this question may seem somewhat ungrounded, but if someone knows anything theoretical or has practical experience on this topic, it would be great if you shared it. I am attempting to optimize one of my old shaders, which uses a lot of texture lookups. I've got diffuse, normal, and specular maps for each of three possible mapping planes, and for some faces which are near to the user I also have to apply mapping techniques which bring even more texture lookups (like parallax occlusion mapping). Profiling showed that texture lookups are the bottleneck of the shader and I am
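
One branch-avoidance idiom that often comes up around questions like this is mix() instead of if/else, so no divergent branch is issued (at the price of evaluating both inputs). A hypothetical GLSL fragment, wrapped in a C++ string literal only to keep the snippet self-contained and using made-up uniform names, might look like:

    // Both maps are sampled, then blended without a branch.
    static const char* branchlessBlend =
        "uniform sampler2D diffuseA;\n"
        "uniform sampler2D diffuseB;\n"
        "uniform float blendFactor;   // 0.0 or 1.0 selects one map, values between blend\n"
        "varying vec2 uv;\n"
        "void main() {\n"
        "    vec4 a = texture2D(diffuseA, uv);\n"
        "    vec4 b = texture2D(diffuseB, uv);\n"
        "    gl_FragColor = mix(a, b, blendFactor);\n"
        "}\n";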