render-to-texture

Rendering to texture - ClearRenderTargetView() works, but no objects are rendered to the texture (rendering to the screen works fine)

Submitted by 拜拜、爱过 on 2019-12-11 09:22:35
Question: I am trying to render the scene to a texture that should then be displayed in a corner of the screen. I thought I could do it this way: 1. Render the scene (my Engine::render() method, which sets shaders and makes draw calls) - works fine. 2. Change the render target to the texture. 3. Render the scene again - does not work. The call context->ClearRenderTargetView(texture->getRenderTargetView(), { 1.0f, 0.0f, 0.0f, 1.0f }) does set my texture to red (for the scene in step 1 I use a different color), but none…

Render to texture using only OpenGL Context ID (Java)

Submitted by 拟墨画扇 on 2019-12-11 05:45:31
Question: We need to render the entire game window to a texture. We have only Java SDK jars from our client, and we can access only the OpenGL window context ID of the window they create when the game runs. My question is: is the window context enough to somehow render it to a texture? We cannot alter our client's code, but we need to render editor windows on top of their Java SDK. They are using LWJGL for rendering. The plan is to render the game into a separate window, similar to this: I guess this can only be achieved via the mentioned…

transparency issues with repeated stamping of textures on an MTKView

Submitted by 北慕城南 on 2019-12-10 11:43:49
Question: I am trying to implement a Metal-backed drawing application where brushstrokes are drawn on an MTKView by stamping a textured square repeatedly along a path. The problem I'm having is that, while each brush stamp properly shows the texture's opacity, overlapping squares do not build up value but rather override each other. In the image below, each stamp is a textured circle with an alpha component. I have a feeling that because all the stamps are being rendered at once, there is no way for the…
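The "building up value" the poster expects is what the source-over blend operator produces when each stamp is composited against the result of the previous one. As a sketch of that arithmetic only (plain Python, independent of Metal; none of this is the poster's code), repeated source-over stamping accumulates alpha, whereas writing each stamp against the same original destination, as happens when blending is off or all stamps see a stale attachment, does not:

```python
def over(src, dst):
    """Source-over blend with straight (non-premultiplied) alpha:
    the math behind a sourceAlpha / oneMinusSourceAlpha blend state."""
    sa, da = src[3], dst[3]
    out_a = sa + da * (1.0 - sa)
    if out_a == 0.0:
        return (0.0, 0.0, 0.0, 0.0)
    rgb = tuple((s * sa + d * da * (1.0 - sa)) / out_a
                for s, d in zip(src[:3], dst[:3]))
    return rgb + (out_a,)

stamp = (1.0, 0.0, 0.0, 0.3)   # a 30%-opaque red brush stamp
px = (0.0, 0.0, 0.0, 0.0)      # transparent canvas pixel
for _ in range(3):
    px = over(stamp, px)       # each pass reads the previous result
print(px[3])                   # coverage builds toward 1.0 (~0.657 here)
```

If instead each stamp were blended against the untouched canvas, the pixel would stay at 30% opacity no matter how many stamps overlap, which matches the "override each other" symptom.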

Writing to then reading from an offscreen FBO on iPhone; works on simulator but not on device?

Submitted by 时间秒杀一切 on 2019-12-10 02:06:34
Question: I'm trying to do some image manipulation on the iPhone, basing things on Apple's GLImageProcessing example. Ultimately, what I'd like to do is load an image into a texture, perform one or more of the operations in the example code (hue, saturation, brightness, etc.), then read the resulting image back out for later processing/saving. For the most part this would never need to touch the screen, so I thought FBOs might be the way to go. To start with, I've cobbled together a little example that creates an offscreen FBO, draws to it, then reads the data back out as an image. I was…

UIImage created from MTKView results in color/opacity differences

Submitted by 两盒软妹~` on 2019-12-09 06:47:41
Question: When I capture the contents of an MTKView into a UIImage, the resulting image looks qualitatively different, as shown below. The code I use to generate the UIImage is as follows: let kciOptions = [kCIContextWorkingColorSpace: CGColorSpace(name: CGColorSpace.sRGB)!, kCIContextOutputPremultiplied: true, kCIContextUseSoftwareRenderer: false] as [String : Any] let lastDrawableDisplayed = self.currentDrawable! // needed to hold the last drawable presented to screen drawingUIView.image = UIImage…
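Qualitative color shifts in a capture path like this commonly come from the sRGB transfer function being applied twice or not at all somewhere between the drawable's pixel format and the CIContext's working color space. As a sketch of why a mismatch is so visible (plain Python, not tied to the poster's code), here are the sRGB encode/decode curves from IEC 61966-2-1; a linear mid-grey shifts dramatically if it is interpreted with the wrong one:

```python
def srgb_encode(c):
    """Linear -> sRGB transfer function (IEC 61966-2-1), one channel in [0, 1]."""
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

def srgb_decode(c):
    """sRGB -> linear, the exact inverse of srgb_encode."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

linear = 0.5
encoded = srgb_encode(linear)
print(round(encoded, 3))               # ~0.735: a large, visible shift
print(round(srgb_decode(encoded), 3))  # ~0.5: round-trips when matched
```

A similar doubling effect applies to premultiplied alpha (note the kCIContextOutputPremultiplied option above): premultiplying data that is already premultiplied darkens every partially transparent pixel.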

How to render anti-aliased image to a texture (and then write to PNG)?

Submitted by 强颜欢笑 on 2019-12-08 12:51:02
Question: I'd like to use the "render to texture" paradigm to write a .png screenshot of my OpenGL 3D rendering. I have it working without multisampling, but I'm struggling to get an anti-aliased image. First of all, is this possible? Second, what is the right combination of API calls? (As a meta third question: how can I better debug this? glCheckFramebufferStatus is clearly not enough.) Here's what I'm working with: // https://stackoverflow.com/questions/7402504/multisampled-render-to-texture-in-ios…
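It is possible: the usual combination is to render into a multisampled FBO (glRenderbufferStorageMultisample) and then resolve it into a single-sample FBO with glBlitFramebuffer before calling glReadPixels; a multisampled buffer cannot be read back directly. As a sketch of what the resolve step computes (plain Python, assumptions: a simple box filter, which is what a default resolve amounts to), each output pixel is the average of its coverage samples:

```python
def resolve(samples):
    """Box-average one pixel's MSAA coverage samples into a single color,
    modeling what a framebuffer resolve blit does per pixel."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(len(samples[0])))

# 4x MSAA pixel on a polygon edge: white geometry covers half the samples
edge_pixel = [(1.0, 1.0, 1.0)] * 2 + [(0.0, 0.0, 0.0)] * 2
print(resolve(edge_pixel))  # (0.5, 0.5, 0.5): the anti-aliased grey in the PNG
```

For debugging beyond glCheckFramebufferStatus, checking glGetError after each FBO call, and verifying that the sample counts of the two framebuffers in the blit are compatible, tends to surface the actual failure.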

OpenGL render portion of screen to texture

Submitted by 白昼怎懂夜的黑 on 2019-12-08 08:13:03
Question: I am trying to render a small region of the screen to an off-screen texture. This is part of a screenshot function in my app where the user selects a region on the screen and saves it to an image. While the region on the screen might be 250x250 px, the saved image can be a lot larger, e.g. 1000x1000 px. I understand the process of rendering to a texture using an FBO. I'm mostly stuck when it comes to defining the projection matrix that clips the scene so that only the screenshot region is…
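The sub-region projection is just a linear rescaling of the full projection's clip bounds: map the pixel rectangle into the fraction of the viewport it covers, then take that same fraction of the left/right/bottom/top planes. A minimal sketch of that mapping for an orthographic projection (plain Python; the function name and signature are illustrative, not from the question):

```python
def sub_ortho(l, r, b, t, vp_w, vp_h, rx, ry, rw, rh):
    """Shrink orthographic bounds (l, r, b, t) covering a vp_w x vp_h
    viewport down to the pixel rectangle (rx, ry, rw, rh), origin at
    the bottom-left. Rendering with these bounds into an FBO of any
    size (e.g. 1000x1000) yields just that region, at FBO resolution."""
    nl = l + (r - l) * rx / vp_w
    nr = l + (r - l) * (rx + rw) / vp_w
    nb = b + (t - b) * ry / vp_h
    nt = b + (t - b) * (ry + rh) / vp_h
    return nl, nr, nb, nt

# A 250x250 px region at (100, 50) of an 800x600 viewport mapped over [-1, 1]^2
print(sub_ortho(-1, 1, -1, 1, 800, 600, 100, 50, 250, 250))
```

For a perspective projection the same interpolation applies to the near-plane bounds passed to glFrustum, which gives an off-center (asymmetric) frustum covering only the selected region.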

Copy ffmpeg d3dva texture resource to shared rendering texture

Submitted by 隐身守侯 on 2019-12-06 08:55:54
Question: I'm using ffmpeg to decode video via D3DVA, based on this example: https://github.com/FFmpeg/FFmpeg/blob/master/doc/examples/hw_decode.c. I'm able to successfully decode video. What I need to do next is render the decoded NV12 frame. I have created a DirectX rendering texture based on this example, https://github.com/balapradeepswork/D3D11NV12Rendering, and set it as shared. D3D11_TEXTURE2D_DESC texDesc; texDesc.Format = DXGI_FORMAT_NV12; // Pixel format texDesc.Width = width; // Width of the video…
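Copying NV12 data correctly depends on its two-plane layout: a full-resolution Y plane followed by a single interleaved UV plane at half resolution in both axes (4:2:0 subsampling). As a sketch of that layout only (plain Python; assumes tightly packed rows, whereas a real mapped D3D11 texture has a driver-chosen row pitch that must be honored per row):

```python
def nv12_layout(width, height):
    """Byte offsets and total size of a tightly packed NV12 surface:
    Y plane (width x height bytes), then interleaved UV pairs
    (width bytes per row, height/2 rows)."""
    y_size = width * height
    uv_size = width * (height // 2)
    return {"y_offset": 0, "uv_offset": y_size, "total": y_size + uv_size}

print(nv12_layout(1920, 1080))
# {'y_offset': 0, 'uv_offset': 2073600, 'total': 3110400}  (1.5 bytes/pixel)
```

This is why an NV12 frame is 1.5x the size of its luma plane, and why the shader in the linked D3D11NV12Rendering example samples the texture through two views, one R8 view for Y and one R8G8 view for UV.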

Render to texture problem with alpha

Submitted by 亡梦爱人 on 2019-12-05 17:21:46
When I render to texture and then draw the same image, everything seems to get darker. To get this image: http://img24.imageshack.us/img24/8061/87993367.png I render the upper-left square with color (1, 1, 1, .8) to a texture, then render that texture plus the middle square (same color) to another texture, then finally that texture plus the lower-right square (same color) to the screen. As you can see, each time I render to texture, everything gets a little darker. My render-to-texture code looks like this (I'm using OpenGL ES on the iPhone): // gen framebuffer GLuint framebuffer;…
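The progressive darkening described above is exactly what happens when glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) is applied twice: once while rendering into the texture, and again while drawing the texture, so the source alpha multiplies the color at every hop. A numeric sketch of one channel (plain Python, modeling only the blend arithmetic, not the poster's GL code):

```python
def blend(src_rgb, src_a, dst_rgb, dst_a):
    """One channel of glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA),
    with the same factors applied to the alpha channel."""
    rgb = src_rgb * src_a + dst_rgb * (1.0 - src_a)
    a = src_a * 1.0 + dst_a * (1.0 - src_a)
    return rgb, a

# Pass 1: a white 80%-alpha square onto a transparent-black texture
rgb1, a1 = blend(1.0, 0.8, 0.0, 0.0)   # rgb1 = 0.8: already darkened once
# Pass 2: draw that texture to the screen with the same blend func
rgb2, _ = blend(rgb1, a1, 0.0, 1.0)    # rgb2 = 0.64: darkened again
print(rgb1, rgb2)
```

The usual fix is to treat the intermediate texture as premultiplied: render into it with premultiplied sources and composite it with glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA), so alpha is applied to each color exactly once.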
