framebuffer

Using the Linux framebuffer for graphics while disabling console text

Submitted by 谁说我不能喝 on 2019-12-03 21:35:10
I have some C code that draws simple graphics on the Linux framebuffer console. I'm also using the Raspberry Pi and its composite video output. The OS is Raspbian, and I'm doing a low-level solution without using X. My graphics are working well, and I'm also able to read the USB keyboard and respond to key presses. Currently there is a tty terminal that my graphics are written over. The tty is still active and key presses are echoed to the screen. What I want to achieve is to disable all console messages and ttys on the framebuffer so only my graphics are shown. Does anyone have a good way of…

QGLWidget and fast offscreen rendering

Submitted by 陌路散爱 on 2019-12-03 14:51:55
Is it possible to render totally offscreen in a QGLWidget with Qt, without the need to repaint the scene to the screen, thereby avoiding the buffer flip on the monitor entirely? I need to save every frame generated in the framebuffer, but since the sequence is made of 4000 frames and the interval between screen updates is 15 ms, I spend 4000 × 15 ms = 60 s; I need to be much faster than 60 s (computation is not the bottleneck here; the screen update is). Can rendering offscreen to a framebuffer be faster? Can I avoid the monitor refresh rate in my QGLWidget? How do I render completely on the framebuffer…

Efficiently read the average color of the screen content rendered by XBMC

Submitted by 喜欢而已 on 2019-12-03 12:30:18
I want to get the average color of the screen content when running XBMC, to change the color of a TV ambient light. XBMC is running on a small HTPC with OpenGL ES 2.0 hardware (a Raspberry Pi) running a Debian-derived distribution. I guess I have to read from the screen framebuffer into which XBMC draws using OpenGL. (At least, I think and hope that XBMC renders everything using OpenGL.) Is it possible to read the OpenGL framebuffer representing the whole screen output? What am I going to need to access it? Do I also need my own render context to access the framebuffer of the screen? (I don't…

Drawing into OpenGL ES framebuffer and getting UIImage from it on iPhone

Submitted by 大城市里の小女人 on 2019-12-03 03:56:07
I'm trying to do offscreen rendering of some primitives in OpenGL ES on iOS. The code is as follows: // context and necessary buffers @interface RendererGL { EAGLContext* myContext; GLuint framebuffer; GLuint colorRenderbuffer; GLuint depthRenderbuffer; } .m file: - (id) init { self = [super init]; if (self) { // initializing context myContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2]; [EAGLContext setCurrentContext:myContext]; [self setupOpenGL]; // creating buffers } return self; } -(void) setupOpenGL { int width = 256; int height = 256; // generating buffers and binding…

What is the difference between framebuffer and image in Vulkan?

Submitted by 旧巷老猫 on 2019-12-03 02:25:35
I've learned that a framebuffer is the final destination of the rendering pipeline and that a swapchain contains many images. So what is the relation between those two things? Which one is the actual render target? And does the framebuffer later attach the final picture of the current frame to the image view? If so, how is it transferred? A description via a drawing or diagram would be appreciated. VkFramebuffer + VkRenderPass defines the render target. The render pass defines which attachment will be written with colors. VkFramebuffer defines which VkImageView is to be which attachment. VkImageView defines which…

THREE.js blur the frame buffer

Submitted by 妖精的绣舞 on 2019-12-02 21:30:21
I need to blur the frame buffer and I don't know how to get the frame buffer using THREE.js. I want to blur the whole frame buffer rather than blur each texture in the scene. So I guess I should read the frame buffer and then blur it, rather than doing this in shaders. Here's what I have tried, called at init: var renderTarget = new THREE.WebGLRenderTarget(512, 512, { wrapS: THREE.RepeatWrapping, wrapT: THREE.RepeatWrapping, minFilter: THREE.NearestFilter, magFilter: THREE.NearestFilter, format: THREE.RGBAFormat, type: THREE.FloatType, stencilBuffer: false, depthBuffer: true }); renderTarget…

OpenGL: Using only one framebuffer and switching target textures

Submitted by 試著忘記壹切 on 2019-12-02 03:06:22
Instead of using multiple framebuffer objects, can I also create only one and achieve the same results by switching its target texture when needed? Is this a bad idea in all cases? If yes, why? I've been implementing a function render.SetTargetTexture() in my program's API, and logically it wouldn't work if there were more framebuffers used behind the scenes; I'd have to fully expose framebuffers then. An FBO itself is just a logical construct, maintained by the implementation, and it will consume only the little memory that its parameters require. The main purpose of an FBO is to have an…

How to deal with the layouts of presentable images?

Submitted by ε祈祈猫儿з on 2019-12-01 22:18:10
Question: A presentable image starts out in VK_IMAGE_LAYOUT_UNDEFINED but will be in VK_IMAGE_LAYOUT_PRESENT_SRC_KHR after it has been presented once. A lot of examples transition all VkImages to VK_IMAGE_LAYOUT_PRESENT_SRC_KHR immediately after creating the vkSwapchain, which allows them to use VK_IMAGE_LAYOUT_PRESENT_SRC_KHR for oldLayout. But doing the transition right after creation of the swapchain is not allowed: use of a presentable image must occur only after the image is returned…