opengl-es

Hardware-accelerated OpenVG implementation on Desktop based on OpenGL ES [closed]

旧巷老猫 submitted on 2019-12-20 10:26:10
Question: [Closed. This question is off-topic and is not currently accepting answers. Closed 3 years ago.] I'm currently trying to get OpenVG up and running on my desktop. The problem comes here: I am / will be developing an application for a Windows CE device (with the .NET Compact Framework), which has hardware-accelerated OpenGL ES 2.0 and OpenVG 1.0.1 (based on a TI OMAP35x, if you're interested). The application will…

Textures in OpenGL ES 2.0 for Android

筅森魡賤 submitted on 2019-12-20 10:25:36
Question: I'm new to OpenGL and I'm teaching myself by making a 2D game for Android with ES 2.0. I am starting off by creating a "Sprite" class that creates a plane and renders a texture to it. To practice, I have two Sprite objects that are drawn alternating in the same place. I got this much working fine with ES 1.0, but now that I've switched to 2.0, I am getting a black screen with no errors. I'm exhausted trying to figure out what I'm doing wrong, but I have a strong feeling it has to do…

How to record video from ARKit?

孤街浪徒 submitted on 2019-12-20 10:19:33
Question: Now I'm testing an ARKit/SceneKit implementation. The basic rendering to the screen is kinda working, so now I want to try recording what I see on the screen into a video. Just for recording SceneKit I found this Gist: // ViewController.swift // SceneKitToVideo // Created by Lacy Rhoades on 11/29/16. // Copyright © 2016 Lacy Rhoades. All rights reserved. import SceneKit import GPUImage import Photos class ViewController: UIViewController { // Renders a scene (and shows it on the…

Render YpCbCr iPhone 4 Camera Frame to an OpenGL ES 2.0 Texture in iOS 4.3

我怕爱的太早我们不能终老 submitted on 2019-12-20 10:11:14
Question: I'm trying to render a native planar image to an OpenGL ES 2.0 texture in iOS 4.3 on an iPhone 4. The texture, however, winds up all black. My camera is configured as such: [videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange] forKey:(id)kCVPixelBufferPixelFormatTypeKey]]; and I'm passing the pixel data to my texture like this: glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_RGB_422…
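A biplanar YpCbCr buffer cannot be uploaded as a single RGBA texture; the usual approach is to upload the luma and chroma planes as two separate textures and convert to RGB in a fragment shader. As an illustrative sketch (not the poster's code), the per-pixel math for BT.601 video-range data, which the kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange format implies, looks like this — the exact coefficients are an assumption and depend on the colour matrix the camera actually reports:

```python
def ycbcr_to_rgb(y, cb, cr):
    """Convert one video-range (BT.601) YCbCr sample to 8-bit RGB.

    This is the arithmetic a fragment shader would perform after
    sampling the luma and chroma planes as separate textures.
    """
    c = y - 16          # video range: luma spans 16..235
    d = cb - 128        # chroma is centred on 128
    e = cr - 128
    clamp = lambda v: max(0, min(255, int(round(v))))
    r = clamp(1.164 * c + 1.596 * e)
    g = clamp(1.164 * c - 0.392 * d - 0.813 * e)
    b = clamp(1.164 * c + 2.017 * d)
    return r, g, b
```

Sanity checks: peak video-range luma with neutral chroma maps to white, and minimum luma maps to black.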

Is it possible to use OpenGL ES code with a WPF application via a D3DImage and ANGLE?

大憨熊 submitted on 2019-12-20 10:05:28
Question: Summary (TL;DR version): Ultimately our goal is to be able to use OpenGL ES code in a WPF application natively (i.e. not SharpGL, etc.) and without airspace or driver issues, possibly using Google's ANGLE project. Background: One of the things I like about OpenGL over DirectX is its cross-platform capability. It has excellent support on both OS X and Linux, and also on Android and iOS via ES. However, on Windows, using it is marred with driver issues, or worse, a lot of cards simply don't…

Convert 3D world (arcore anchor/pose) to its corresponding 2D screen coordinates

浪尽此生 submitted on 2019-12-20 09:45:58
Question: I'm struggling to get this transformation. Given an anchor Pose in ARCore, how can I obtain its corresponding 2D coordinates on the screen? Answer 1: Finally, after some days of investigation and gathering information from different resources, I was able to get this working. The following is a code snippet (based on the ARCore sample Java app) to convert from world coordinates (a Pose in ARCore) to 2D screen coordinates. First we need to calculate the matrix to transform from world --> screen: public…
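The transform the answer describes can be sketched as plain matrix math (illustrative Python, not the ARCore Java snippet): multiply the world-space point by the camera's view matrix, then the projection matrix, perform the perspective divide, and map normalized device coordinates from [-1, 1] to pixels. Function names and conventions here are assumptions for illustration:

```python
def mat_vec(m, v):
    """Multiply a 4x4 matrix (row-major list of rows) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def world_to_screen(world_point, view, proj, width, height):
    """Project a 3D world-space point to 2D pixel coordinates."""
    x, y, z = world_point
    clip = mat_vec(proj, mat_vec(view, [x, y, z, 1.0]))
    ndc_x = clip[0] / clip[3]                 # perspective divide
    ndc_y = clip[1] / clip[3]
    screen_x = (ndc_x + 1.0) * 0.5 * width
    screen_y = (1.0 - ndc_y) * 0.5 * height   # NDC y is up, screen y is down
    return screen_x, screen_y
```

In ARCore the view and projection matrices come from the frame's Camera (getViewMatrix / getProjectionMatrix); the anchor's world position comes from its Pose translation.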

Screen tearing and camera capture with Metal

╄→гoц情女王★ submitted on 2019-12-20 09:40:17
Question: To avoid writing to a constant buffer from both the GPU and CPU at the same time, Apple recommends using a triple-buffered system with the help of a semaphore to prevent the CPU getting too far ahead of the GPU (this is fine and covered in at least three Metal videos at this stage). However, when the constant resource is an MTLTexture and the AVCaptureVideoDataOutput delegate runs separately from the rendering loop (CADisplayLink), how can a similar triple-buffered system (as used in…
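The synchronization core of the triple-buffering pattern the question refers to can be sketched language-neutrally (Python here, not Metal): a counting semaphore sized to the number of in-flight frames gates the CPU, and the GPU's completion handler releases it. The names below are illustrative assumptions:

```python
import threading

MAX_IN_FLIGHT = 3
frame_sem = threading.Semaphore(MAX_IN_FLIGHT)
buffers = [{"index": i, "data": None} for i in range(MAX_IN_FLIGHT)]
current = 0

def encode_frame(render_on_gpu):
    """CPU side: wait for a free slot, write the next buffer, submit."""
    global current
    frame_sem.acquire()            # blocks if the GPU is MAX_IN_FLIGHT frames behind
    buf = buffers[current]
    current = (current + 1) % MAX_IN_FLIGHT
    buf["data"] = "constants for this frame"
    # In Metal the release would be a commandBuffer completed handler
    # that signals the dispatch semaphore.
    render_on_gpu(buf, on_complete=frame_sem.release)
```

When a capture delegate and a display-link render loop run independently, the same idea applies per shared resource: whichever side writes must acquire a slot, and the consumer's completion signals it back.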

Best way to separate game logic from rendering for a fast-paced game for Android with OpenGL?

依然范特西╮ submitted on 2019-12-20 08:49:39
Question: I've been studying and making little games for a while, and I have decided lately that I would try to develop games for Android. For me, jumping from native C++ code to Android Java wasn't that hard, but it gives me headaches to think about how I could keep the logic separate from the rendering. I've been reading around here and on other sites that: it is better not to create another thread for it, just because Android will surely have no problems processing it. Meaning the code would…
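A common answer to keeping logic separate from rendering, whether or not a second thread is used, is the fixed-timestep accumulator loop: logic advances in constant steps, and the renderer draws once per frame, optionally interpolating with the leftover fraction. A minimal sketch of that pattern (illustrative, not Android-specific):

```python
DT = 1.0 / 60.0   # fixed logic step, in seconds

def run_frame(state, accumulator, frame_time, update, render):
    """Advance game logic in fixed DT steps, then render once.

    `update` returns the new logic state; `render` may interpolate
    using the leftover fraction `alpha` (0..1).
    Returns the new state and accumulator.
    """
    accumulator += frame_time
    while accumulator >= DT:
        state = update(state, DT)
        accumulator -= DT
    alpha = accumulator / DT
    render(state, alpha)
    return state, accumulator
```

The point of the split: `update` never touches drawing, so it can later be moved to its own thread (or kept on the GLSurfaceView render thread) without changing the game rules.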

iPhone Game Developers - What does your toolchain look like?

拟墨画扇 submitted on 2019-12-20 08:07:23
Question: For example:
source control: git + Adobe Drive
3d: Google SketchUp -> *.dae -> Blender -> *.obj
2d: Photoshop/Illustrator -> *.png
audio: Audacity -> *.caf
code: ArgoUML, Xcode, TextMate
test: OCUnit
build: rake, Xcode
Feel free to mention any other tools that you think are awesome :)
Answer 1: git, GitHub, Xcode, Interface Builder, Photoshop, Illustrator, TextMate, unit testing via Behaviour (it's on GitHub), custom 3D software, custom audio software. Neat idea for the post, I like it!
Answer 2: I use…

Rendering SVG with OpenGL (and OpenGL ES)

旧时模样 submitted on 2019-12-20 07:59:34
Question: I am currently investigating the possibility of rendering vector graphics from an SVG file using OpenGL and OpenGL ES. I intend to target Windows and Android. My ideal solution would be a minimal C library that generates a polygon triangulation from a given SVG file. This would then generate standard OpenGL or OpenGL ES calls, and use a display list or VBO for optimization when redrawing. I would simply draw a display list to render the vector image after translating and rotating…
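For the triangulation step the question asks about, the simplest case is worth noting: a convex contour can be triangulated as a fan from its first vertex, which maps directly onto a GL_TRIANGLES vertex buffer. Concave or self-intersecting SVG contours need a real tessellator (ear clipping, or the GLU tessellator). A hedged sketch of the convex case, in Python rather than the C the poster wants:

```python
def fan_triangulate(polygon):
    """Triangulate a CONVEX polygon (list of (x, y) vertices) as a fan.

    Returns a list of triangles suitable for flattening into a
    GL_TRIANGLES VBO. Concave contours require ear clipping instead.
    """
    tris = []
    for i in range(1, len(polygon) - 1):
        tris.append((polygon[0], polygon[i], polygon[i + 1]))
    return tris
```

An n-gon yields n - 2 triangles, so a quad becomes two triangles sharing the diagonal from vertex 0.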