raytracing

Ray-Polygon Intersection Point on the surface of a sphere

Submitted by 試著忘記壹切 on 2019-12-22 07:58:30
Question: I have a point (Lat/Lon) and a heading in degrees (true north) along which this point is traveling. I have numerous stationary polygons (points defined in Lat/Lon) which may or may not be convex. My question is: how do I calculate the closest intersection point, if any, with a polygon? I have seen several confusing posts about ray tracing, but they all seem to relate to 3D, where the ray and polygon are not on the same plane, and they also require the polygons to be convex. Answer 1: Sounds like you should …
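The core of one common approach (a sketch, not taken from the thread): treat the lat/lon coordinates as a flat x/y plane, which is a reasonable approximation over short distances, and intersect the 2D ray from the moving point against every edge of each polygon, keeping the nearest hit; convexity then does not matter. The class and method names below are made up for illustration.

    // Sketch: closest intersection of a 2D ray with a polygon's edges.
    public class RayPolygon2D {

        // Returns the parameter t >= 0 of the closest hit along the ray
        // (ox, oy) + t * (dx, dy), or Double.POSITIVE_INFINITY if the ray misses.
        public static double closestIntersection(double ox, double oy,
                                                 double dx, double dy,
                                                 double[][] polygon) {
            double best = Double.POSITIVE_INFINITY;
            int n = polygon.length;
            for (int i = 0; i < n; i++) {
                double[] a = polygon[i];
                double[] b = polygon[(i + 1) % n];        // edge a -> b
                double ex = b[0] - a[0], ey = b[1] - a[1];
                double denom = dx * ey - dy * ex;          // 2D cross product
                if (Math.abs(denom) < 1e-12) continue;     // ray parallel to edge
                // Solve origin + t*d = a + u*e for t (along the ray) and u (along the edge).
                double t = ((a[0] - ox) * ey - (a[1] - oy) * ex) / denom;
                double u = ((a[0] - ox) * dy - (a[1] - oy) * dx) / denom;
                if (t >= 0 && u >= 0 && u <= 1) best = Math.min(best, t);
            }
            return best;
        }

        public static void main(String[] args) {
            double[][] square = {{1, -1}, {3, -1}, {3, 1}, {1, 1}};
            // Ray from the origin heading along +x hits the edge x = 1 at t = 1.
            System.out.println(closestIntersection(0, 0, 1, 0, square));
        }
    }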

DirectX 11 compute shader for ray/mesh intersect

Submitted by 被刻印的时光 ゝ on 2019-12-21 21:34:15
Question: I recently converted a DirectX 9 application that was using D3DXIntersect to find ray/mesh intersections to DirectX 11. Since D3DXIntersect is not available in DX11, I wrote my own code to find the intersection, which simply loops over all the triangles in the mesh, tests each one, and keeps track of the closest hit to the origin. This is done on the CPU side and works fine for picking via the GUI, but I have another part of the application that creates a new mesh from an existing one based on …
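For reference, the per-triangle test such a brute-force loop evaluates (and that a compute-shader port would run once per thread) is usually Möller–Trumbore. The sketch below is plain Java rather than HLSL, with a made-up Vec3 helper; it illustrates the test itself, not the asker's code.

    // Sketch: Möller–Trumbore ray/triangle intersection, one triangle at a time.
    public class RayTriangle {

        static final class Vec3 {
            final double x, y, z;
            Vec3(double x, double y, double z) { this.x = x; this.y = y; this.z = z; }
            Vec3 sub(Vec3 o)   { return new Vec3(x - o.x, y - o.y, z - o.z); }
            Vec3 cross(Vec3 o) { return new Vec3(y * o.z - z * o.y,
                                                 z * o.x - x * o.z,
                                                 x * o.y - y * o.x); }
            double dot(Vec3 o) { return x * o.x + y * o.y + z * o.z; }
        }

        // Solves origin + t*dir = (1-u-v)*v0 + u*v1 + v*v2.
        // Returns the hit distance t, or a negative value on a miss.
        static double intersect(Vec3 origin, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2) {
            Vec3 e1 = v1.sub(v0);
            Vec3 e2 = v2.sub(v0);
            Vec3 p = dir.cross(e2);
            double det = e1.dot(p);
            if (Math.abs(det) < 1e-9) return -1;          // ray parallel to triangle plane
            double inv = 1.0 / det;
            Vec3 s = origin.sub(v0);
            double u = s.dot(p) * inv;
            if (u < 0 || u > 1) return -1;                // outside barycentric range
            Vec3 q = s.cross(e1);
            double v = dir.dot(q) * inv;
            if (v < 0 || u + v > 1) return -1;
            double t = e2.dot(q) * inv;
            return t > 0 ? t : -1;                        // only hits in front of the origin
        }

        public static void main(String[] args) {
            // Ray along +z toward a triangle in the z = 5 plane: expect t = 5.
            System.out.println(intersect(new Vec3(0.2, 0.2, 0), new Vec3(0, 0, 1),
                                         new Vec3(0, 0, 5), new Vec3(1, 0, 5), new Vec3(0, 1, 5)));
        }
    }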

How to properly clamp the Beckmann distribution

Submitted by 夙愿已清 on 2019-12-21 02:35:34
Question: I am trying to implement a microfacet BRDF shading model (similar to the Cook-Torrance model) and I am having some trouble with the Beckmann distribution defined in this paper: https://www.cs.cornell.edu/~srm/publications/EGSR07-btdf.pdf, where M is a microfacet normal, N is the macrosurface normal and αb is a "hardness" parameter in [0, 1]. My issue is that this distribution often returns obscenely large values, especially when αb is very small. For instance, the Beckmann distribution is …
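For reference, the Beckmann distribution in that paper is D(m) = χ⁺(m·n) · exp(−tan²θm / αb²) / (π αb² cos⁴θm), where θm is the angle between the microfacet normal m and the macrosurface normal n. A minimal sketch of evaluating it follows; note that D is a density, so values far above 1 are legitimate for small αb, and the only guard really needed is against back-facing or grazing directions where the division blows up. The class and method names are illustrative only.

    // Sketch: Beckmann normal distribution function, guarded at grazing angles.
    public class Beckmann {

        // cosThetaM = dot(m, n) with both vectors normalised; alphaB in (0, 1].
        static double beckmannD(double cosThetaM, double alphaB) {
            if (cosThetaM <= 0) return 0;                  // chi+ term: back-facing microfacet
            double c2 = cosThetaM * cosThetaM;
            double tan2 = (1 - c2) / c2;                   // tan^2(theta_m)
            double a2 = alphaB * alphaB;
            return Math.exp(-tan2 / a2) / (Math.PI * a2 * c2 * c2);
        }

        public static void main(String[] args) {
            // Near-mirror roughness: a large but finite peak at theta_m = 0.
            System.out.println(beckmannD(1.0, 0.05));   // 1 / (pi * 0.0025) ≈ 127.3
            System.out.println(beckmannD(0.9, 0.05));   // falls off extremely fast
        }
    }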

How to move a camera in a ray-tracer?

Submitted by 佐手、 on 2019-12-20 10:58:11
Question: I am currently working on ray-tracing techniques and I think I've done a pretty good job, but I haven't covered the camera yet. Until now, I used a plane fragment as the view plane, located between (-width/2, height/2, 200) and (width/2, -height/2, 200) [200 is just a fixed z value and can be changed]. In addition, I mostly place the camera at e(0, 0, 1000) and use a perspective projection. I send rays from point e through the pixels and write the result to the image's corresponding pixel after …
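One standard way to make the camera movable (a sketch, not the asker's code): instead of hard-coding the view plane at z = 200, build an orthonormal basis (right, up, forward) from an eye point, a look-at point and a world "up" vector, and generate primary rays through a virtual image plane expressed in that basis. All names below are illustrative.

    // Sketch: pinhole camera with an eye/look-at basis and per-pixel ray generation.
    public class Camera {
        double[] eye, right, up, forward;
        double halfW, halfH;      // half extents of the image plane at distance 1
        int width, height;

        Camera(double[] eye, double[] lookAt, double[] worldUp,
               double fovYDegrees, int width, int height) {
            this.eye = eye;
            this.width = width;
            this.height = height;
            forward = normalize(sub(lookAt, eye));            // viewing direction
            right   = normalize(cross(forward, worldUp));     // camera x axis
            up      = cross(right, forward);                  // camera y axis
            halfH   = Math.tan(Math.toRadians(fovYDegrees) * 0.5);
            halfW   = halfH * width / (double) height;
        }

        // Direction of the primary ray through pixel (px, py); the origin is always `eye`.
        double[] rayDirection(int px, int py) {
            double sx = ((px + 0.5) / width  * 2 - 1) * halfW;   // [-halfW, halfW]
            double sy = (1 - (py + 0.5) / height * 2) * halfH;   // flip y so row 0 is the top
            return normalize(new double[] {
                forward[0] + sx * right[0] + sy * up[0],
                forward[1] + sx * right[1] + sy * up[1],
                forward[2] + sx * right[2] + sy * up[2] });
        }

        static double[] sub(double[] a, double[] b) { return new double[]{a[0]-b[0], a[1]-b[1], a[2]-b[2]}; }
        static double[] cross(double[] a, double[] b) {
            return new double[]{a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0]};
        }
        static double[] normalize(double[] v) {
            double len = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
            return new double[]{v[0]/len, v[1]/len, v[2]/len};
        }
    }

Moving or rotating the camera then just means changing eye, lookAt or worldUp and rebuilding the basis; the intersection code does not change.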

raytracing with CUDA

Submitted by 我是研究僧i on 2019-12-18 10:24:40
Question: I'm currently implementing a raytracer. Since raytracing is extremely computation-heavy and since I am going to be looking into CUDA programming anyway, I was wondering if anyone has any experience with combining the two. I can't really tell whether the computational models match, and I would like to know what to expect. I get the impression that it's not exactly a match made in heaven, but a decent speed increase would be better than nothing. Answer 1: One thing to be very wary of in CUDA is that …

Detecting light projections and intersections in 2D space using C#

Submitted by 一曲冷凌霜 on 2019-12-14 02:35:19
Question: A light source is an entity in 2D space that sits at a single coordinate. There are multiple light sources in various locations, and each gives off 8 rays of light in the directions N, S, E, W, NW, NE, SW, SE. The coordinates of all lights are known. I need to calculate all intersections of these rays within the grid.

    long width = int.MaxValue;      // 2D grid width.
    long height = int.MaxValue * 3; // 2D grid height.
    List<Point> lights = a bunch of randomly placed light sources.
    List<Point> …
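The core computation (sketched here in Java rather than the asker's C#) is a 2D ray/ray intersection applied to every pair of rays from different lights; filtering hits against the grid bounds and removing duplicates are left out. All names are illustrative.

    // Sketch: all pairwise intersections of 8-direction rays emitted by point lights.
    import java.util.ArrayList;
    import java.util.List;

    public class LightRays {
        // The 8 compass directions: E, NE, N, NW, W, SW, S, SE.
        static final int[][] DIRS = {{1,0},{1,1},{0,1},{-1,1},{-1,0},{-1,-1},{0,-1},{1,-1}};

        // Intersection of rays a + t*da and b + s*db with t, s >= 0, or null if none.
        static double[] intersect(double[] a, double[] da, double[] b, double[] db) {
            double denom = da[0] * db[1] - da[1] * db[0];      // 2D cross of the directions
            if (denom == 0) return null;                       // parallel rays
            double wx = b[0] - a[0], wy = b[1] - a[1];
            double t = (wx * db[1] - wy * db[0]) / denom;
            double s = (wx * da[1] - wy * da[0]) / denom;
            if (t < 0 || s < 0) return null;                   // behind one of the lights
            return new double[]{a[0] + t * da[0], a[1] + t * da[1]};
        }

        static List<double[]> allIntersections(List<double[]> lights) {
            List<double[]> hits = new ArrayList<>();
            for (int i = 0; i < lights.size(); i++)
                for (int j = i + 1; j < lights.size(); j++)
                    for (int[] d1 : DIRS)
                        for (int[] d2 : DIRS) {
                            double[] p = intersect(lights.get(i), new double[]{d1[0], d1[1]},
                                                   lights.get(j), new double[]{d2[0], d2[1]});
                            if (p != null) hits.add(p);
                        }
            return hits;
        }

        public static void main(String[] args) {
            List<double[]> lights = new ArrayList<>();
            lights.add(new double[]{0, 0});
            lights.add(new double[]{4, 2});
            System.out.println(allIntersections(lights).size() + " intersections");
        }
    }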

Using gluUnProject to map touches to x,y coords on the z=0 plane in Android OpenGL ES 2.0

Submitted by Deadly on 2019-12-13 08:16:19
Question: I've drawn a grid at z=0 in OpenGL ES 2.0, and just want to convert touch inputs to x/y coordinates on that plane. It seems like this is best done through ray tracing, which involves running gluUnProject at window depths 0 and 1, creating a ray from the two results, then solving that ray for z=0? I found this code, but it is for OpenGL ES 1.0: i-schuetz / Android_OpenGL_Picking. Screenshot of the app running so you can see the camera distortion. My code is on GitHub, only 4 files. The unproject function I'm trying to write is in MyGLRenderer …
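A sketch of the usual ES 2.0 recipe (not the asker's MyGLRenderer code): since there is no fixed-function matrix stack, invert the combined projection*view matrix with android.opengl.Matrix, unproject the touch point at window depths 0 and 1, and solve the resulting ray for z = 0. Here mvpMatrix and viewport are assumed to be the same values used for drawing.

    // Sketch: touch -> z=0 plane via a manual unproject with android.opengl.Matrix.
    import android.opengl.Matrix;

    public class TouchPicker {

        // Returns {x, y} on the z = 0 plane, or null if the ray is parallel to it.
        public static float[] touchToGridPlane(float touchX, float touchY,
                                               float[] mvpMatrix, int[] viewport) {
            float[] near = unproject(touchX, touchY, 0f, mvpMatrix, viewport);
            float[] far  = unproject(touchX, touchY, 1f, mvpMatrix, viewport);
            float dz = far[2] - near[2];
            if (dz == 0f) return null;                 // ray never reaches z = 0
            float t = -near[2] / dz;                   // solve near.z + t * dz = 0
            return new float[]{ near[0] + t * (far[0] - near[0]),
                                near[1] + t * (far[1] - near[1]) };
        }

        // Inverse of the viewport + MVP transform for one window-space point.
        private static float[] unproject(float winX, float winY, float winZ,
                                         float[] mvpMatrix, int[] viewport) {
            float[] inverse = new float[16];
            Matrix.invertM(inverse, 0, mvpMatrix, 0);
            // Window coordinates -> normalized device coordinates; flip y because
            // touch y grows downwards while GL y grows upwards.
            float[] ndc = {
                (winX - viewport[0]) / viewport[2] * 2f - 1f,
                1f - (winY - viewport[1]) / viewport[3] * 2f,
                winZ * 2f - 1f,
                1f };
            float[] world = new float[4];
            Matrix.multiplyMV(world, 0, inverse, 0, ndc, 0);
            return new float[]{ world[0] / world[3], world[1] / world[3], world[2] / world[3] };
        }
    }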

Java vs. C++ - Raytracing

Submitted by ⅰ亾dé卋堺 on 2019-12-12 13:03:29
Question: I created a simple ray tracer in Java as a hobby project, and well, it's slow. Not dramatically slow, but slow nevertheless. I wonder if I can get any performance gain by using a lower-level language like C or C++, or will the difference be negligible and I should stick to improving "my" algorithm? Answer 1: I think the question has been answered: YES, a non-interpreted language will in 99.99% of cases run faster than the same algorithm under a VM. This said (having worked a lot in image processing …

Computer Graphics: Raytracing and Programming 3D Renders

Submitted by て烟熏妆下的殇ゞ on 2019-12-12 08:41:26
Question: I've noticed that a number of top universities offer courses for their CS majors on subjects relating to computer graphics. Sadly this is something not offered by my university, and something I would really like to get into sometime in the next couple of years. A couple of the projects I've found from some universities are great, although I'm mostly interested in two things: Raytracing: I want to write a raytracer within the next two years. What do I need to know? …

Lambertian Shader not working

Submitted by 柔情痞子 on 2019-12-12 03:31:53
Question: I'm trying to make a Lambertian shader for my ray tracer, but am having trouble. The scene still seems to be flat shaded, just a little darker, as in this picture. This is my shader class:

    public class LambertianShader {
        public Colour diffuseColour;
        public LambertianShader(Colour diffuseColour){
            this.diffuseColour = diffuseColour;
        }
        public Colour shade(Intersection intersection, Light light){
            Vector3D lightDirection = light.location.subtract(intersection.point);
            lightDirection.normalise( …
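The flat look described here is the classic symptom of a missing or unclamped N·L cosine term, or of taking the dot product before normalising the light direction. Below is a minimal self-contained Lambertian sketch using plain double arrays instead of the asker's Vector3D/Colour classes; it is an illustration of the formula, not the asker's code.

    // Sketch: Lambertian diffuse shading, colour = diffuse * lightColour * max(0, N·L).
    public class Lambert {

        static double[] normalize(double[] v) {
            double len = Math.sqrt(v[0]*v[0] + v[1]*v[1] + v[2]*v[2]);
            return new double[]{v[0]/len, v[1]/len, v[2]/len};
        }

        static double dot(double[] a, double[] b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

        // diffuse, lightColour: RGB in [0,1]; point, normal: hit data; lightPos: light location.
        static double[] shade(double[] diffuse, double[] point, double[] normal,
                              double[] lightPos, double[] lightColour) {
            double[] l = normalize(new double[]{ lightPos[0] - point[0],
                                                 lightPos[1] - point[1],
                                                 lightPos[2] - point[2] });
            double nDotL = Math.max(0.0, dot(normalize(normal), l));   // clamp back-facing to 0
            return new double[]{ diffuse[0] * lightColour[0] * nDotL,
                                 diffuse[1] * lightColour[1] * nDotL,
                                 diffuse[2] * lightColour[2] * nDotL };
        }

        public static void main(String[] args) {
            // Surface facing straight up, light directly overhead: full diffuse colour.
            double[] c = shade(new double[]{1, 0, 0}, new double[]{0, 0, 0},
                               new double[]{0, 1, 0}, new double[]{0, 10, 0}, new double[]{1, 1, 1});
            System.out.printf("%.2f %.2f %.2f%n", c[0], c[1], c[2]);   // 1.00 0.00 0.00
        }
    }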