projection

How to build perspective projection matrix (no API)

点点圈 submitted on 2019-12-18 10:44:25
Question: I'm developing a simple 3D engine (without using any API). I've successfully transformed my scene into world and view space, but I'm having trouble projecting the scene (from view space) using an OpenGL-style perspective projection matrix. I'm not sure about the fov, near and far values, and the scene I get is distorted. I hope someone can show me how to build and use the perspective projection matrix properly, with example code. Thanks in advance for any help. The matrix build: double f = 1 / Math.Tan
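
The truncated snippet above begins the same way a gluPerspective-style matrix does (f is the cotangent of half the vertical fov). A minimal sketch of that construction in Python/NumPy; the function name and the sample point are illustrative, not from the post:

```python
import math
import numpy as np

def perspective(fov_y_deg, aspect, near, far):
    """OpenGL-style perspective projection matrix (column-vector convention).

    fov_y_deg: vertical field of view in degrees (60-90 is typical);
    near/far: positive distances to the clip planes (e.g. 0.1 and 100).
    """
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)  # cotangent of fov/2
    m = np.zeros((4, 4))
    m[0, 0] = f / aspect
    m[1, 1] = f
    m[2, 2] = (far + near) / (near - far)
    m[2, 3] = (2.0 * far * near) / (near - far)
    m[3, 2] = -1.0  # copies -z_view into w_clip for the perspective divide
    return m

# Project a view-space point: multiply, then divide by w.
p_view = np.array([0.0, 0.0, -5.0, 1.0])        # 5 units in front of the camera
p_clip = perspective(90.0, 4 / 3, 0.1, 100.0) @ p_view
p_ndc = p_clip[:3] / p_clip[3]                  # normalized device coords in [-1, 1]
```

Distortion of the kind described usually comes from a wrong aspect ratio or from forgetting the divide by w.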

Include with projection does not work

筅森魡賤 submitted on 2019-12-18 08:55:34
Question: I have this query: var test = context.Assignments .Include(a => a.Customer) .Include(a => a.Subscriptions) .Select(a => new AssignmentWithSubscriptionCount { SubscriptionCount = a.Subscriptions.Count(), Assignment = a }) .ToList(); var name = test.First().Assignment.Customer.Name; It fails to eagerly load Customer. I've seen similar problems here on Stack Overflow, and it looks like you can't use projections together with Include, but I have not found a solution to my problem. Anyone? Edit: Here is a

How to apply a transformation matrix?

浪子不回头ぞ submitted on 2019-12-17 22:42:31
Question: I am trying to get the 2D screen coordinates of a point in 3D space, i.e. I know the location of the camera and its pan, tilt and roll, and I have the 3D x, y, z coordinates of a point I wish to project. I am having difficulty understanding transformation/projection matrices, and I was hoping some intelligent people here could help me along ;) Here is the test code I have thrown together so far: public class TransformTest { public static void main(String[] args) { // set up a world point (Point to
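
The world-to-camera step for a camera described by position plus pan/tilt/roll can be sketched as follows; the rotation order chosen here (pan about y, tilt about x, roll about z) is an assumption, since the post does not state the asker's convention:

```python
import math
import numpy as np

def rot_x(a):  # rotation about the x axis (angle in radians)
    c, s = math.cos(a), math.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):  # rotation about the y axis
    c, s = math.cos(a), math.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):  # rotation about the z axis
    c, s = math.cos(a), math.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def world_to_camera(point, cam_pos, pan, tilt, roll):
    """Transform a world-space point into camera space.

    The camera's orientation is R = rot_y(pan) @ rot_x(tilt) @ rot_z(roll);
    the inverse of a rotation is its transpose, so we apply R.T to the
    point after subtracting the camera position.
    """
    R = rot_y(pan) @ rot_x(tilt) @ rot_z(roll)
    return R.T @ (np.asarray(point, float) - np.asarray(cam_pos, float))

# A camera at the origin with no rotation leaves points unchanged:
p = world_to_camera([1.0, 2.0, 3.0], [0, 0, 0], 0, 0, 0)
```

Once the point is in camera space, a perspective projection (as in the other questions on this page) turns it into screen coordinates.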

Perspective Projection in Android in an augmented reality application

前提是你 submitted on 2019-12-17 16:25:14
Question: I'm currently writing an augmented reality app, and I have some problems getting the objects onto my screen. It's very frustrating that I'm not able to transform GPS points into the corresponding screen points on my Android device. I've read many articles and many other posts on Stack Overflow (I've already asked similar questions), but I still need your help. I did the perspective projection that is explained on Wikipedia. What do I have to do with the result of the perspective projection to
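
Assuming the Wikipedia-style projection produced homogeneous clip coordinates, what remains is a perspective divide followed by a viewport transform. A sketch (the function name and screen size are illustrative):

```python
def clip_to_screen(p_clip, width, height):
    """Map homogeneous clip coordinates (x, y, z, w) to pixel coordinates.

    Step 1: perspective divide gives normalized device coords in [-1, 1].
    Step 2: viewport transform maps NDC to pixels; y is flipped because
    screen y usually grows downward.
    """
    x, y, z, w = p_clip
    if w <= 0:
        return None  # point is behind the camera; cull it
    ndc_x, ndc_y = x / w, y / w
    sx = (ndc_x + 1.0) * 0.5 * width
    sy = (1.0 - ndc_y) * 0.5 * height
    return sx, sy

# The clip-space centre lands in the middle of a 640x480 screen:
centre = clip_to_screen((0.0, 0.0, 0.5, 1.0), 640, 480)
```

Forgetting the divide by w, or skipping the behind-the-camera check, are the two most common reasons AR overlays end up in the wrong place.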

Basic render 3D perspective projection onto 2D screen with camera (without opengl)

…衆ロ難τιáo~ submitted on 2019-12-17 15:22:41
Question: Let's say I have a data structure like the following: Camera { double x, y, z; /** ideally the camera angle is positioned to aim at the 0,0,0 point */ double angleX, angleY, angleZ; } SomePointIn3DSpace { double x, y, z } ScreenData { /** convert from some point in 3D space to 2D space, ending up with x, y */ int x_screenPositionOfPt, y_screenPositionOfPt; double zFar = 100; int width = 640, height = 480 } ... Without screen clipping or much of anything else, how would I calculate the screen x, y position
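
A matrix-free sketch of the minimal projection the question asks for, using similar triangles. The camera is fixed at the origin looking down +z, points are assumed already in camera space, and the field of view is an assumption (the 640x480 screen comes from the ScreenData above):

```python
import math

def project(point, width=640, height=480, fov_deg=60.0):
    """Pinhole projection of a camera-space point (x, y, z).

    By similar triangles a point projects to (f*x/z, f*y/z) on the image
    plane, where f is the focal length in pixels derived from the field
    of view; the result is shifted so (0, 0) is the top-left pixel.
    """
    x, y, z = point
    if z <= 0:
        return None  # behind the camera
    f = (height / 2.0) / math.tan(math.radians(fov_deg) / 2.0)
    sx = width / 2.0 + f * x / z
    sy = height / 2.0 - f * y / z  # flip y: screen y grows downward
    return sx, sy

centre = project((0.0, 0.0, 10.0))  # a point straight ahead
```

Handling the Camera angles would add the world-to-camera rotation step first, as in the transformation-matrix question above.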

How exactly does OpenGL do perspectively correct linear interpolation?

我们两清 submitted on 2019-12-17 06:34:36
Question: If linear interpolation happens during the rasterization stage in the OpenGL pipeline, and the vertices have already been transformed to screen space, where does the depth information used for perspectively correct interpolation come from? Can anybody give a detailed description of how OpenGL goes from screen-space primitives to fragments with correctly interpolated values? Answer 1: The output of a vertex shader is a four-component vector, vec4 gl_Position. From Section 13.6, Coordinate
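
The answer's point can be shown numerically: the rasterizer linearly interpolates a/w and 1/w (both of which ARE affine in screen space) and recovers the attribute by division. A sketch with illustrative values:

```python
def perspective_correct(a0, a1, w0, w1, t):
    """Perspective-correct interpolation of attribute a across a
    screen-space span at parameter t in [0, 1].

    Interpolate a/w and 1/w linearly in screen space, then recover
    a = (a/w) / (1/w). Here w is the clip-space w kept from gl_Position,
    which is where the depth information comes from.
    """
    num = (1 - t) * a0 / w0 + t * a1 / w1
    den = (1 - t) * 1.0 / w0 + t * 1.0 / w1
    return num / den

# Midpoint of a span whose endpoints sit at depths 1 and 3: the correct
# value is biased toward the nearer endpoint, unlike a naive lerp (= 5.0).
mid = perspective_correct(0.0, 10.0, 1.0, 3.0, 0.5)
```

When both endpoints share the same w, the formula collapses back to ordinary linear interpolation.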

Calculating a LookAt matrix

孤街醉人 submitted on 2019-12-17 03:51:38
Question: I'm in the midst of writing a 3D engine and I've come across the LookAt algorithm described in the DirectX documentation: zaxis = normal(At - Eye) xaxis = normal(cross(Up, zaxis)) yaxis = cross(zaxis, xaxis) xaxis.x yaxis.x zaxis.x 0 xaxis.y yaxis.y zaxis.y 0 xaxis.z yaxis.z zaxis.z 0 -dot(xaxis, eye) -dot(yaxis, eye) -dot(zaxis, eye) 1 Now I get how it works on the rotation side, but what I don't quite get is why it puts the translation component of the matrix to be those dot products.
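
A sketch of that DirectX-style (left-handed, row-vector) LookAt in Python/NumPy. The dot products in the last row fall out of inverting the camera pose: the view matrix is "translate by -eye, then rotate into the camera basis", and folding the translation through the rotation yields -dot(axis, eye) per axis:

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def look_at(eye, at, up):
    """LookAt view matrix in the layout the DirectX docs show
    (row-vector convention: transform points as p @ M)."""
    eye, at, up = (np.asarray(v, float) for v in (eye, at, up))
    zaxis = normalize(at - eye)
    xaxis = normalize(np.cross(up, zaxis))
    yaxis = np.cross(zaxis, xaxis)
    m = np.eye(4)
    m[:3, 0] = xaxis                 # first three rows hold the basis,
    m[:3, 1] = yaxis                 # one camera axis per column
    m[:3, 2] = zaxis
    m[3, 0] = -np.dot(xaxis, eye)    # last row: rotated translation,
    m[3, 1] = -np.dot(yaxis, eye)    # i.e. -eye expressed in the
    m[3, 2] = -np.dot(zaxis, eye)    # camera's own basis
    return m

# The eye itself maps to the view-space origin, which is exactly what
# the -dot(axis, eye) terms guarantee:
origin = np.array([2.0, 3.0, 4.0, 1.0]) @ look_at([2, 3, 4], [0, 0, 0], [0, 1, 0])
```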

correcting fisheye distortion programmatically

落爺英雄遲暮 submitted on 2019-12-17 02:54:50
Question: BOUNTY STATUS UPDATE: I discovered how to map a linear lens, from destination coordinates to source coordinates. How do you calculate the radial distance from the centre to go from fisheye to rectilinear? 1) I actually struggle to reverse it and map source coordinates to destination coordinates. What is the inverse, in code, in the style of the converting functions I posted? 2) I also see that my undistortion is imperfect on some lenses, presumably those that are not strictly linear.
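
For an ideal equidistant ("linear") fisheye, the forward and inverse radial mappings are short; a sketch under that model (whether the asker's lenses follow it exactly is precisely the open question in the post):

```python
import math

def fisheye_to_rectilinear_r(r_fish, f):
    """Equidistant fisheye model: r_fish = f * theta, while a rectilinear
    lens gives r_rect = f * tan(theta). Eliminating theta maps a fisheye
    radius (in pixels) to the rectilinear radius. f is the focal length
    in pixels."""
    return f * math.tan(r_fish / f)

def rectilinear_to_fisheye_r(r_rect, f):
    """The inverse mapping, used when iterating over destination pixels
    and sampling the corresponding source pixel."""
    return f * math.atan(r_rect / f)
```

For lenses that are not strictly equidistant, these closed forms get replaced by a fitted polynomial in theta, but the overall destination-to-source sampling structure stays the same.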

How to select a single field for all documents in a MongoDB collection?

荒凉一梦 submitted on 2019-12-16 19:57:14
Question: In my MongoDB I have a student collection with 10 records, each having the fields name and roll. One record of this collection is: { "_id" : ObjectId("53d9feff55d6b4dd1171dd9e"), "name" : "Swati", "roll" : "80" } I want to retrieve only the roll field for all 10 records in the collection, as I would in a traditional database with: SELECT roll FROM student I went through many blogs, but they all result in a query that must have a WHERE clause in it, for example: db.students.find({ "roll": { $gt:
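
No WHERE clause is needed: find takes an empty filter document plus a projection document, so the shell query is db.students.find({}, {"roll": 1, "_id": 0}) (the same two arguments work in pymongo's collection.find). A sketch simulating what that include-style projection does to each document; the helper function and the second sample record are illustrative, not real data:

```python
def apply_projection(doc, projection):
    """Keep only the fields whose value in the projection is 1
    (a simplified model of MongoDB's include-style projection;
    _id is excluded here simply by not listing it with 1)."""
    return {k: v for k, v in doc.items() if projection.get(k) == 1}

students = [
    {"_id": "53d9feff55d6b4dd1171dd9e", "name": "Swati", "roll": "80"},
    {"_id": "000000000000000000000000", "name": "Example", "roll": "81"},
]
rolls = [apply_projection(s, {"roll": 1, "_id": 0}) for s in students]
```

In real MongoDB, _id is returned by default and must be suppressed explicitly with "_id": 0, which is why the projection above lists it.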