coordinate-transformation

Gyro sensor drift and correct angle estimation

大憨熊 submitted on 2019-11-30 04:55:24
Question: I am using an LG Optimus 2x smartphone (gyroscope and accelerometer sensors) for positioning. I want to get correct rotation angles from the gyroscope that can be used later for the body-to-earth coordinate transformation. My question is: how can I measure and remove the drift in the gyro sensor? One way is to take the average of the gyro samples over some time (while the mobile is static) and subtract it from each current sample, which is not a good way. When the mobile is in rotation/motion how to
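The excerpt breaks off here; the usual first step is exactly the static-bias estimate the asker mentions, combined while moving with an accelerometer-based correction (e.g. a complementary filter). Below is a minimal NumPy sketch of the bias-removal part only; the array names, window size, and sample period are illustrative assumptions, not the asker's code.

```python
import numpy as np

def estimate_bias(static_samples):
    # static_samples: (N, 3) angular rates in rad/s recorded while the phone is still
    return static_samples.mean(axis=0)

def integrate_angles(samples, dt, bias):
    # subtract the constant bias, then integrate rate * dt to accumulate angles (rad)
    corrected = samples - bias
    return np.cumsum(corrected * dt, axis=0)

# usage (illustrative): bias from a 2 s static window at 100 Hz, then correct the stream
# bias = estimate_bias(gyro[:200])
# angles = integrate_angles(gyro, 0.01, bias)
```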

Camera pose estimation: How do I interpret rotation and translation matrices?

谁说胖子不能爱 submitted on 2019-11-30 04:30:11
Assume I have good correspondences between two images and attempt to recover the camera motion between them. I can use OpenCV 3's new facilities for this, like this:

Mat E = findEssentialMat(imgpts1, imgpts2, focal, principalPoint, RANSAC, 0.999, 1, mask);
int inliers = recoverPose(E, imgpts1, imgpts2, R, t, focal, principalPoint, mask);
Mat mtxR, mtxQ;
Mat Qx, Qy, Qz;
Vec3d angles = RQDecomp3x3(R, mtxR, mtxQ, Qx, Qy, Qz);
cout << "Translation: " << t.t() << endl;
cout << "Euler angles [x y z] in degrees: " << angles.t() << endl;

Now, I have trouble wrapping my head around what R and t
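Not part of the excerpt, but for orientation: the usual reading of recoverPose's output is that, with camera 1 fixed at [I | 0], (R, t) are camera 2's extrinsics, and t is recovered only up to scale. A small NumPy sketch of that interpretation (names are illustrative):

```python
import numpy as np

def cam1_to_cam2(X1, R, t):
    # a point expressed in camera-1 coordinates, re-expressed in camera-2 coordinates;
    # t carries direction only (the scale is unrecoverable from the essential matrix)
    return R @ X1 + t.ravel()

def camera2_pose_in_cam1(R, t):
    # invert the extrinsics to get camera 2's orientation and centre in camera 1's frame
    R_pose = R.T
    C2 = -R.T @ t.ravel()
    return R_pose, C2
```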

CSS3 zooming on mouse cursor

三世轮回 submitted on 2019-11-29 23:16:03
My goal is to create a plugin that enables zooming & panning operations on a page area, just like how Google Maps currently works (meaning: scrolling with the mouse = zooming in/out of the area; click & hold & move & release = panning). When scrolling, I wish to have a zoom operation centered on the mouse cursor. For this, I use on-the-fly CSS3 matrix transformations. The only (but mandatory) constraint is that I cannot use anything other than CSS3 translate & scale transformations, with a transform origin of 0px 0px. Panning is out of the scope of my question, since I have it working already.
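The arithmetic for a cursor-centred zoom with origin 0px 0px is small enough to state here. The sketch below is in Python purely to show the math (the excerpt contains no solution); the idea is that the page point currently under the cursor must map to the same screen position after the scale changes.

```python
def zoom_at(tx, ty, s, mouse_x, mouse_y, factor):
    # current transform is translate(tx, ty) scale(s) with transform-origin 0 0;
    # scale by `factor` about the cursor position (in the container's coordinates)
    new_s = s * factor
    new_tx = mouse_x - factor * (mouse_x - tx)   # keeps the point under the cursor fixed
    new_ty = mouse_y - factor * (mouse_y - ty)
    return new_tx, new_ty, new_s

# the result is fed back into the element's style, e.g.
# transform: translate({tx}px, {ty}px) scale({s})
```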

ECEF from Azimuth, Elevation, Range and Observer Lat,Lon,Alt

可紊 submitted on 2019-11-29 23:11:21
Question: I'm trying to write a basic Python script that will track a given satellite, defined with TLEs, from a given location. I'm not an astro/orbital person but am trying to become smarter on it. I am running into a problem when I try to convert the azimuth, elevation, and range values to an ECEF position. I'm using PyEphem to get the observation values and sgp4 to get the real location to verify. I'm also using the website http://www.n2yo.com/?s=25544 to verify the values. I'm getting the observed
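The excerpt stops before the conversion itself, but the standard route is: turn azimuth/elevation/range into a local East-North-Up vector, rotate it into ECEF using the observer's geodetic latitude and longitude, and add the observer's ECEF position. A minimal NumPy sketch (angles in radians, lengths in metres; names are illustrative):

```python
import numpy as np

def aer_to_ecef(az, el, rng, lat, lon, obs_ecef):
    # line of sight in the observer's East-North-Up frame
    e = rng * np.cos(el) * np.sin(az)
    n = rng * np.cos(el) * np.cos(az)
    u = rng * np.sin(el)
    # rotation whose columns are the E, N, U unit vectors expressed in ECEF
    sl, cl = np.sin(lat), np.cos(lat)
    so, co = np.sin(lon), np.cos(lon)
    enu_to_ecef = np.array([[-so, -sl * co, cl * co],
                            [ co, -sl * so, cl * so],
                            [0.0,       cl,      sl]])
    return obs_ecef + enu_to_ecef @ np.array([e, n, u])
```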

How to calculate SVG transform matrix from rotate/translate/scale values?

末鹿安然 submitted on 2019-11-29 20:22:12
I have the following details with me:

<g transform="translate(20, 50) scale(1, 1) rotate(-30 10 25)">

I need to change the above line to:

<g transform="matrix(?,?,?,?,?,?)">

Can anyone help me achieve this? Translate(tx, ty) can be written as the matrix:

1 0 tx
0 1 ty
0 0 1

Scale(sx, sy) can be written as the matrix:

sx 0 0
0 sy 0
0 0 1

Rotate(a) can be written as the matrix:

cos(a) -sin(a) 0
sin(a) cos(a) 0
0 0 1

Rotate(a, cx, cy) is the combination of a translation by (-cx, -cy), a rotation by a degrees, and a translation back by (cx, cy), which gives:

cos(a) -sin(a) -cx × cos(a) + cy × sin(a) + cx
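To go from those building blocks to the six matrix() values, compose the transform list left to right and read off the first two rows column by column (SVG's matrix(a, b, c, d, e, f) is the 3×3 matrix [[a, c, e], [b, d, f], [0, 0, 1]]). A short NumPy sketch of that composition for the transform in the question, not taken from the excerpt:

```python
import numpy as np

def translate(tx, ty):
    return np.array([[1, 0, tx], [0, 1, ty], [0, 0, 1]], dtype=float)

def scale(sx, sy):
    return np.array([[sx, 0, 0], [0, sy, 0], [0, 0, 1]], dtype=float)

def rotate(deg, cx=0.0, cy=0.0):
    a = np.radians(deg)
    r = np.array([[np.cos(a), -np.sin(a), 0],
                  [np.sin(a),  np.cos(a), 0],
                  [0,          0,         1]])
    return translate(cx, cy) @ r @ translate(-cx, -cy)   # rotation about (cx, cy)

# an SVG transform list composes left to right
m = translate(20, 50) @ scale(1, 1) @ rotate(-30, 10, 25)
a, c, e = m[0]
b, d, f = m[1]
print(f"matrix({a:g},{b:g},{c:g},{d:g},{e:g},{f:g})")
```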

How to project the top and bottom area of an OpenGL control

回眸只為那壹抹淺笑 submitted on 2019-11-29 15:55:18
Using the code below I can display an image in an OpenGL control, which is rectangular. Now I want to project the top and bottom area of this rectangle onto a cylindrical shape; that is, I need to perform a rectangular-to-cylindrical projection in OpenGL. How can I achieve this?

private void CreateShaders() {
    /***********Vert Shader********************/
    vertShader = GL.CreateShader(ShaderType.VertexShader);
    GL.ShaderSource(vertShader, @"attribute vec3 a_position;
        varying vec2 vTexCoord;
        void main() {
            vTexCoord = (a_position.xy + 1) / 2;
            gl_Position = vec4(a_position, 1);
        }");
    GL.CompileShader
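The excerpt is cut off, so it is not certain which warp is intended; one common reading of "rectangular to cylindrical projection" is the pinhole-image-to-cylinder remap used for panoramas, which in practice would be applied per fragment to vTexCoord. A NumPy sketch of that remap only, offered as an assumption about the intent:

```python
import numpy as np

def cylindrical_warp_coords(width, height, f):
    # for each pixel of the cylindrical output, the (x, y) location to sample
    # in the original rectangular image; f is the focal length in pixels
    cx, cy = width / 2.0, height / 2.0
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    theta = (xs - cx) / f                 # angle around the cylinder axis
    h = (ys - cy) / f                     # height along the cylinder axis
    x_src = f * np.tan(theta) + cx        # back-project onto the flat image plane
    y_src = f * h / np.cos(theta) + cy
    return x_src, y_src
```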

Python - Batch convert GPS positions to Lat Lon decimals

|▌冷眼眸甩不掉的悲伤 submitted on 2019-11-29 13:40:55
Hi, I have a legacy db with some positional data. The fields are just text fields with strings like this: 0°25'30"S, 91°7'W. Is there some way I can convert these to two floating-point numbers for decimal latitude and decimal longitude? EDIT: So an example would be: 0°25'30"S, 91°7'W -> 0.425, 91.116667, where the original single position field yields two floats. Any help much appreciated. fraxel: This approach can deal with seconds and minutes being absent, and I think it handles the compass directions correctly:

# -*- coding: latin-1 -*-
def conversion(old):
    direction = {'N':1, 'S':-1, 'E': 1,
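fraxel's answer is truncated above; below is a self-contained sketch of the same idea (not the original code). It parses degrees with optional minutes and seconds and maps S/W to negative values, so the example in the question comes out as -0.425, -91.116667 under the usual sign convention.

```python
import re

def dms_to_decimal(field):
    # parse strings like 0°25'30"S, 91°7'W into signed decimal degrees
    sign = {'N': 1, 'S': -1, 'E': 1, 'W': -1}
    result = []
    for part in field.split(','):
        m = re.match(r"""\s*(\d+)°(?:(\d+)')?(?:(\d+(?:\.\d+)?)")?\s*([NSEW])""",
                     part.strip())
        deg, minutes, seconds, hemi = m.groups()
        value = float(deg) + float(minutes or 0) / 60 + float(seconds or 0) / 3600
        result.append(sign[hemi] * value)
    return result

print(dms_to_decimal('''0°25'30"S, 91°7'W'''))   # [-0.425, -91.116666...]
```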

OpenGL: transforming objects with multiple rotations about different axes

拈花ヽ惹草 submitted on 2019-11-29 11:56:38
I am building a modeling program and I'd like to do transformations on objects in their own space, then assign each object to a group that rotates around another axis. However, I'd also like to be able to do transformations in the object's own space once it's combined. To manipulate the individual object, I pick the object's center:

glm::mat4 transform;
transform = glm::translate(transform, -obj.meshCenter);
glm::mat4 transform1;
transform1 = glm::translate(transform1, obj.meshCenter);
obj.rotation = transform1 * obj.thisRot * transform;

I then send this off
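The glm snippet is the standard rotate-about-a-pivot pattern T(center) · R · T(-center). The same pattern, and how an object transform then nests inside a group transform, in a short NumPy sketch (column-vector convention, matching glm; names are illustrative):

```python
import numpy as np

def translation(t):
    m = np.eye(4)
    m[:3, 3] = t
    return m

def rotate_about_point(rot, center):
    # move the pivot to the origin, apply the rotation, move the pivot back
    return translation(center) @ rot @ translation(-center)

# object rotated about its own mesh centre, then placed inside a group that
# rotates about its own pivot: the group matrix multiplies on the left
# world = rotate_about_point(group_rot, group_pivot) @ rotate_about_point(obj_rot, mesh_center)
```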

How to perform a coordinate affine transformation using Python?

浪子不回头ぞ submitted on 2019-11-29 04:06:38
Question: I would like to perform a transformation for this example data set. There are four known points with coordinates x, y, z in one coordinate system [primary_system] and four more known points with coordinates x, y, h that belong to another coordinate system [secondary_system]. Those points correspond; for example, the primary_system1 point and the secondary_system1 point are exactly the same point, but we have its coordinates in two different coordinate systems. So I have here four pairs of adjustment points
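The excerpt ends before any solution; one common way to answer it is a least-squares fit of a full 3-D affine transform (12 parameters, so four non-coplanar point pairs determine it exactly). A minimal NumPy sketch under that assumption; function and array names are illustrative:

```python
import numpy as np

def fit_affine_3d(src, dst):
    # src, dst: (N, 3) arrays of corresponding points, N >= 4 and non-coplanar;
    # returns A (3x3) and b (3,) with dst ≈ src @ A.T + b
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])        # homogeneous source coordinates
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return params[:3].T, params[3]

# primary, secondary: the four adjustment point pairs as (4, 3) arrays
# A, b = fit_affine_3d(primary, secondary)
# transformed = primary @ A.T + b
```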
