coordinate-transformation

Extrinsic matrix computation with OpenCV

Submitted by 吃可爱长大的小学妹 on 2019-12-22 11:44:10
Question: I am using OpenCV to calibrate my webcam. I fixed the webcam to a rig so that it stays static, moved a chessboard calibration pattern in front of the camera, and used the detected points to compute the calibration, as in many OpenCV examples (https://docs.opencv.org/3.1.0/dc/dbb/tutorial_py_calibration.html). This gives me the camera intrinsic matrix and a rotation and translation component for mapping each of these …
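
cv2.calibrateCamera returns one rotation vector (rvec) and translation vector (tvec) per chessboard view; stacking them as [R|t] gives the extrinsic matrix for that view. A minimal numpy-only sketch of that assembly, assuming rvec/tvec came from the calibration step (the names rodrigues and extrinsic_matrix are illustrative helpers, not OpenCV API — in OpenCV itself, cv2.Rodrigues performs the same conversion):

```python
import numpy as np

def rodrigues(rvec):
    """Rotation vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    rvec = np.asarray(rvec, dtype=float)
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta                      # unit rotation axis
    K = np.array([[0.0, -k[2],  k[1]],
                  [k[2],  0.0, -k[0]],
                  [-k[1], k[0],  0.0]])   # cross-product matrix of k
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def extrinsic_matrix(rvec, tvec):
    """Build the 4x4 world-to-camera extrinsic matrix from one view's rvec/tvec."""
    E = np.eye(4)
    E[:3, :3] = rodrigues(rvec)
    E[:3, 3] = np.asarray(tvec, dtype=float)
    return E
```

Applying E to a homogeneous world point yields that point in the camera frame for the corresponding chessboard pose.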

cv::undistortPoints() - iterative algorithm explanation

Submitted by 本秂侑毒 on 2019-12-21 23:22:58
Question: I'm trying to understand the logic behind OpenCV's cv::undistortPoints() iterative approximation algorithm. The implementation is available at: https://github.com/Itseez/opencv/blob/master/modules/imgproc/src/undistort.cpp (lines 361-368). The way I see it: using the last best-guessed pixel position (x, y), try to find a better guess by applying the inverse of the distortion at the current best guess, and adjust the pixel position with respect to the initial distorted position (x0, y0); use the initial …
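
The loop is a fixed-point iteration: the distortion model is easy to evaluate forward but has no closed-form inverse, so each step evaluates the distortion terms at the current guess and pulls the original distorted coordinates back through them. A sketch in numpy over normalized coordinates, assuming only the k1, k2, p1, p2 coefficients (OpenCV's full model adds k3..k6 and thin-prism terms, but the structure of the loop is the same):

```python
import numpy as np

def distort(x, y, k1, k2, p1, p2):
    # Forward Brown-Conrady model: ideal normalized coords -> distorted coords.
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

def undistort_iterative(x0, y0, k1, k2, p1, p2, iters=5):
    # Mirrors the fixed-point loop in cv::undistortPoints: evaluate the
    # distortion at the current best guess, subtract the tangential part
    # from (x0, y0), and divide out the radial part.
    x, y = x0, y0
    for _ in range(iters):
        r2 = x * x + y * y
        radial = 1.0 + k1 * r2 + k2 * r2 * r2
        dx = 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
        dy = p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
        x = (x0 - dx) / radial
        y = (y0 - dy) / radial
    return x, y
```

For realistic (small) distortion coefficients the map is a contraction, so a handful of iterations recovers the ideal coordinates to high precision.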

Determining a homogeneous affine transformation matrix from six points in 3D using Python

Submitted by ☆樱花仙子☆ on 2019-12-21 02:52:23
Question: I am given the locations of three points:

p1 = [1.0, 1.0, 1.0]
p2 = [1.0, 2.0, 1.0]
p3 = [1.0, 1.0, 2.0]

and their transformed counterparts:

p1_prime = [2.414213562373094, 5.732050807568877, 0.7320508075688767]
p2_prime = [2.7677669529663684, 6.665063509461097, 0.6650635094610956]
p3_prime = [2.7677669529663675, 5.665063509461096, 1.6650635094610962]

The affine transformation matrix is of the form

trans_mat = np.array([[…, …, …, …],
                      […, …, …, …],
                      […, …, …, …],
                      […, …, …, …]])

such that with …
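
A full 3D affine matrix has 12 unknowns, so three point pairs alone underdetermine it (hence the six points in the title: four or more non-coplanar correspondences pin it down). A least-squares sketch with numpy, where fit_affine_3d is an illustrative name rather than any library function:

```python
import numpy as np

def fit_affine_3d(src, dst):
    """Least-squares fit of a 4x4 homogeneous affine matrix mapping src -> dst.

    src, dst: (N, 3) arrays of corresponding points. N >= 4 non-coplanar
    points give a unique solution; fewer give a minimum-norm fit.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    n = len(src)
    A = np.hstack([src, np.ones((n, 1))])        # (N, 4) homogeneous inputs
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)  # solves A @ X = dst, X is (4, 3)
    M = np.eye(4)
    M[:3, :] = X.T                               # top three rows hold [R | t]
    return M
```

With the recovered M, np.dot(M, [x, y, z, 1]) reproduces each transformed point in homogeneous form.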

Python satellite tracking with sgp4, pyephem: positions not matching

Submitted by 感情迁移 on 2019-12-19 23:24:31
Question: I'm trying to write a basic Python script that will track a given satellite, defined by TLEs, from a given location. I'm not an astro/orbital person but am trying to become smarter on it. I am running into a problem where the different models I'm using give me very different position answers. I have tried using:

pyEphem
sgp4
predict (exec system call from script)

The satellites I'm testing with are the ISS and DirecTV-10 (one fixed, one moving, with internet tracking available for …
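
A common cause of such mismatches is comparing positions expressed in different reference frames: SGP4 outputs TEME coordinates, while other tools report J2000 or Earth-fixed (ECEF) positions, and the same satellite can appear thousands of kilometres "apart" purely from the frame. A first-order numpy sketch of the dominant difference, rotating an Earth-fixed vector into a pseudo-inertial frame by the Greenwich Mean Sidereal Time angle (this deliberately ignores precession, nutation, and polar motion, so it is a sanity check, not a proper TEME-to-J2000 transform):

```python
import numpy as np

def ecef_to_eci(r_ecef, gmst_rad):
    """Rotate an Earth-fixed (ECEF) position into a pseudo-inertial frame.

    gmst_rad is the Greenwich Mean Sidereal Time in radians; the rotation
    is about the common z axis. Precession/nutation/polar motion ignored.
    """
    c, s = np.cos(gmst_rad), np.sin(gmst_rad)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    return R @ np.asarray(r_ecef, dtype=float)
```

If two tools agree after a rotation like this, the discrepancy was the frame, not the propagator.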

How to calculate SVG transform matrix from rotate/translate/scale values?

Submitted by 亡梦爱人 on 2019-12-18 10:09:12
Question: I have the following details:

<g transform="translate(20, 50) scale(1, 1) rotate(-30 10 25)">

I need to change the above line to:

<g transform="matrix(?,?,?,?,?,?)">

Can anyone help me achieve this?

Answer 1: translate(tx, ty) can be written as the matrix

1 0 tx
0 1 ty
0 0 1

scale(sx, sy) can be written as the matrix

sx 0  0
0  sy 0
0  0  1

rotate(a) can be written as the matrix

cos(a) -sin(a) 0
sin(a)  cos(a) 0
0       0      1

rotate(a, cx, cy) is the combination of a translation by (-cx, -cy), a rotation of …
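
The six values of matrix(a b c d e f) are the first two rows of the composed 3x3 matrix, read column by column: [[a, c, e], [b, d, f], [0, 0, 1]]. A numpy sketch composing the exact transform list from the question (the helper names are illustrative):

```python
import numpy as np

def translate(tx, ty):
    return np.array([[1.0, 0.0, tx], [0.0, 1.0, ty], [0.0, 0.0, 1.0]])

def scale(sx, sy):
    return np.array([[sx, 0.0, 0.0], [0.0, sy, 0.0], [0.0, 0.0, 1.0]])

def rotate(deg, cx=0.0, cy=0.0):
    # rotate(a cx cy) == translate(cx, cy) . rotate(a) . translate(-cx, -cy)
    a = np.radians(deg)
    R = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0, 0.0, 1.0]])
    return translate(cx, cy) @ R @ translate(-cx, -cy)

# transform="translate(20, 50) scale(1, 1) rotate(-30 10 25)"
M = translate(20, 50) @ scale(1, 1) @ rotate(-30, 10, 25)

# SVG matrix(a b c d e f) pulls its six values out column-wise:
a, b, c, d, e, f = M[0, 0], M[1, 0], M[0, 1], M[1, 1], M[0, 2], M[1, 2]
```

Note the order: SVG applies the transform list left to right to the element, which is plain left-to-right matrix multiplication here.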

OpenGL: transforming objects with multiple rotations about different axes

Submitted by 放肆的年华 on 2019-12-18 07:07:14
Question: I am building a modeling program, and I'd like to do transformations on objects in their own space, then assign a single object to a group that rotates around another axis. However, I'd also like to be able to do transformations in the object's own space after it has been combined. Manipulating the individual object, I pick the object's center:

glm::mat4 transform;
transform = glm::translate(transform, -obj.meshCenter);

glm::mat4 transform1;
transform1 = glm:…

Android OpenGL ES 2.0 screen coordinates to world coordinates

Submitted by 狂风中的少年 on 2019-12-17 17:41:46
Question: I'm building an Android application that uses OpenGL ES 2.0, and I've run into a wall. I'm trying to convert screen coordinates (where the user touches) to world coordinates. I've tried reading and playing around with GLU.gluUnProject, but I'm either doing it wrong or just don't understand it. This is my attempt:

public void getWorldFromScreen(float x, float y) {
    int viewport[] = { 0, 0, width, height };
    float startY = ((float) (height) - y);
    float[] near = { 0.0f, 0.0f, 0.0f, 0.0f };
    float[…
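
gluUnProject inverts the GL pipeline: window coordinates to NDC, then through the inverse view-projection matrix, then a perspective divide. A numpy sketch of the same math (the function name is illustrative; on Android the flipped y, as in startY above, and the w-divide are the two steps most often missed):

```python
import numpy as np

def unproject(x, y, win_z, view_proj, viewport):
    """Window coords -> world coords (the math behind gluUnProject).

    x, y use the GL convention (origin bottom-left); flip a touch event
    first with y_gl = height - y_touch. win_z is 0 at the near plane and
    1 at the far plane; unprojecting both ends gives the touch ray.
    """
    vx, vy, vw, vh = viewport
    ndc = np.array([2.0 * (x - vx) / vw - 1.0,
                    2.0 * (y - vy) / vh - 1.0,
                    2.0 * win_z - 1.0,
                    1.0])
    world = np.linalg.inv(view_proj) @ ndc
    return world[:3] / world[3]        # perspective divide by w
```

To pick objects, unproject the pixel at win_z = 0 and win_z = 1 and intersect the resulting segment with the scene.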

How to set transform origin in SVG

Submitted by 醉酒当歌 on 2019-12-17 04:14:51
Question: I need to resize and rotate certain elements in an SVG document using JavaScript. The problem is that, by default, a transform is always applied around the origin at (0, 0), the top left. How can I redefine this transform anchor point? I tried using the transform-origin attribute, but it does not affect anything. This is how I did it:

svg.getElementById('someId').setAttribute('transform-origin', '75 240');

It does not seem to set the pivot point to the point I specified, although I can see in …
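
transform-origin support in SVG 1.1 renderers is unreliable; the portable fix is to bake the origin into the transform itself, i.e. translate(ox oy) rotate(a) translate(-ox -oy), or the equivalent shorthand rotate(a ox oy). A numpy sketch verifying that this composition really pivots about the chosen point (illustrative matrix math, not a DOM API):

```python
import numpy as np

def rotate_about(deg, ox, oy):
    """Matrix for rotating by deg degrees about the pivot (ox, oy).

    Equals the SVG transform "translate(ox oy) rotate(deg) translate(-ox -oy)",
    which SVG 1.1 also abbreviates as rotate(deg ox oy).
    """
    a = np.radians(deg)
    T  = np.array([[1.0, 0.0,  ox], [0.0, 1.0,  oy], [0.0, 0.0, 1.0]])
    Ti = np.array([[1.0, 0.0, -ox], [0.0, 1.0, -oy], [0.0, 0.0, 1.0]])
    R  = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
    return T @ R @ Ti
```

In the document itself, setting transform="rotate(a 75 240)" on the element achieves the pivot at (75, 240) without relying on transform-origin.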

Python astropy: convert velocities from ECEF to J2000 coordinate system

Submitted by 夙愿已清 on 2019-12-13 15:24:38
Question: I've written some code to transform coordinates from the Earth-fixed system (ITRS) to the inertial J2000 frame (GCRS) using astropy:

from astropy import coordinates as coord
from astropy import units as u
from astropy.time import Time

now = Time('2018-03-14 23:48:00')

# satellite position, to be converted between the Earth-fixed ITRS
# frame and the GCRS/J2000 ECI frame:
xyz = [-6340.40130292, 3070.61774516, 684.52263588]
cartrep = coord.CartesianRepresentation(*xyz, unit=u.km)

itrs = coord.ITRS(cartrep, obstime=now)
gcrs = itrs.transform_to(coord.GCRS…
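
The subtlety with velocities is that they do not transform by rotation alone: a vector fixed in the rotating Earth frame still moves in the inertial frame, so the transformation adds an omega cross r term (astropy handles this internally when the representation carries differentials). A first-order numpy sketch of that relation, ignoring precession and nutation (the function name is illustrative):

```python
import numpy as np

OMEGA_EARTH = 7.2921150e-5  # Earth rotation rate, rad/s

def ecef_to_inertial_velocity(r_ecef, v_ecef, gmst_rad):
    """First-order ECEF -> inertial transform for position and velocity.

    Rotates both vectors by the GMST angle about z, then adds the frame
    rotation term omega x r to the velocity. Precession/nutation ignored.
    """
    c, s = np.cos(gmst_rad), np.sin(gmst_rad)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    omega = np.array([0.0, 0.0, OMEGA_EARTH])
    r_eci = R @ np.asarray(r_ecef, dtype=float)
    v_eci = R @ np.asarray(v_ecef, dtype=float) + np.cross(omega, r_eci)
    return r_eci, v_eci
```

A geostationary satellite is a good sanity check: zero velocity in ECEF must come out as roughly 3.07 km/s of eastward motion in the inertial frame.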