transformation

Finding centre of rotation for a set of points [closed]

拥有回忆 submitted 2019-12-03 02:40:32
If I have an arbitrary set of points, and then the same set of points rotated by some angle, does anyone know of any algorithms to calculate/estimate where the centre of the rotation is? Or an area of study where these kinds of algorithms are needed? I am having trouble finding any relevant information. Thanks.

Let's say you have one point (x, y) that moved to (x', y'). Then the centre of rotation must lie on the perpendicular bisector of the segment (x,y)-(x',y'), i.e. the line that is perpendicular to that segment and passes through its midpoint. Now take another point, (x2, y2), that moved to (x'2, y'2). This also gives rise to a line
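The two-bisector construction described above can be sketched as follows (a minimal illustration in plain Python; the function and variable names are my own, not from the original answer):

```python
import math

def rotation_center(p, p2, q, q2):
    """Estimate the centre of rotation from two correspondences
    (p -> p2) and (q -> q2) by intersecting the perpendicular
    bisectors of the two displacement segments."""
    def bisector(a, b):
        # Perpendicular bisector of segment a-b as n . x = n . m,
        # where n is the chord direction and m the midpoint.
        n = (b[0] - a[0], b[1] - a[1])
        m = ((a[0] + b[0]) / 2, (a[1] + b[1]) / 2)
        return n, n[0] * m[0] + n[1] * m[1]

    (n1, c1), (n2, c2) = bisector(p, p2), bisector(q, q2)
    det = n1[0] * n2[1] - n1[1] * n2[0]     # 2x2 Cramer's rule
    if abs(det) < 1e-12:
        raise ValueError("bisectors are parallel; pick another point pair")
    return ((c1 * n2[1] - c2 * n1[1]) / det,
            (n1[0] * c2 - n2[0] * c1) / det)
```

With noisy data one would intersect bisectors from many point pairs and average (or least-squares fit) the results; the degenerate case is when the two chosen points and the centre are collinear.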

Exact definition of the matrices in OpenCv StereoRectify

那年仲夏 submitted 2019-12-03 01:38:12
Normally a projection matrix P is defined as the 3x4 matrix which projects points from world coordinates to image/pixel coordinates. The projection matrix can be split up into K, a 3x4 camera matrix with the intrinsic parameters, and T, a 4x4 transformation matrix with the extrinsic parameters; the projection matrix is then P = K * T. What are the clear definitions of the following inputs to OpenCV's stereoRectify: cameraMatrix1 – the first camera matrix (I assume it is the intrinsic K part of the projection matrix, correct?); R – the rotation matrix between the coordinate systems of the first
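To make the factorisation in the question concrete, here is a small sketch in plain Python (not OpenCV; the numbers are made up for illustration) of how P splits into intrinsics and extrinsics. Note that cameraMatrix1/cameraMatrix2 in stereoRectify are the 3x3 intrinsic matrices, while its R and T arguments describe the transform between the two cameras' coordinate systems rather than a world-to-camera extrinsic:

```python
def matmul(A, B):
    """Naive matrix product of nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

K = [[800.0, 0.0, 320.0],      # fx, skew, cx
     [0.0, 800.0, 240.0],      # fy, cy
     [0.0,   0.0,   1.0]]
Rt = [[1.0, 0.0, 0.0, 0.0],    # [R | t]: identity extrinsics here
      [0.0, 1.0, 0.0, 0.0],
      [0.0, 0.0, 1.0, 0.0]]
P = matmul(K, Rt)              # 3x4 projection matrix, P = K [R | t]

Xw = [[0.5], [-0.25], [2.0], [1.0]]          # homogeneous world point
x = matmul(P, Xw)
u, v = x[0][0] / x[2][0], x[1][0] / x[2][0]  # divide by w -> pixels
```

Writing the extrinsics as the 3x4 block [R | t] (equivalently, a 3x4 K padded with a zero column times a 4x4 T) keeps the dimensions consistent with the P = K * T form used in the question.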

How do I get mogenerator to recognize the proper type for Transformable attributes?

主宰稳场 submitted 2019-12-03 01:15:42
Question: I have a Core Data model with a single transformable attribute. I also have this attribute use a custom NSValueTransformer, set up properly in the model. When I use mogenerator to generate/update my machine and human files, the machine files for the entity containing this attribute always type the attribute as NSObject. For Core Data to use my custom value transformer, this type needs to be the type the transformer understands. Right now, I manually do this in the human file by

How to move a camera in a ray-tracer?

六眼飞鱼酱① submitted 2019-12-03 00:42:22
I am currently working on ray-tracing techniques and I think I've done a pretty good job so far, but I haven't covered the camera yet. Until now, I used a plane fragment for the view plane, located between (-width/2, height/2, 200) and (width/2, -height/2, 200) [200 is just a fixed z value and can be changed]. In addition, I mostly place the camera at e(0, 0, 1000), and I use a perspective projection. I send rays from point e to the pixels, and after calculating the pixel colour I write it to the image's corresponding pixel. Here is an image I created. Hopefully you can guess where the eye and view plane are
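The fixed eye/view-plane setup described above can be sketched like this (plain Python, my own function names; the assumption is the plane at z = 200 and the eye at z = 1000 quoted in the question). A movable camera generalises this by replacing the hard-coded axes with the camera's own right/up/forward basis vectors:

```python
def primary_ray(px, py, width, height,
                eye=(0.0, 0.0, 1000.0), plane_z=200.0):
    """Map pixel (px, py) to a point on the view plane spanning
    (-width/2, height/2) .. (width/2, -height/2) at z = plane_z,
    and return (origin, normalized direction) of the primary ray."""
    x = -width / 2 + (px + 0.5)    # pixel centre, left to right
    y = height / 2 - (py + 0.5)    # pixel centre, top to bottom
    d = (x - eye[0], y - eye[1], plane_z - eye[2])
    n = (d[0] ** 2 + d[1] ** 2 + d[2] ** 2) ** 0.5
    return eye, (d[0] / n, d[1] / n, d[2] / n)
```

Since the eye sits at z = 1000 and the plane at z = 200, every primary ray here points in the -z direction; an arbitrary camera orientation would instead compute the plane point as eye + forward*dist + x*right + y*up.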

How can a picture of a page be straightened out to look as if it was scanned?

此生再无相见时 submitted 2019-12-02 21:55:53
Question: I have seen apps and wondered how I can programmatically take a picture of a page, determine how it needs to be transformed so that it looks parallel to the camera rather than skewed in perspective, and then combine multiple photos to create a PDF file. For example, this app does it: https://play.google.com/store/apps/details?id=com.appxy.tinyscan&hl=en Answer 1: I do not use books for such trivial things, so sorry, I cannot recommend any (especially in English). What you need to do is this: input image find
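The core "un-skew" step, once the four page corners are detected, is a perspective (homography) warp. A real app would typically use OpenCV's getPerspectiveTransform and warpPerspective; purely as a sketch of the underlying math, here it is in plain Python (my own helper names), solving the standard 8-equation linear system for the 3x3 homography that maps the detected corners to an upright rectangle:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small system."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c]
                              for c in range(r + 1, n))) / M[r][r]
    return x

def homography(src, dst):
    """src, dst: four (x, y) corner pairs; returns 3x3 H with h33 = 1."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def warp_point(H, p):
    """Apply homography H to point p (projective division by w)."""
    w = H[2][0] * p[0] + H[2][1] * p[1] + H[2][2]
    return ((H[0][0] * p[0] + H[0][1] * p[1] + H[0][2]) / w,
            (H[1][0] * p[0] + H[1][1] * p[1] + H[1][2]) / w)
```

Warping the whole image then means sampling the source image at warp_point(H_inverse, (u, v)) for every destination pixel, which is exactly what warpPerspective does for you.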

How to create a cylindrical bone stored as a vector made of 2 points (Head, Tail)?

雨燕双飞 submitted 2019-12-02 21:36:41
Question: This is what I want. Somebody wrote it somewhere on the net, but I have never used quaternions before, so I have no idea how to implement it. I am sure it would be just a matter of one simple equation, but how do I implement it in C/C++ code? Here: "You could use a bone stored as a vector made of 2 points (Head, Tail). Since you are rotating it, Head will be the fulcrum and Tail will rotate around an arbitrary axis. That's a quaternion's job." I have the absolute positions of all vertices of a cylindrical mesh; now if I
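The quoted idea can be sketched directly (plain Python rather than C/C++, and the helper names are my own): treat Head as the fulcrum, build a unit quaternion from the axis and angle, and rotate the Head-to-Tail vector with the sandwich product q * p * q⁻¹:

```python
import math

def quat_mul(a, b):
    """Hamilton product of quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate_about(head, tail, axis, angle):
    """Rotate tail around an axis through head by angle (radians)."""
    n = math.sqrt(sum(a * a for a in axis))
    ux, uy, uz = (a / n for a in axis)           # unit axis
    s, c = math.sin(angle / 2), math.cos(angle / 2)
    q = (c, ux * s, uy * s, uz * s)              # unit rotation quaternion
    qi = (q[0], -q[1], -q[2], -q[3])             # conjugate = inverse
    p = (0.0, tail[0] - head[0], tail[1] - head[1], tail[2] - head[2])
    _, x, y, z = quat_mul(quat_mul(q, p), qi)    # q * p * q^-1
    return (head[0] + x, head[1] + y, head[2] + z)
```

For the mesh, you would apply the same rotate_about (or, more efficiently, the equivalent rotation matrix built once from q) to every vertex, using Head as the fulcrum.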

Simplify CouchDB JSON response

╄→гoц情女王★ submitted 2019-12-02 19:26:41
I'm storing location data in CouchDB, and am looking for a way to get an array of just the values, instead of key: value for every record. For example, the current response:

{"total_rows": 250, "offset": 0, "rows": [
{"id": "ec5de6de2cf7bcac9a2a2a76de5738e4", "key": "user1", "value": {"city": "San Francisco", "address": "1001 Bayhill Dr"}},
{"id": "ec5de6de2cf7bcac9a2a2a76de573ae4", "key": "user1", "value": {"city": "Palo Alto", "address": "583 Waverley St"}}
... (etc.) ]}

I only really need: [{"city": "San Francisco", "address": "1001 Bayhill Dr"}, {"city": "Palo Alto", "address": "583 Waverley St"},
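The usual server-side answer is to have the view emit only the fields you need as the value (or to reshape the response with a CouchDB list function, which is written in JavaScript). Purely to illustrate the reshaping itself, here is a client-side sketch in Python over a response shaped like the one quoted above:

```python
import json

# A two-row stand-in for the quoted CouchDB view response.
response = json.loads("""{"total_rows": 2, "offset": 0, "rows": [
  {"id": "ec5de6de2cf7bcac9a2a2a76de5738e4", "key": "user1",
   "value": {"city": "San Francisco", "address": "1001 Bayhill Dr"}},
  {"id": "ec5de6de2cf7bcac9a2a2a76de573ae4", "key": "user1",
   "value": {"city": "Palo Alto", "address": "583 Waverley St"}}]}""")

# Keep only each row's value object, dropping id/key wrappers.
values = [row["value"] for row in response["rows"]]
```

The same one-line projection works on the real response body regardless of how many rows it contains.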

CGAffineTransform scale and translation - jump before animation

为君一笑 submitted 2019-12-02 18:09:10
I am struggling with an issue regarding CGAffineTransform scale and translation: when I set a transform in an animation block on a view that already has a transform, the view jumps a bit before animating. Example:

// somewhere in viewDidLoad or during initialization
var view = UIView()
view.frame = CGRectMake(0, 0, 100, 100)
var scale = CGAffineTransformMakeScale(0.8, 0.8)
var translation = CGAffineTransformMakeTranslation(100, 100)
var concat = CGAffineTransformConcat(translation, scale)
view.transform = concat

// called sometime later
func buttonPressed() {
    var secondScale =

how to use the Box-Cox power transformation in R

别说谁变了你拦得住时间么 submitted 2019-12-02 16:42:34
I need to transform some data into a 'normal shape' and I read that Box-Cox can identify the exponent to use to transform the data. From what I understood, car::boxCoxVariable(y) is used for response variables in linear models, and MASS::boxcox(object) for a formula or fitted model object. So, because my data are a variable of a data frame, the only function I found I could use is: car::powerTransform(dataframe$variable, family="bcPower") Is that correct? Or am I missing something? The second question is about what to do after I obtain the estimated transformation parameters dataframe$variable
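Once you have the estimated lambda (from car::powerTransform or MASS::boxcox), applying it is just the Box-Cox formula itself: (x^lambda - 1)/lambda, falling back to log(x) when lambda is 0. A sketch in plain Python for consistency with the other examples here (in R it is the same one-liner on the vector):

```python
import math

def box_cox(x, lam):
    """Box-Cox power transform of a positive value x with parameter lam."""
    if lam == 0:
        return math.log(x)                 # the lambda -> 0 limit
    return (x ** lam - 1.0) / lam
```

Note the transform is only defined for positive data; for lambda = 1 it reduces to x - 1, i.e. essentially no transformation.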

Understanding `scale` in R

微笑、不失礼 submitted 2019-12-02 16:07:26
I'm trying to understand the definition of scale that R provides. I have data (mydata) that I want to make a heat map with, and there is a VERY strong positive skew. I've created a heatmap with a dendrogram for both scale(mydata) and log(mydata), and the dendrograms are different for both. Why? What does it mean to scale my data, versus log-transform my data? And which would be more appropriate if I want to look at the dendrogram illustrating the relationship between the columns of my data? Thank you for any help! I've read the definitions but they are going way over my head. log simply
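The difference is easy to state in code. By default R's scale() centres each column (subtracts its mean) and divides by its standard deviation, producing z-scores; it does nothing about skew. log(), by contrast, compresses large values, which is what tames a strong positive skew. A sketch of what scale() computes per column, in plain Python for consistency with the other examples here:

```python
import math

def scale_column(xs):
    """What R's scale() does to one column by default:
    centre on the mean, divide by the sample standard deviation
    (R uses the n-1 denominator)."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / (n - 1))
    return [(x - mean) / sd for x in xs]
```

So the two operations answer different questions: scale() puts columns on a common unit so no column dominates the distance calculation behind the dendrogram, while log() changes the shape of each column's distribution. That is why the two dendrograms differ.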