How to generate texture mapping images?


Question


I want to put/wrap images onto 3D objects. To keep things simple and fast, instead of using (and learning) a 3D library I want to use mapping images. Mapping images are used in the following way:

[image: mapping usage]

So you generate the mapping images once for each object and use the same mapping for all images you want to wrap.

My question is: how can I generate such mapping images (given the 3D model)? Since I don't know the terminology, my searches failed me. Sorry if I am using the wrong jargon.

Below you can see a description of the workflow.
[image: workflow diagram]
I have the 3D model of the object and the input image; I want to generate mapping images that I can use to produce the textured image.

I don't even know where to start; any pointers are appreciated.

More info

My initial idea was to somehow warp an identity mapping (see below) using an external program. I generated horizontal and vertical gradient images in Photoshop just to see if the mapping works with Photoshop-generated images. The result doesn't look good. I wasn't hopeful, but it was worth a shot.

input
[image: input image]

mappings (x and y); they just resize the image, nothing fancy.
[image: x mapping] [image: y mapping]

result
[image: result with artifacts]
As you can see, there are lots of artifacts. Custom mapping images I generated by warping the gradients look even worse.

Here is some more information on mappings: http://www.imagemagick.org/Usage/mapping/#distortion_maps

I am using OpenCV's remap() function for the mapping.
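For reference, here is a minimal sketch of how I call remap(). The file names and output size are only illustrative, and the maps built here are just the resize-only identity maps described above, not the real per-object mappings I am after:

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    // the image to wrap; file name is just an example
    Mat texture = imread("texture.png");

    // output size of the mapped result (arbitrary for this sketch)
    int outW = 256, outH = 256;

    // identity maps that only resize, like the gradient images above:
    // each pixel of map_x / map_y stores the source x / y coordinate to sample
    Mat map_x(outH, outW, CV_32FC1), map_y(outH, outW, CV_32FC1);
    for (int y = 0; y < outH; y++)
        for (int x = 0; x < outW; x++)
        {
            map_x.at<float>(y, x) = x * (texture.cols - 1.0f) / (outW - 1.0f);
            map_y.at<float>(y, x) = y * (texture.rows - 1.0f) / (outH - 1.0f);
        }

    Mat wrapped;
    remap(texture, wrapped, map_x, map_y, INTER_LINEAR);
    imwrite("wrapped.png", wrapped);
    return 0;
}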


Answer 1:


If I understand you right, you want to do all of it in 2D?

Calling warpPerspective() for each of your cube surfaces will be much more successful than using remap().

pseudocode outline:

// for each surface:
//   get the desired src and dst polygon.
//   the src one is your texture image, so that's:
     vector<Point2f> p_src(4), p_dst(4);   // getPerspectiveTransform() needs float points
     p_src[0] = Point2f(0, 0);
     p_src[1] = Point2f(0, src.rows - 1);
     p_src[2] = Point2f(src.cols - 1, 0);
     p_src[3] = Point2f(src.cols - 1, src.rows - 1);
// the dst poly is the one you want textured, a 3d->2d projection of the cube surface.
// sorry, you've got to compute that on your own ;(
// let's say you've come up with this for the cube top
// (corners listed in the same order as p_src):
     p_dst[0] = Point2f(15, 15);
     p_dst[1] = Point2f(44, 19);
     p_dst[2] = Point2f(56, 30);
     p_dst[3] = Point2f(33, 44);

// now you need the projection matrix to transform from one to the other:
Mat proj = getPerspectiveTransform(p_src, p_dst);

// finally, you can warp your texture to the dst polygon; 'dst' is your output
// image here, so it must already be allocated at the desired output size:
warpPerspective(src, dst, proj, dst.size());
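If several faces need texturing into the same output image, one possible compositing step (my addition, not part of the original answer; 'result' is a hypothetical image that accumulates all textured faces) is to warp each face into a temporary image and copy only the destination quad through a mask:

// warp one face into a temporary image the size of the final result:
Mat warped;
warpPerspective(src, warped, proj, result.size());

// build a mask for the destination quad and copy only that region into 'result':
Mat mask(result.size(), CV_8UC1, Scalar(0));
vector<Point> quad;
for (const Point2f& p : p_dst) quad.push_back(Point(cvRound(p.x), cvRound(p.y)));
fillConvexPoly(mask, quad, Scalar(255));
warped.copyTo(result, mask);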

If you can get hold of the 'Learning OpenCV' book, this is described around p. 170.

A final word of warning, since you're complaining about artifacts: yes, it'll all look pretty cheesy. 'Real' 3D engines do a lot of work here (sub-pixel UV mapping, filtering, mipmapping, etc.), so if you want it to look nice, consider using the real thing.

By the way, there's nice OpenGL support built into OpenCV.




Answer 2:


To achieve what you are trying to do, you need to render the 3D model's UVs to a texture. It will be easier to learn to render 3D than to do things this way, especially since there are a lot of weaknesses in your approach: lighting will be difficult to do, and problems with the depth buffer will be abundant.

Assuming all your objects will only ever be viewed from one angle, you need to render each of them to 3 textures:

UV-map
Normal-map
Depth-map (to correct the depth-buffer)

You will still have to do shading in order to make these look like your object, and I don't even know how to do the depth-buffer part; I just know it can be done.
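If you did go this route, the rendered UV map could be fed straight into the remap() call you are already using. A rough sketch, assuming the UV map is a 32-bit float image whose first two channels hold normalized (u, v) texture coordinates per output pixel (that format and the file names are my assumption, not something specified above):

Mat uvmap = imread("uvmap.exr", IMREAD_UNCHANGED);  // rendered once per object, offline
Mat texture = imread("texture.png");                // the image you want to wrap

// split off the u and v channels and scale them to texture pixel coordinates:
vector<Mat> uv;
split(uvmap, uv);
Mat map_x = uv[0] * (texture.cols - 1.0f);
Mat map_y = uv[1] * (texture.rows - 1.0f);

// sample the texture through the UV map, just like the remap() usage in the question:
Mat textured;
remap(texture, textured, map_x, map_y, INTER_LINEAR);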

So in order to avoid learning 3D, you will have to learn all the difficult parts of 3D rendering. That does not seem like the easier route...



Source: https://stackoverflow.com/questions/14878467/how-to-generate-texture-mapping-images
