Convert satellite photos of Earth into texture maps on a sphere (OpenGL ES)


Question


We have 5 geostationary satellites, spaced around the equator (not equally spaced, but almost), taking photos of Earth every day. The output of each photo is - surprise! - a photo of a sphere, taken from a long distance away.

I need to reassemble those photos into a single texture-mapped sphere, and I'm not sure how best to do this. Key problems:

  1. The photos are - obviously - increasingly distorted the further you go from the center, since they're looking at a sphere
  2. There are many hundreds of "sets" of 5 photos, taken at different times of day. Any solution needs to be programmatic - I can't just do this by hand :(
  3. Output platform is the iPad 3: OpenGL ES 2, textures up to 4096x4096 - but not as powerful as a desktop GPU. I'm not great with shaders (although I've done a lot of pre-shader OpenGL)
  4. The photos themselves are high-res, and I'm not sure I can have all 5 textures loaded simultaneously. I've also got a very high-res texture loaded for the planet surface (underneath the satellite photos).

I've already got a single rectangular texture mapped onto a sphere (my sphere is a standard mesh wrapped into a sphere, with vertices distributed evenly across the surface), so I tried converting the 5 photos of spheres into a single rectangular map - no success so far, although someone pointed me at a "polar sin warp" which looks like it might work better.

I've also thought of doing something funky with making a cube-map out of the 5 photos, and being clever about deciding which of the photos to read for a given pixel, but I'm not entirely convinced.

Is there a better way? Something I've overlooked? Or has anyone got a concrete way of achieving the above?


Answer 1:


I would build a single rectangular texture from them.

You will need two 2D textures/arrays: one for the r,g,b color sums avg and one for the sample counts cnt. Also, I am not convinced I would use OpenGL/GLSL for this; it seems to me that C/C++ is better suited.

I would do it like this (a C/C++ sketch of the whole pass follows the steps):

  1. blank the destination textures (avg[][]=0, cnt[][]=0)
  2. obtain the satellite position/direction and the time of the photo

    From the position and direction, create a transformation matrix which projects the Earth the same way as in the photo. Then, from the time, determine the rotation shift.

  3. loop through the entire Earth's surface

    just two nested loops: a - rotation, and b - distance from the equator

  4. get x,y,z from a,b via the transform matrix plus the rotation shift (on the a axis)

    You can also do it backwards, a,b,z = f(x,y); that is trickier, but faster and more accurate. You can also interpolate x,y,z between neighboring (pixels/areas) [a][b]

  5. add pixel

    if x,y,z is on the front side (z>0 or z<0, depending on the camera's Z direction), then

    avg[a][b]+=image[x][y]; cnt[a][b]++;
    
  6. end of the nested loop from step #3.

  7. go to step #2 with the next photo
  8. loop through the entire avg texture to compute the average color

    if (cnt[a][b]) avg[a][b]/=cnt[a][b];
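
To make the steps concrete, here is a minimal C/C++ sketch of the whole pass (per the suggestion above to do this in C/C++ rather than GLSL). The matrix layout, mul_matrix_vector and the sample_photo placeholder are illustrative assumptions, not part of the original; plug in your own matrix math and image access:

    #include <cmath>
    #include <vector>

    const float PI = 3.14159265f;
    const int   NA = 2048, NB = 1024;           // destination map resolution (a,b)
    std::vector<float> avg(NA * NB * 3, 0.0f);  // r,g,b color sums
    std::vector<int>   cnt(NA * NB, 0);         // sample counts

    // 3x3 row-major matrix * vector (assumed layout)
    void mul_matrix_vector(float out[3], const float M[9], const float v[3])
    {
        for (int r = 0; r < 3; r++)
            out[r] = M[3*r+0]*v[0] + M[3*r+1]*v[1] + M[3*r+2]*v[2];
    }

    // placeholder - read the photo pixel at orthographic coords x,y
    // (x,y in <-1,+1>; return false when outside the Earth disc)
    bool sample_photo(float x, float y, float rgb[3])
    {
        (void)x; (void)y; (void)rgb;
        return false;
    }

    // steps #2..#7 for one photo; M rotates Earth coordinates into
    // satellite coordinates (built from position/direction + time shift)
    void accumulate_photo(const float M[9])
    {
        for (int ia = 0; ia < NA; ia++)
         for (int ib = 0; ib < NB; ib++)
            {
            float a = (2.0f * PI * ia) / NA;        // rotation <0,2*PI)
            float b = (PI * ib) / NB - 0.5f * PI;   // equator distance <-PI/2,+PI/2>
            // unit-sphere surface point in Earth coordinates
            float v[3] = { cosf(b)*cosf(a), cosf(b)*sinf(a), sinf(b) };
            float p[3]; mul_matrix_vector(p, M, v); // into satellite space
            if (p[2] <= 0.0f) continue;             // back side - not on this photo
            float rgb[3];
            if (!sample_photo(p[0], p[1], rgb)) continue;
            int i = ib * NA + ia;                   // destination texel
            avg[3*i+0] += rgb[0];
            avg[3*i+1] += rgb[1];
            avg[3*i+2] += rgb[2];
            cnt[i]++;
            }
    }

    // step #8 - turn the sums into averages
    void finish_average()
    {
        for (int i = 0; i < NA * NB; i++)
         if (cnt[i])
            {
            avg[3*i+0] /= cnt[i];
            avg[3*i+1] /= cnt[i];
            avg[3*i+2] /= cnt[i];
            }
    }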
    

[Notes]

  1. you can test whether the copied pixel was:

    obtained during day or night (use only what you want and do not mix the two together!!!). You can also detect clouds (I think gray/white-ish colors, not snow) and ignore them.

  2. do not overflow the colors

    you can use 3 separate textures r[][], g[][], b[][] instead of avg to avoid that

  3. you can ignore areas near the edges of the Earth's disc to avoid distortion

  4. you can apply lighting corrections

    from the time and the a,b coordinates, to normalize illumination (see the sketch after these notes)
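
For notes 1 and 4, here is a minimal sketch of the day/night test and a simple Lambert-style illumination normalization. Computing the Sun direction from the photo time is standard astronomy bookkeeping not shown here, and the names are illustrative assumptions:

    // v   - unit surface point on the Earth sphere (it is its own normal)
    // sun - unit vector toward the Sun in Earth coordinates, derived
    //       from the photo's timestamp (not shown here)
    float sun_cos(const float v[3], const float sun[3])
    {
        return v[0]*sun[0] + v[1]*sun[1] + v[2]*sun[2];
    }

    // usage inside the accumulation loop:
    //   float c = sun_cos(v, sun);
    //   if (c <= 0.05f) continue;   // night side / near terminator - skip
    //   rgb[k] /= c;                // rough illumination normalization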

Hope it helps ...

[Edit1] orthogonal projection

So it's clear, here is what I mean by orthogonal projection:

this is the texture used (I can't find anything better suited and free on the web, and I wanted to use a real satellite image, not some rendered one ...)

this is my orthogonal projection app

  • the red, green, blue lines are the Earth coordinate system (x,y,z axes)
  • the (red,green,blue)-white-ish lines are the satellite projection coordinate system (x,y,z axes)

The point is to convert Earth vertex coordinates (vx,vy,vz) to satellite coordinates (x,y,z). If z >= 0, then it is a valid vertex for the processed texture, so compute the texture coordinates directly from x,y without any perspective (orthogonally).

For example tx=0.5*(+x+1); ... if x was scaled to <-1,+1> and the usable texture range is tx in <0,1>. The same goes for the y axis: ty=0.5*(-y+1); ... if y was scaled to <-1,+1> and the usable range is ty in <0,1> (my camera has an inverted y coordinate system with respect to the texture matrix, hence the inverted sign on the y axis).

If z < 0, then you are processing a vertex out of the texture range, so ignore it ... as you can see in the image, the outer boundaries of the texture are distorted, so you should use only the inside (for example 70% of the Earth image area). You can also do some kind of texture coordinate correction dependent on the distance from the texture's middle point. When you have this done, just merge all the satellite image projections into one image, and that is all. A minimal helper for this mapping is sketched below.
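
Put together, that mapping fits in one small helper - a sketch only, assuming x,y,z have already been transformed into satellite coordinates with x,y scaled to <-1,+1> (the function name is mine):

    // returns false when the vertex is on the far side of the Earth
    bool satellite_texcoord(float x, float y, float z, float &tx, float &ty)
    {
        if (z < 0.0f) return false;
        tx = 0.5f * (+x + 1.0f);   // x in <-1,+1> -> tx in <0,1>
        ty = 0.5f * (-y + 1.0f);   // y sign flipped (camera vs. texture orientation)
        return true;
    }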

[Edit2] Well, I played with it a little and found the following:

  • the reverse projection correction does not work for my texture at all; I think it is possible the image is post-processed ...
  • the middle-point distance based correction seems nice, but the scale coefficient used is odd; I have no clue why to multiply by 6 when it should be 4, I think ...

    tx=0.5*(+(asin(x)*6.0/M_PI)+1); 
    ty=0.5*(-(asin(y)*6.0/M_PI)+1); 
    

  • corrected nonlinear projection (by asin)

  • corrected nonlinear projection edge zoom
  • distortions are much, much smaller than without the asin texture coordinate corrections


Source: https://stackoverflow.com/questions/11180015/convert-satellite-photos-of-earth-into-texture-maps-on-a-sphere-opengl-es
