proportions of a perspective-deformed rectangle

Submitted by 早过忘川 on 2019-12-28 01:43:09

Question


Given a 2d picture of a rectangle distorted by perspective:

I know that the shape was originally a rectangle, but I do not know its original size.

If I know the pixel coordinates of the corners in this picture, how can I calculate the original proportions, i.e. the quotient ( width / height ) of the rectangle?

(Background: the goal is to automatically undistort photos of rectangular documents; edge detection will probably be done with a Hough transform.)

UPDATE:

There has been some discussion on whether it is possible at all to determine the width:height ratio from the information given. My naive thought was that it must be possible, since I can think of no way to project, for example, a 1:4 rectangle onto the quadrangle depicted above. The ratio appears to be close to 1:1, so there should be a way to determine it mathematically. I have, however, no proof for this beyond my intuitive guess.

I have not yet fully understood the arguments presented below, but I think there must be some implicit assumption that we are missing here and that is being interpreted differently.

However, after hours of searching, I have finally found some papers relevant to the problem. I am struggling to understand the math used in them, so far without success. The first paper in particular seems to discuss exactly what I want to do, unfortunately without code examples and with very dense math.

  • Zhengyou Zhang, Li-Wei He, "Whiteboard scanning and image enhancement" http://research.microsoft.com/en-us/um/people/zhang/papers/tr03-39.pdf p.11

    "Because of the perspective distortion, the image of a rectangle appears to be a quadrangle. However, since we know that it is a rectangle in space, we are able to estimate both the camera’s focal length and the rectangle’s aspect ratio."

  • Robert M. Haralick, "Determining camera parameters from the perspective projection of a rectangle" http://portal.acm.org/citation.cfm?id=87146

    "we show how to use the 2D perspective projection of a rectangle of unknown size and position in 3D space to determine the camera look angle parameters relative to the plane of the rectangle."


Answer 1:


Here is my attempt at answering my question after reading the paper

  • Zhengyou Zhang , Li-Wei He, "Whiteboard scanning and image enhancement" http://research.microsoft.com/en-us/um/people/zhang/papers/tr03-39.pdf

I manipulated the equations for some time in Sage and came up with this C-style pseudo-code:


// in case it matters: licensed under GPLv2 or later
#include <math.h>         // for sqrt()
#define sqr(x) ((x)*(x))  // legend: sqr(x) = x*x

// let m1x,m1y ... m4x,m4y be the (x,y) pixel coordinates
// of the 4 corners of the detected quadrangle
// i.e. (m1x, m1y) are the coordinates of the first corner, 
// (m2x, m2y) of the second corner and so on.
// let u0, v0 be the pixel coordinates of the principal point of the image
// for a normal camera this will be the center of the image, 
// i.e. u0=IMAGEWIDTH/2; v0 =IMAGEHEIGHT/2
// This assumption does not hold if the image has been cropped asymmetrically

// first, transform the image so the principal point is at (0,0)
// this makes the following equations much easier
m1x = m1x - u0;
m1y = m1y - v0;
m2x = m2x - u0;
m2y = m2y - v0;
m3x = m3x - u0;
m3y = m3y - v0;
m4x = m4x - u0;
m4y = m4y - v0;


// temporary variables k2, k3
double k2 = ((m1y - m4y)*m3x - (m1x - m4x)*m3y + m1x*m4y - m1y*m4x) /
            ((m2y - m4y)*m3x - (m2x - m4x)*m3y + m2x*m4y - m2y*m4x) ;

double k3 = ((m1y - m4y)*m2x - (m1x - m4x)*m2y + m1x*m4y - m1y*m4x) / 
            ((m3y - m4y)*m2x - (m3x - m4x)*m2y + m3x*m4y - m3y*m4x) ;

// f_squared is the focal length of the camera, squared
// if k2==1 OR k3==1 then this equation is not solvable
// if the focal length is known, then this equation is not needed
// in that case assign f_squared= sqr(focal_length)
double f_squared = 
    -((k3*m3y - m1y)*(k2*m2y - m1y) + (k3*m3x - m1x)*(k2*m2x - m1x)) / 
                      ((k3 - 1)*(k2 - 1)) ;

//The width/height ratio of the original rectangle
double whRatio = sqrt( 
    (sqr(k2 - 1) + sqr(k2*m2y - m1y)/f_squared + sqr(k2*m2x - m1x)/f_squared) /
    (sqr(k3 - 1) + sqr(k3*m3y - m1y)/f_squared + sqr(k3*m3x - m1x)/f_squared) 
) ;

// if k2==1 AND k3==1, then the focal length equation is not solvable 
// but the focal length is not needed to calculate the ratio.
// I am still trying to figure out under which circumstances k2 and k3 become 1
// but it seems to be when the rectangle is not distorted by perspective, 
// i.e. viewed straight on. Then the equation is obvious:
if (k2==1 && k3==1) whRatio = sqrt( 
    (sqr(m2y-m1y) + sqr(m2x-m1x)) / 
    (sqr(m3y-m1y) + sqr(m3x-m1x))
);

// After testing, I found that the above equations 
// actually give the height/width ratio of the rectangle, 
// not the width/height ratio. 
// If someone can find the error that caused this, 
// I would be most grateful.
// until then:
whRatio = 1/whRatio;
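For reference, here is a minimal runnable Python translation of the pseudo-code above (the function name and interface are mine; it assumes square pixels and, unless a known focal length is passed in, k2 != 1 and k3 != 1 as discussed in the comments):

import math

def aspect_ratio(m1, m2, m3, m4, u0, v0, f=None):
    """Width/height ratio of a perspective-distorted rectangle.

    m1..m4: (x, y) pixel coordinates of the corners, ordered as in
    [zhang-single] figure 1. (u0, v0): principal point, normally the
    image centre. Pass f if the focal length is known.
    """
    # move the principal point to the origin
    m1x, m1y = m1[0] - u0, m1[1] - v0
    m2x, m2y = m2[0] - u0, m2[1] - v0
    m3x, m3y = m3[0] - u0, m3[1] - v0
    m4x, m4y = m4[0] - u0, m4[1] - v0

    k2 = (((m1y - m4y)*m3x - (m1x - m4x)*m3y + m1x*m4y - m1y*m4x) /
          ((m2y - m4y)*m3x - (m2x - m4x)*m3y + m2x*m4y - m2y*m4x))
    k3 = (((m1y - m4y)*m2x - (m1x - m4x)*m2y + m1x*m4y - m1y*m4x) /
          ((m3y - m4y)*m2x - (m3x - m4x)*m2y + m3x*m4y - m3y*m4x))

    if k2 == 1 and k3 == 1:
        # no perspective distortion: plain Euclidean side ratio
        ratio = math.sqrt(((m2y - m1y)**2 + (m2x - m1x)**2) /
                          ((m3y - m1y)**2 + (m3x - m1x)**2))
    else:
        if f is not None:
            f2 = f * f
        else:
            # not solvable if k2 == 1 or k3 == 1 (see the note above)
            f2 = -((k3*m3y - m1y)*(k2*m2y - m1y) +
                   (k3*m3x - m1x)*(k2*m2x - m1x)) / ((k3 - 1)*(k2 - 1))
        ratio = math.sqrt(
            ((k2 - 1)**2 + ((k2*m2y - m1y)**2 + (k2*m2x - m1x)**2)/f2) /
            ((k3 - 1)**2 + ((k3*m3y - m1y)**2 + (k3*m3x - m1x)**2)/f2))

    return 1/ratio  # invert, per the note above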

Update: here is how these equations were determined:

The following is code in Sage. It can be accessed online at http://www.sagenb.org/home/pub/704/. (Sage is really useful for solving equations, and usable in any browser; check it out.)

# CALCULATING THE ASPECT RATIO OF A RECTANGLE DISTORTED BY PERSPECTIVE

#
# BIBLIOGRAPHY:
# [zhang-single]: "Single-View Geometry of A Rectangle 
#  With Application to Whiteboard Image Rectification"
#  by Zhengyou Zhang
#  http://research.microsoft.com/users/zhang/Papers/WhiteboardRectification.pdf

# pixel coordinates of the 4 corners of the quadrangle (m1, m2, m3, m4)
# see [zhang-single] figure 1
m1x = var('m1x')
m1y = var('m1y')
m2x = var('m2x')
m2y = var('m2y')
m3x = var('m3x')
m3y = var('m3y')
m4x = var('m4x')
m4y = var('m4y')

# pixel coordinates of the principal point of the image
# for a normal camera this will be the center of the image, 
# i.e. u0=IMAGEWIDTH/2; v0 =IMAGEHEIGHT/2
# This assumption does not hold if the image has been cropped asymmetrically
u0 = var('u0')
v0 = var('v0')

# pixel aspect ratio; for a normal camera pixels are square, so s=1
s = var('s')

# homogeneous coordinates of the quadrangle
m1 = vector ([m1x,m1y,1])
m2 = vector ([m2x,m2y,1])
m3 = vector ([m3x,m3y,1])
m4 = vector ([m4x,m4y,1])


# the following equations are later used in calculating the focal length 
# and the rectangle's aspect ratio.
# temporary variables: k2, k3, n2, n3

# see [zhang-single] Equation 11, 12
k2_ = m1.cross_product(m4).dot_product(m3) / m2.cross_product(m4).dot_product(m3)
k3_ = m1.cross_product(m4).dot_product(m2) / m3.cross_product(m4).dot_product(m2)
k2 = var('k2')
k3 = var('k3')

# see [zhang-single] Equation 14,16
n2 = k2 * m2 - m1
n3 = k3 * m3 - m1


# the focal length of the camera.
f = var('f')
# see [zhang-single] Equation 21
f_ = sqrt(
         -1 / (
          n2[2]*n3[2]*s^2
         ) * (
          (
           n2[0]*n3[0] - (n2[0]*n3[2]+n2[2]*n3[0])*u0 + n2[2]*n3[2]*u0^2
          )*s^2 + (
           n2[1]*n3[1] - (n2[1]*n3[2]+n2[2]*n3[1])*v0 + n2[2]*n3[2]*v0^2
          ) 
         ) 
        )


# standard pinhole camera matrix
# see [zhang-single] Equation 1
A = matrix([[f,0,u0],[0,s*f,v0],[0,0,1]])


#the width/height ratio of the original rectangle
# see [zhang-single] Equation 20
whRatio = sqrt (
               (n2*A.transpose()^(-1) * A^(-1)*n2.transpose()) / 
               (n3*A.transpose()^(-1) * A^(-1)*n3.transpose())
              ) 

The simplified equations in the C code were determined by:

print "simplified equations, assuming u0=0, v0=0, s=1"
print "k2 := ", k2_
print "k3 := ", k3_
print "f  := ", f_(u0=0,v0=0,s=1)
print "whRatio := ", whRatio(u0=0,v0=0,s=1)

    simplified equations, assuming u0=0, v0=0, s=1
    k2 :=  ((m1y - m4y)*m3x - (m1x - m4x)*m3y + m1x*m4y - m1y*m4x)/((m2y
    - m4y)*m3x - (m2x - m4x)*m3y + m2x*m4y - m2y*m4x)
    k3 :=  ((m1y - m4y)*m2x - (m1x - m4x)*m2y + m1x*m4y - m1y*m4x)/((m3y
    - m4y)*m2x - (m3x - m4x)*m2y + m3x*m4y - m3y*m4x)
    f  :=  sqrt(-((k3*m3y - m1y)*(k2*m2y - m1y) + (k3*m3x - m1x)*(k2*m2x
    - m1x))/((k3 - 1)*(k2 - 1)))
    whRatio :=  sqrt(((k2 - 1)^2 + (k2*m2y - m1y)^2/f^2 + (k2*m2x -
    m1x)^2/f^2)/((k3 - 1)^2 + (k3*m3y - m1y)^2/f^2 + (k3*m3x -
    m1x)^2/f^2))

print "Everything in one equation:"
print "whRatio := ", whRatio(f=f_)(k2=k2_,k3=k3_)(u0=0,v0=0,s=1)

    Everything in one equation:
    whRatio :=  sqrt(((((m1y - m4y)*m2x - (m1x - m4x)*m2y + m1x*m4y -
    m1y*m4x)/((m3y - m4y)*m2x - (m3x - m4x)*m2y + m3x*m4y - m3y*m4x) -
    1)*(((m1y - m4y)*m3x - (m1x - m4x)*m3y + m1x*m4y - m1y*m4x)/((m2y -
    m4y)*m3x - (m2x - m4x)*m3y + m2x*m4y - m2y*m4x) - 1)*(((m1y -
    m4y)*m3x - (m1x - m4x)*m3y + m1x*m4y - m1y*m4x)*m2y/((m2y - m4y)*m3x
    - (m2x - m4x)*m3y + m2x*m4y - m2y*m4x) - m1y)^2/((((m1y - m4y)*m2x -
    (m1x - m4x)*m2y + m1x*m4y - m1y*m4x)*m3y/((m3y - m4y)*m2x - (m3x -
    m4x)*m2y + m3x*m4y - m3y*m4x) - m1y)*(((m1y - m4y)*m3x - (m1x -
    m4x)*m3y + m1x*m4y - m1y*m4x)*m2y/((m2y - m4y)*m3x - (m2x - m4x)*m3y
    + m2x*m4y - m2y*m4x) - m1y) + (((m1y - m4y)*m2x - (m1x - m4x)*m2y +
    m1x*m4y - m1y*m4x)*m3x/((m3y - m4y)*m2x - (m3x - m4x)*m2y + m3x*m4y
    - m3y*m4x) - m1x)*(((m1y - m4y)*m3x - (m1x - m4x)*m3y + m1x*m4y -
    m1y*m4x)*m2x/((m2y - m4y)*m3x - (m2x - m4x)*m3y + m2x*m4y - m2y*m4x)
    - m1x)) + (((m1y - m4y)*m2x - (m1x - m4x)*m2y + m1x*m4y -
    m1y*m4x)/((m3y - m4y)*m2x - (m3x - m4x)*m2y + m3x*m4y - m3y*m4x) -
    1)*(((m1y - m4y)*m3x - (m1x - m4x)*m3y + m1x*m4y - m1y*m4x)/((m2y -
    m4y)*m3x - (m2x - m4x)*m3y + m2x*m4y - m2y*m4x) - 1)*(((m1y -
    m4y)*m3x - (m1x - m4x)*m3y + m1x*m4y - m1y*m4x)*m2x/((m2y - m4y)*m3x
    - (m2x - m4x)*m3y + m2x*m4y - m2y*m4x) - m1x)^2/((((m1y - m4y)*m2x -
    (m1x - m4x)*m2y + m1x*m4y - m1y*m4x)*m3y/((m3y - m4y)*m2x - (m3x -
    m4x)*m2y + m3x*m4y - m3y*m4x) - m1y)*(((m1y - m4y)*m3x - (m1x -
    m4x)*m3y + m1x*m4y - m1y*m4x)*m2y/((m2y - m4y)*m3x - (m2x - m4x)*m3y
    + m2x*m4y - m2y*m4x) - m1y) + (((m1y - m4y)*m2x - (m1x - m4x)*m2y +
    m1x*m4y - m1y*m4x)*m3x/((m3y - m4y)*m2x - (m3x - m4x)*m2y + m3x*m4y
    - m3y*m4x) - m1x)*(((m1y - m4y)*m3x - (m1x - m4x)*m3y + m1x*m4y -
    m1y*m4x)*m2x/((m2y - m4y)*m3x - (m2x - m4x)*m3y + m2x*m4y - m2y*m4x)
    - m1x)) - (((m1y - m4y)*m3x - (m1x - m4x)*m3y + m1x*m4y -
    m1y*m4x)/((m2y - m4y)*m3x - (m2x - m4x)*m3y + m2x*m4y - m2y*m4x) -
    1)^2)/((((m1y - m4y)*m2x - (m1x - m4x)*m2y + m1x*m4y -
    m1y*m4x)/((m3y - m4y)*m2x - (m3x - m4x)*m2y + m3x*m4y - m3y*m4x) -
    1)*(((m1y - m4y)*m3x - (m1x - m4x)*m3y + m1x*m4y - m1y*m4x)/((m2y -
    m4y)*m3x - (m2x - m4x)*m3y + m2x*m4y - m2y*m4x) - 1)*(((m1y -
    m4y)*m2x - (m1x - m4x)*m2y + m1x*m4y - m1y*m4x)*m3y/((m3y - m4y)*m2x
    - (m3x - m4x)*m2y + m3x*m4y - m3y*m4x) - m1y)^2/((((m1y - m4y)*m2x -
    (m1x - m4x)*m2y + m1x*m4y - m1y*m4x)*m3y/((m3y - m4y)*m2x - (m3x -
    m4x)*m2y + m3x*m4y - m3y*m4x) - m1y)*(((m1y - m4y)*m3x - (m1x -
    m4x)*m3y + m1x*m4y - m1y*m4x)*m2y/((m2y - m4y)*m3x - (m2x - m4x)*m3y
    + m2x*m4y - m2y*m4x) - m1y) + (((m1y - m4y)*m2x - (m1x - m4x)*m2y +
    m1x*m4y - m1y*m4x)*m3x/((m3y - m4y)*m2x - (m3x - m4x)*m2y + m3x*m4y
    - m3y*m4x) - m1x)*(((m1y - m4y)*m3x - (m1x - m4x)*m3y + m1x*m4y -
    m1y*m4x)*m2x/((m2y - m4y)*m3x - (m2x - m4x)*m3y + m2x*m4y - m2y*m4x)
    - m1x)) + (((m1y - m4y)*m2x - (m1x - m4x)*m2y + m1x*m4y -
    m1y*m4x)/((m3y - m4y)*m2x - (m3x - m4x)*m2y + m3x*m4y - m3y*m4x) -
    1)*(((m1y - m4y)*m3x - (m1x - m4x)*m3y + m1x*m4y - m1y*m4x)/((m2y -
    m4y)*m3x - (m2x - m4x)*m3y + m2x*m4y - m2y*m4x) - 1)*(((m1y -
    m4y)*m2x - (m1x - m4x)*m2y + m1x*m4y - m1y*m4x)*m3x/((m3y - m4y)*m2x
    - (m3x - m4x)*m2y + m3x*m4y - m3y*m4x) - m1x)^2/((((m1y - m4y)*m2x -
    (m1x - m4x)*m2y + m1x*m4y - m1y*m4x)*m3y/((m3y - m4y)*m2x - (m3x -
    m4x)*m2y + m3x*m4y - m3y*m4x) - m1y)*(((m1y - m4y)*m3x - (m1x -
    m4x)*m3y + m1x*m4y - m1y*m4x)*m2y/((m2y - m4y)*m3x - (m2x - m4x)*m3y
    + m2x*m4y - m2y*m4x) - m1y) + (((m1y - m4y)*m2x - (m1x - m4x)*m2y +
    m1x*m4y - m1y*m4x)*m3x/((m3y - m4y)*m2x - (m3x - m4x)*m2y + m3x*m4y
    - m3y*m4x) - m1x)*(((m1y - m4y)*m3x - (m1x - m4x)*m3y + m1x*m4y -
    m1y*m4x)*m2x/((m2y - m4y)*m3x - (m2x - m4x)*m3y + m2x*m4y - m2y*m4x)
    - m1x)) - (((m1y - m4y)*m2x - (m1x - m4x)*m2y + m1x*m4y -
    m1y*m4x)/((m3y - m4y)*m2x - (m3x - m4x)*m2y + m3x*m4y - m3y*m4x) -
    1)^2))


# some testing:
# - choose a random rectangle, 
# - project it onto a random plane,
# - insert the corners in the above equations,
# - check if the aspect ratio is correct.

from sage.plot.plot3d.transform import rotate_arbitrary

# redundantly random rotation matrix
rand_rotMatrix = \
           rotate_arbitrary((uniform(-5,5),uniform(-5,5),uniform(-5,5)),uniform(-5,5)) *\
           rotate_arbitrary((uniform(-5,5),uniform(-5,5),uniform(-5,5)),uniform(-5,5)) *\
           rotate_arbitrary((uniform(-5,5),uniform(-5,5),uniform(-5,5)),uniform(-5,5))

#random translation vector
rand_transVector = vector((uniform(-10,10),uniform(-10,10),uniform(-10,10))).transpose()

#random rectangle parameters
rand_width =uniform(0.1,10)
rand_height=uniform(0.1,10)
rand_left  =uniform(-10,10)
rand_top   =uniform(-10,10)

#random focal length and principal point
rand_f  = uniform(0.1,100)
rand_u0 = uniform(-100,100)
rand_v0 = uniform(-100,100)

# homogeneous standard pinhole projection, see [zhang-single] Equation 1
hom_projection = A * rand_rotMatrix.augment(rand_transVector)

# construct a random rectangle in the plane z=0, then project it randomly 
rand_m1hom = hom_projection*vector((rand_left           ,rand_top            ,0,1)).transpose()
rand_m2hom = hom_projection*vector((rand_left           ,rand_top+rand_height,0,1)).transpose()
rand_m3hom = hom_projection*vector((rand_left+rand_width,rand_top            ,0,1)).transpose()
rand_m4hom = hom_projection*vector((rand_left+rand_width,rand_top+rand_height,0,1)).transpose()

# change type from 3x1 matrix to vector
rand_m1hom = rand_m1hom.column(0)
rand_m2hom = rand_m2hom.column(0)
rand_m3hom = rand_m3hom.column(0)
rand_m4hom = rand_m4hom.column(0)

#normalize
rand_m1hom = rand_m1hom/rand_m1hom[2]
rand_m2hom = rand_m2hom/rand_m2hom[2]
rand_m3hom = rand_m3hom/rand_m3hom[2]
rand_m4hom = rand_m4hom/rand_m4hom[2]

#substitute random values for f, u0, v0
rand_m1hom = rand_m1hom(f=rand_f,s=1,u0=rand_u0,v0=rand_v0)
rand_m2hom = rand_m2hom(f=rand_f,s=1,u0=rand_u0,v0=rand_v0)
rand_m3hom = rand_m3hom(f=rand_f,s=1,u0=rand_u0,v0=rand_v0)
rand_m4hom = rand_m4hom(f=rand_f,s=1,u0=rand_u0,v0=rand_v0)

# printing the randomly chosen values
print "ground truth: f=", rand_f, "; ratio=", rand_width/rand_height

# substitute all the variables in the equations:
print "calculated: f= ",\
f_(k2=k2_,k3=k3_)(s=1,u0=rand_u0,v0=rand_v0)(
  m1x=rand_m1hom[0],m1y=rand_m1hom[1],
  m2x=rand_m2hom[0],m2y=rand_m2hom[1],
  m3x=rand_m3hom[0],m3y=rand_m3hom[1],
  m4x=rand_m4hom[0],m4y=rand_m4hom[1],
),"; 1/ratio=", \
1/whRatio(f=f_)(k2=k2_,k3=k3_)(s=1,u0=rand_u0,v0=rand_v0)(
  m1x=rand_m1hom[0],m1y=rand_m1hom[1],
  m2x=rand_m2hom[0],m2y=rand_m2hom[1],
  m3x=rand_m3hom[0],m3y=rand_m3hom[1],
  m4x=rand_m4hom[0],m4y=rand_m4hom[1],
)

print "k2 = ", k2_(
  m1x=rand_m1hom[0],m1y=rand_m1hom[1],
  m2x=rand_m2hom[0],m2y=rand_m2hom[1],
  m3x=rand_m3hom[0],m3y=rand_m3hom[1],
  m4x=rand_m4hom[0],m4y=rand_m4hom[1],
), "; k3 = ", k3_(
  m1x=rand_m1hom[0],m1y=rand_m1hom[1],
  m2x=rand_m2hom[0],m2y=rand_m2hom[1],
  m3x=rand_m3hom[0],m3y=rand_m3hom[1],
  m4x=rand_m4hom[0],m4y=rand_m4hom[1],
)

# ATTENTION: testing revealed that the whRatio 
# is actually the height/width ratio, 
# not the width/height ratio.
# This contradicts [zhang-single].
# If anyone can find the error that caused this, I'd be grateful.

    ground truth: f= 72.1045134124554 ; ratio= 3.46538779959142
    calculated: f=  72.1045134125 ; 1/ratio= 3.46538779959
    k2 =  0.99114614987 ; k3 =  1.57376280159



Answer 2:


Update

After reading your update and looking at the first reference (Whiteboard scanning and image enhancement), I see what the missing piece is.

The input data of the problem is a quadruple (A,B,C,D), AND the center O of the projected image. In the article, this corresponds to the assumption u0=v0=0. With this added point, the problem becomes constrained enough to recover the aspect ratio of the rectangle.

The problem is then restated as follows: Given a quadruple (A,B,C,D) in the Z=0 plane, find the eye position E(0,0,h), h>0 and a 3D plane P such that the projection of (A,B,C,D) on P is a rectangle.

Note that P is determined by E: to get a parallelogram, P must contain parallels to (EU) and (EV), where U=(AB)x(CD) and V=(AD)x(BC).

Experimentally, it seems that this problem has, in general, a unique solution, corresponding to a unique value of the w/h ratio of the rectangle.

Previous Post

No, you can't determine the rectangle ratio from the projection.

In the general case, a quadruple (A,B,C,D) of four non-collinear points in the Z=0 plane is the projection of infinitely many rectangles, with infinitely many width/height ratios.

Consider the two vanishing points U, intersection of (AB) and (CD) and V, intersection of (AD) and (BC), and the point I, intersection of the two diagonals (AC) and (BD). To project as ABCD, a parallelogram of center I must lie on a plane containing the line parallel to (UV) through point I. On one such plane, you can find many rectangles projecting to ABCD, all with a different w/h ratio.
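In homogeneous coordinates, the points U, V and I above are plain cross products: the line through two points is their cross product, and the intersection of two lines is the cross product of the lines. A small numpy sketch (the example coordinates are arbitrary):

import numpy as np

def h(p):
    """Lift a 2D point to homogeneous coordinates."""
    return np.array([p[0], p[1], 1.0])

def meet(p, q, r, s):
    """Intersection of line (pq) with line (rs), as a homogeneous point."""
    return np.cross(np.cross(h(p), h(q)), np.cross(h(r), h(s)))

A, B, C, D = (0, 0), (4, 0.5), (4.2, 3), (-0.3, 2.8)  # some quadrangle ABCD
U = meet(A, B, C, D)   # vanishing point of sides (AB) and (CD)
V = meet(A, D, B, C)   # vanishing point of sides (AD) and (BC)
I = meet(A, C, B, D)   # intersection of the diagonals
# a finite point is recovered as (p[0]/p[2], p[1]/p[2]);
# p[2] == 0 means the lines are parallel (point at infinity)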

See these two images done with Cabri 3D. In both cases ABCD is unchanged (on the gray Z=0 plane), and the blue plane containing the rectangle is not changed either. The partially hidden green line is the (UV) line, and the visible green line is parallel to it and contains I.




Answer 3:


Size isn't really needed, and neither are proportions. And knowing which side is up is kind of irrelevant, considering he's using photos/scans of documents. I doubt he's going to scan the back sides of them.

"Corner intersection" is the method to correct perspective. This might be of help:

How to draw a Perspective-Correct Grid in 2D
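The key fact behind that technique is that the intersection of the quadrangle's diagonals is the image of the rectangle's centre, which lets you subdivide the quad recursively into a perspective-correct grid. A minimal numpy sketch of one subdivision step (the helper name is mine):

import numpy as np

def perspective_center(A, B, C, D):
    """Image of the rectangle's centre: where diagonals (AC) and (BD) cross."""
    hA, hB, hC, hD = [np.array([p[0], p[1], 1.0]) for p in (A, B, C, D)]
    I = np.cross(np.cross(hA, hC), np.cross(hB, hD))
    return I[:2] / I[2]   # back from homogeneous to pixel coordinates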




Answer 4:


On the question of why the results give h/w rather than w/h: I'm wondering if the expression of Equation 20 above is correct. What is posted is:

       whRatio = sqrt (
            (n2*A.transpose()^(-1) * A^(-1)*n2.transpose()) / 
            (n3*A.transpose()^(-1) * A^(-1)*n3.transpose())
           ) 

When I try to execute that with OpenCV, I get an exception, but everything works correctly when I use the following equation, which to me looks more like Equation 20:

        whRatio = sqrt (
            (n2.transpose()*A.transpose()^(-1) * A^(-1)*n2) /
            (n3.transpose()*A.transpose()^(-1) * A^(-1)*n3)
           )
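For what it's worth, in numpy the ambiguity disappears if n2 and n3 are kept as plain 1-D vectors, since the quadratic form is then unambiguous (a sketch, with A as the 3x3 camera matrix; note that A^(-T)*A^(-1) equals inv(A*A^T)):

import numpy as np

def wh_ratio(n2, n3, A):
    """sqrt( (n2^T (A A^T)^-1 n2) / (n3^T (A A^T)^-1 n3) ), as in Eq. 20."""
    M = np.linalg.inv(A @ A.T)       # == inv(A).T @ inv(A)
    return np.sqrt((n2 @ M @ n2) / (n3 @ M @ n3))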



Answer 5:


You can determine width/height using this answer: Calculating rectangle 3D coordinate with coordinate its shadow?. Assume that your rectangle rotates about the intersection point of its diagonals, and calculate its width and height. When you change the distance between the assumed shadow plane and the real shadow plane, the proportions of the rectangle stay the same as the calculated width/height!




Answer 6:


It is impossible to know the width of this rectangle without knowing the distance of the 'camera'.

A small rectangle viewed from 5 centimeters away looks the same as a huge rectangle seen from meters away.




Answer 7:


Draw a right isosceles triangle with those two vanishing points and a third point below the horizon (that is, on the same side of the horizon as the rectangle is). That third point will be our origin and the two lines to the vanishing points will be our axes. Call the distance from the origin to a vanishing point pi/2. Now extend the sides of the rectangle from the vanishing points to the axes, and mark where they intersect the axes. Pick an axis, measure the distances from the two marks to the origin, transform those distances: x->tan(x), and the difference will be the "true" length of that side. Do the same for the other axis. Take the ratio of those two lengths and you're done.




Answer 8:


You need more information; that transformed figure could come from any parallelogram, given an arbitrary perspective.

So I guess you need to do some kind of calibration first.

Edit: for those who said that I was wrong, here is a mathematical proof that there are infinitely many combinations of rectangles/cameras that yield the same projection:

In order to simplify the problem (as we only need the ratio of the sides) let's assume that our rectangle is defined by the following points: R=[(0,0),(1,0),(1,r),(0,r)] (this simplification is the same as transforming any problem to an equivalent one in an affine space).

The transformed polygon is defined as: T=[(tx0,ty0),(tx1,ty1),(tx2,ty2),(tx3,ty3)]

There exists a transformation matrix M = [[m00,m01,m02],[m10,m11,m12],[m20,m21,m22]] that satisfies M*(Rxi,Ryi,1)' = wi*(txi,tyi,1)'.

If we expand that equation for each of the points,

for R_0 we get: m02-tx0*w0 = m12-ty0*w0 = m22-w0 = 0

for R_1 we get: m00+m02-tx1*w1 = m10+m12-ty1*w1 = m20+m22-w1 = 0

for R_2 we get: m00+r*m01+m02-tx2*w2 = m10+r*m11+m12-ty2*w2 = m20+r*m21+m22-w2 = 0

and for R_3 we get: r*m01+m02-tx3*w3 = r*m11+m12-ty3*w3 = r*m21+m22-w3 = 0

So far we have 12 equations, 14 unknown variables (9 from the matrix, 4 from the wi, and 1 for the ratio r) and the rest are known values (txi and tyi are given).

Even if the system weren't underspecified, some of the unknowns are multiplied with each other (the products of r with m01, m11 and m21), making the system nonlinear (you could transform it into a linear system by assigning a new name to each product, but you would still end up with 13 unknowns, 3 of them expanding to infinitely many solutions).

If you can find any flaw in the reasoning or the maths, please let me know.
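The underdetermination claimed here is easy to demonstrate numerically for general projective transforms: for every ratio r there is an exact homography mapping the rectangle onto one and the same quadrangle (a sketch using OpenCV; the quadrangle coordinates are arbitrary):

import numpy as np
import cv2

T = np.float32([[10, 20], [300, 40], [280, 250], [30, 220]])  # fixed quadrangle

for r in (0.5, 1.0, 4.0):
    R = np.float32([[0, 0], [1, 0], [1, r], [0, r]])  # rectangle with ratio r
    M = cv2.getPerspectiveTransform(R, T)             # exact for 4 point pairs
    back = cv2.perspectiveTransform(R.reshape(-1, 1, 2), M).reshape(-1, 2)
    assert np.allclose(back, T, atol=1e-3)            # every r lands on T

The other answers resolve this by restricting M to the subset of homographies realizable by a pinhole camera with a known principal point, which is what pins the ratio down.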




Answer 9:


Dropbox has an extensive article on their tech blog where they describe how they solved the problem for their scanner app.

https://blogs.dropbox.com/tech/2016/08/fast-document-rectification-and-enhancement/

Rectifying a Document

We assume that the input document is rectangular in the physical world, but if it is not exactly facing the camera, the resulting corners in the image will be a general convex quadrilateral. So to satisfy our first goal, we must undo the geometric transform applied by the capture process. This transformation depends on the viewpoint of the camera relative to the document (these are the so-called extrinsic parameters), in addition to things like the focal length of the camera (the intrinsic parameters). Here’s a diagram of the capture scenario:

In order to undo the geometric transform, we must first determine the said parameters. If we assume a nicely symmetric camera (no astigmatism, no skew, et cetera), the unknowns in this model are:

  • the 3D location of the camera relative to the document (3 degrees of freedom),
  • the 3D orientation of the camera relative to the document (3 degrees of freedom),
  • the dimensions of the document (2 degrees of freedom), and
  • the focal length of the camera (1 degree of freedom).

On the flip side, the x- and y-coordinates of the four detected document corners give us effectively eight constraints. While there are seemingly more unknowns (9) than constraints (8), the unknowns are not entirely free variables—one could imagine scaling the document physically and placing it further from the camera, to obtain an identical photo. This relation places an additional constraint, so we have a fully constrained system to be solved. (The actual system of equations we solve involves a few other considerations; the relevant Wikipedia article gives a good summary: https://en.wikipedia.org/wiki/Camera_resectioning)

Once the parameters have been recovered, we can undo the geometric transform applied by the capture process to obtain a nice rectangular image. However, this is potentially a time-consuming process: one would look up, for each output pixel, the value of the corresponding input pixel in the source image. Of course, GPUs are specifically designed for tasks like this: rendering a texture in a virtual space. There exists a view transform—which happens to be the inverse of the camera transform we just solved for!—with which one can render the full input image and obtain the rectified document. (An easy way to see this is to note that once you have the full input image on the screen of your phone, you can tilt and translate the phone such that the projection of the document region on the screen appears rectilinear to you.)

Lastly, recall that there was an ambiguity with respect to scale: we can’t tell whether the document was a letter-sized paper (8.5” x 11”) or a poster board (17” x 22”), for instance. What should the dimensions of the output image be? To resolve this ambiguity, we count the number of pixels within the quadrilateral in the input image, and set the output resolution as to match this pixel count. The idea is that we don’t want to upsample or downsample the image too much.
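That last step is compact in code: measure the quadrilateral's pixel area with the shoelace formula and split it into output dimensions matching the recovered aspect ratio (a sketch of the idea, not Dropbox's actual code):

import numpy as np

def output_size(corners, aspect):
    """(width, height) whose pixel count matches the input quadrilateral.

    corners: 4x2 array of document corners in the photo, in order;
    aspect:  recovered width/height ratio of the physical document.
    """
    x, y = corners[:, 0], corners[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
    width = int(round(np.sqrt(area * aspect)))
    height = int(round(np.sqrt(area / aspect)))
    return width, height   # width * height ~ area, width / height ~ aspect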




Answer 10:


There seems to still be some confusion on this interesting problem. I want to give an easy-to-follow explanation for when the problem can and cannot be solved.

Constraints and Degrees of Freedom

Typically when we are faced with a problem like this, the first thing to do is to assess the number of unknown Degrees of Freedom (DoFs) N and the number of independent equations M that we have for constraining the unknown DoFs. It is impossible to solve the problem if N exceeds M (meaning there are fewer constraints than unknowns). We can rule out all problems where this is the case as unsolvable. If N does not exceed M, then it may be possible to solve the problem with a unique solution, but this is not guaranteed (see the second-to-last paragraph for an example).

Let's use p1, p2, p3 and p4 to denote the positions of the 4 corners of the planar surface in world coordinates. Let's use R and t to be the 3D rotation and translation that transforms these to camera coordinates. Let's use K to denote the 3x3 camera intrinsic matrix. We will ignore lens distortion for now. The 2D position of the ith corner in the camera's image is given by qi=f(K(Rpi+t)), where f is the projection function f(x,y,z)=(x/z,y/z). Using this equation we know that each corner in the image gives us two equations (i.e. two constraints) on our unknowns: one from the x component of qi and one from the y component. So we have a total of 8 constraints to work with. The official name of these constraints is the reprojection constraints.

So what are our unknown DoFs? Certainly R and t are unknown, because we do not know the camera's pose in world coordinates. Therefore we already have 6 unknown DoFs: 3 for R (e.g. yaw, pitch and roll) and 3 for t. Therefore there can be a maximum of two unknowns in the remaining terms (K, p1, p2, p3, p4).
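In code, the reprojection constraint is just a few lines (a numpy sketch of the projection model described above):

import numpy as np

def project(K, R, t, p):
    """q_i = f(K(R p_i + t)) with f(x, y, z) = (x/z, y/z)."""
    x, y, z = K @ (R @ p + t)
    return np.array([x / z, y / z])

# each of the 4 corners p_i contributes 2 scalar equations
# project(K, R, t, p_i) == q_i, i.e. 8 constraints in total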

Different problems

We can construct different problems depending on which two terms in (K, p1, p2, p3, p4) we shall consider as unknown. At this point let's write out K in the usual form: K=(fx, 0, cx; 0, fy, cy; 0,0,1) where fx and fy are the focal length terms (fx/fy is normally called the image aspect ratio) and (cx,cy) is the principal point (the centre of projection in the image).

We could obtain one problem by having fx and fy as our two unknowns, and assume (cx, cy, p1, p2, p3, p4) are all known. Indeed this very problem is used and solved within OpenCV's camera calibration method, using images of a checkerboard planar target. This is used to get an initial estimate for fx and fy, by assuming that the principal point is at the image centre (which is a very reasonable assumption for most cameras).

Alternatively we can create a different problem by assuming fx=fy, which again is quite reasonable for many cameras, and assume this focal length (denoted as f) is the only unknown in K. Therefore we still have one unknown left to play with (recall we can have a maximum of two unknowns). So let's use it by supposing we know the shape of the plane: it is a rectangle (which was the original assumption in the question). Therefore we can define the corners as follows: p1=(0,0,0), p2=(0,w,0), p3=(h,0,0) and p4=(h,w,0), where h and w denote the height and width of the rectangle. Now, because we only have 1 unknown left, let us set it as the plane's aspect ratio: x=w/h. Now the question is: can we simultaneously recover x, f, R and t from the 8 reprojection constraints? The answer, it turns out, is yes! And the solution is given in Zhang's paper cited in the question.

The scale ambiguity

One might wonder if another problem can be solved: if we assume K is known and the 2 unknowns are h and w, can they be solved from the reprojection equations? The answer is no, because there is an ambiguity between the size of the plane and the plane's depth to the camera. Specifically, if we scale the corners pi by s and scale t by s, then s cancels in the reprojection equations. Therefore the absolute scale of the plane is not recoverable.
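This ambiguity takes one line to verify: scaling the corners and the translation by the same factor leaves the projection unchanged (reusing the projection sketch from above; the numbers are arbitrary):

import numpy as np

def project(K, R, t, p):                           # same model as above
    x, y, z = K @ (R @ p + t)
    return np.array([x / z, y / z])

s = 3.7                                            # any scale factor
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1.0]])
R = np.eye(3)
t = np.array([0.1, -0.2, 5.0])
p = np.array([0.5, 0.25, 0.0])                     # a corner of the plane
assert np.allclose(project(K, R, t, p), project(K, R, s * t, s * p))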

There may be other problems with different combinations of unknown DoFs, for example having R, t, one of the principal point components and the plane's width as unknowns. However, one needs to think about which cases are of practical use. Nevertheless I haven't yet seen a systematic set of solutions for all useful combinations!

More points

We might think that if we were to add extra point correspondences between the plane and the image, or exploit the edges of the plane, we could recover more than 8 unknown DoFs. Sadly the answer is no. This is because they don't add any extra independent constraints: the 4 corners describe completely the transform from the plane to the image. This can be seen by fitting a homography matrix using the four corners, which then determines the positions of all other points on the plane in the image.
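This too can be checked directly: fit a homography to the 4 corners and it already dictates the image of every other point on the plane (a sketch, again with OpenCV; the coordinates are arbitrary):

import numpy as np
import cv2

plane = np.float32([[0, 0], [1, 0], [1, 1], [0, 1]])    # corners on the plane
image = np.float32([[12, 15], [210, 30], [190, 200], [20, 180]])
H = cv2.getPerspectiveTransform(plane, image)
mid = np.float32([[[0.5, 0.5]]])                        # any fifth plane point
print(cv2.perspectiveTransform(mid, H))                 # its image is forced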



Source: https://stackoverflow.com/questions/1194352/proportions-of-a-perspective-deformed-rectangle
