I'm interested: how is the dual input modeled in a sensor fusion setup with a Kalman filter?
Say, for instance, that you have an accelerometer and a gyro and want to fuse their readings into a single orientation estimate.
The horizon line is G' * (u, v, f) = 0, where G is the gravity vector in camera coordinates, (u, v) are image-centered coordinates, and f is the focal length.

Now the pros and cons of the sensors: the gyro is very fast and accurate but drifts, while the accelerometer is less accurate but (if calibrated) has zero bias and doesn't drift, provided there is no acceleration other than gravity. They measure different things: the accelerometer measures acceleration, and thus orientation relative to the gravity vector, while the gyro measures rotation rate, and thus the change in orientation. To convert gyro output to orientation you have to integrate its readings (thankfully it can be sampled at a high rate, say 100-200 Hz). That integration makes the model nonlinear, so the plain Kalman filter, which assumes a linear system, is not directly applicable to the gyro. For now we can simplify sensor fusion to a weighted sum of readings and predictions, as sketched below.
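To make the two measurement paths concrete, here is a minimal Python sketch. The function names and the axis convention are my own illustrative assumptions, not something fixed by the problem:

```python
import numpy as np

def tilt_from_accel(a):
    """Roll and pitch from a calibrated accelerometer reading a = (ax, ay, az);
    valid only when gravity is the sole acceleration present."""
    ax, ay, az = a
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.hypot(ay, az))
    return roll, pitch

def integrate_gyro(angle, rate, dt):
    """One integration step: the gyro reports angular rate, so orientation is
    the running integral of its samples; bias accumulates here as drift."""
    return angle + rate * dt

def horizon_line(G, f):
    """Coefficients (A, B, C) of the horizon line A*u + B*v + C = 0 in
    image-centered coordinates, expanded from G' * (u, v, f) = 0."""
    gx, gy, gz = G
    return gx, gy, gz * f
```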
You can combine the two readings, the accelerometer and the integrated gyro, with the model prediction, using weights that are inversely proportional to the data variances. You will also have to use a compass occasionally, since the accelerometer tells you little about azimuth, but I guess that is irrelevant for computing a horizon line. The system should be responsive and accurate, so whenever the orientation changes fast, the gyro weights should be large; when the system settles down and rotation stops, the accelerometer weights go up, allowing more integration of its zero-bias readings and killing the gyro drift.
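A sketch of that weighting scheme for a single angle (the variance values, the 0.1 rad/s scale, and the class name are illustrative assumptions, not part of the answer):

```python
class ComplementaryFilter:
    """Weighted-sum fusion of the integrated gyro and the accelerometer-
    derived angle, each weighted by the inverse of its assumed variance."""

    def __init__(self, var_gyro=1e-4, var_accel=1e-2):
        self.angle = 0.0            # fused orientation estimate (rad)
        self.var_gyro = var_gyro    # assumed variance of one gyro step
        self.var_accel = var_accel  # assumed variance of the accel angle

    def update(self, gyro_rate, accel_angle, dt):
        # Prediction: integrate the gyro rate from the previous estimate.
        predicted = self.angle + gyro_rate * dt

        # Inflate the accelerometer variance while rotating fast (the 0.1
        # rad/s scale is an illustrative constant) so the gyro dominates;
        # as the motion settles the accelerometer weight recovers and its
        # zero-bias readings pull the estimate back, cancelling gyro drift.
        var_accel = self.var_accel * (1.0 + (gyro_rate / 0.1) ** 2)
        w_gyro = 1.0 / self.var_gyro
        w_accel = 1.0 / var_accel
        self.angle = (w_gyro * predicted + w_accel * accel_angle) / (w_gyro + w_accel)
        return self.angle
```

Calling update() at the gyro's sample rate (100-200 Hz) keeps each integration step small; the ratio of the two variances then controls how quickly the accelerometer reins in the accumulated drift.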