There have been previous questions (here, here and here) related to mine; however, my question has a different aspect that I have not seen addressed in any of the previous posts.
This is for those of you who are experiencing the same problem. I thought it might help to share what I have found out.
As noted by Bill, camera calibration is the best solution to this problem.
However, I found that the two images can also be aligned using homographies and epipolar geometry. This requires at least 8 matching features across both images, which is difficult when one of them is a depth image, since depth maps have very little texture for feature detectors to latch onto.
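In case it helps, here is a rough OpenCV sketch of that feature-based approach. The file names and the ORB settings are just placeholders, and in my experience matching features on a depth map is unreliable, so treat this as an illustration rather than something that will work out of the box:

```
import cv2
import numpy as np

# Load an already-captured pair (paths are placeholders).
rgb   = cv2.imread("rgb.png", cv2.IMREAD_GRAYSCALE)
depth = cv2.imread("depth.png", cv2.IMREAD_ANYDEPTH)

# Depth maps have little texture, so detect features on a normalized
# 8-bit rendering of the depth image; this is the weak point in practice.
depth_vis = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(depth_vis, None)
kp2, des2 = orb.detectAndCompute(rgb, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

if len(matches) >= 8:  # need at least 8 good correspondences
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # Warp the depth image into the RGB image's frame.
    aligned = cv2.warpPerspective(depth, H, (rgb.shape[1], rgb.shape[0]))
```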
There have been several attempts to calibrate these images, which can be found here and here; both require a calibration pattern. What I was trying to achieve was to align already-captured depth and RGB images, which is possible provided I have the calibration parameters of the same Kinect sensor that I used to record them.
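Once you do have the intrinsics of the depth and RGB cameras and the extrinsics between them, the registration itself is just back-project, transform, re-project. Here is a minimal numpy sketch; the intrinsic/extrinsic values below are placeholders that you would replace with the ones from calibrating your own sensor:

```
import numpy as np

# Placeholder intrinsics/extrinsics -- substitute the values obtained from
# calibrating *your* Kinect with one of the toolboxes linked above.
fx_d, fy_d, cx_d, cy_d = 594.2, 591.0, 339.5, 242.7   # depth camera intrinsics (assumed)
fx_r, fy_r, cx_r, cy_r = 529.2, 525.6, 329.0, 267.5   # RGB camera intrinsics (assumed)
R = np.eye(3)                    # rotation depth -> RGB (from calibration)
T = np.array([0.025, 0.0, 0.0])  # translation in metres (from calibration)

def register_depth_to_rgb(depth_m):
    """Map every depth pixel into RGB pixel coordinates.

    depth_m: HxW array of depths in metres (0 = no reading).
    Returns (u_rgb, v_rgb) coordinates in the RGB image for each depth pixel.
    """
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))

    # Back-project depth pixels to 3D points in the depth camera frame.
    z = depth_m
    x = (u - cx_d) * z / fx_d
    y = (v - cy_d) * z / fy_d

    # Rigid transform into the RGB camera frame.
    pts = np.stack([x, y, z], axis=-1) @ R.T + T

    # Project into the RGB image plane.
    with np.errstate(divide="ignore", invalid="ignore"):
        u_rgb = fx_r * pts[..., 0] / pts[..., 2] + cx_r
        v_rgb = fy_r * pts[..., 1] / pts[..., 2] + cy_r
    return u_rgb, v_rgb
```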
I have found that the easiest way around this problem is to align the two images using the built-in registration functions in OpenNI or the Kinect SDK.
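For OpenNI, the rough idea is to switch on depth-to-colour registration before reading frames. The sketch below uses the primesense Python wrapper for OpenNI2; the exact module and enum names may differ between versions, so treat it as an outline rather than a drop-in script:

```
# Outline only: assumes the `primesense` OpenNI2 wrapper is installed and
# that the OpenNI2 redistributable is on the library path.
from primesense import openni2

openni2.initialize()
dev = openni2.Device.open_any()

# Ask the driver to register (align) the depth stream to the colour stream.
# The enum name may vary slightly between wrapper versions.
dev.set_image_registration_mode(openni2.IMAGE_REGISTRATION_DEPTH_TO_COLOR)

depth_stream = dev.create_depth_stream()
color_stream = dev.create_color_stream()
depth_stream.start()
color_stream.start()

depth_frame = depth_stream.read_frame()   # depth pixels now line up with colour
color_frame = color_stream.read_frame()

depth_stream.stop()
color_stream.stop()
openni2.unload()
```

The Kinect SDK exposes a similar coordinate-mapping facility on the Microsoft driver stack, so the same approach works there if you are not using OpenNI.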