Localization of a robot using Kinect and EMGU (OpenCV wrapper)

拜拜、爱过 submitted on 2019-12-01 12:37:39

Robot localization is a very tricky problem, and I have been struggling with it myself for months. I can tell you what I have achieved, but you have a number of options:

  • Optical Flow Based Odometry (also known as visual odometry):
    1. Extract keypoints (features) from one image (I used Shi-Tomasi corners, i.e. cvGoodFeaturesToTrack)
    2. Do the same for a consecutive image
    3. Match these features between the two images (I used Lucas-Kanade optical flow)
    4. Extract depth information from Kinect
    5. Calculate transformation between two 3D point clouds.

What the above algorithm does is estimate the camera motion between consecutive frames, which in turn tells you the position of the robot.
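Step 5 (recovering the motion between the two 3D point clouds) can be sketched with a plain NumPy least-squares fit (the Kabsch/Umeyama SVD method). This is only an illustrative sketch, not the exact code I used: `rigid_transform_3d` and the toy cloud are made-up names, and a real pipeline would feed in the matched Kinect 3D points from steps 1-4.

```python
import numpy as np

def rigid_transform_3d(src, dst):
    """Least-squares rotation R and translation t mapping src -> dst
    for matched 3D points (Kabsch/Umeyama method)."""
    c_src = src.mean(axis=0)          # centroids of each cloud
    c_dst = dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)   # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Toy check: move a random cloud by a known rotation + translation,
# then recover that motion from the two clouds alone.
rng = np.random.default_rng(0)
cloud = rng.standard_normal((50, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.5, -0.2, 1.0])
moved = cloud @ R_true.T + t_true
R_est, t_est = rigid_transform_3d(cloud, moved)
```

Chaining these frame-to-frame transforms gives the camera trajectory; in practice you would also reject bad matches (e.g. with RANSAC) before the fit, since a few outliers can badly skew the SVD solution.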

  • Monte Carlo Localization: This is simpler, but you should also use wheel odometry with it. Check this paper out for a C#-based approach.

The method above uses probabilistic models to determine the robot's location.
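A bare-bones sketch of that idea is a particle filter: keep many guesses ("particles") of the pose, push them through the odometry, weight them by how well they explain a sensor reading, and resample. The 1-D corridor, the noise figures, and all names below are my own illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)

# 1-D Monte Carlo localization: a robot drives along a corridor and
# measures its (noisy) range to a wall at position 0.
N = 1000
particles = rng.uniform(0.0, 10.0, N)   # initial belief: anywhere in [0, 10] m
true_pos = 2.0

for _ in range(20):
    # 1. Motion update: wheel odometry reports +0.5 m, with process noise.
    true_pos += 0.5
    particles += 0.5 + rng.normal(0.0, 0.05, N)

    # 2. Measurement update: weight each particle by the likelihood of
    #    the sonar reading (Gaussian noise, sigma = 0.2 m).
    z = true_pos + rng.normal(0.0, 0.2)
    weights = np.exp(-0.5 * ((particles - z) / 0.2) ** 2)
    weights /= weights.sum()

    # 3. Resample particles in proportion to their weights.
    particles = particles[rng.choice(N, size=N, p=weights)]

estimate = particles.mean()   # the belief collapses around the true pose
```

The same loop generalizes to 2D poses (x, y, heading) with a laser or Kinect depth measurement model; the fusion of wheel odometry (step 1) with the sensor (step 2) is exactly why the paper pairs the two.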

The sad part is that even though C++ libraries exist that do what you need very easily, wrapping them for C# is a herculean task. If, however, you can write a wrapper, then 90% of your work is done; the key libraries to use are PCL and MRPT.

The last option (by far the easiest, but the most inaccurate) is to use KinectFusion, built into the Kinect SDK 1.7. But my experience with it for robot localization has been very bad.

You must read SLAM for Dummies; it will make things about Monte Carlo Localization very clear.

The hard reality is that this is very tricky, and you will most probably end up doing it yourself. I hope you dive into this vast topic and learn some awesome stuff.

For further information, or for wrappers that I have written, just comment below... :-)

Best

Not sure if this would help you or not... but I put together a Python module that might.

http://letsmakerobots.com/node/38883#comments
