Real-time 6D pose estimation of known 3D CAD objects from a single 2D image or a point cloud from an RGB-D camera when objects are stacked on top of each other?

Submitted by 可紊 on 2021-01-01 07:43:47

Question


I'm working on a research project where I need to estimate, in real time, the 6-DOF pose of objects in a pick-and-place task. The objects can be stacked on top of each other and are identical, so I have to get the position and orientation of the object on top. The difficulty is that the objects are all the same (PPVC blocks, from the construction field), but the good thing is that they have a fairly regular shape.

So how can I compare the known 3D CAD model against a single 2D image or a point cloud from an RGB-D camera, find the object on top, and get its 6-DOF pose in real time? Any approach can be used, but it must be very accurate, as very high precision is required.

I know I can use deep learning to localize the object and then either compare it with the 3D CAD model or process the point cloud (segmentation, clustering if needed, PCA, ...), but what if the deep learning detector fails and cannot find the object in the RGB-D data? What options do I have then? Is the surface matching algorithm https://docs.opencv.org/3.0-beta/modules/surface_matching/doc/surface_matching.html in OpenCV an option?
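As far as I understand, that surface_matching module implements Drost-style point pair features (PPF). Below is only a minimal sketch of how I would try it, loosely based on the module's sample code; the file names are placeholders, both clouds need normals, and the sampling parameters are untuned guesses:

#include <opencv2/surface_matching.hpp>
#include <opencv2/surface_matching/ppf_helpers.hpp>

int main()
{
  // Load model and scene as Nx6 matrices (x, y, z, nx, ny, nz); normals are required.
  cv::Mat model = cv::ppf_match_3d::loadPLYSimple("model.ply", 1);
  cv::Mat scene = cv::ppf_match_3d::loadPLYSimple("scene.ply", 1);

  // Train the PPF detector on the CAD model (steps are relative to the model diameter).
  cv::ppf_match_3d::PPF3DDetector detector(0.025, 0.05);
  detector.trainModel(model);

  // Match against the scene and collect pose hypotheses.
  std::vector<cv::ppf_match_3d::Pose3DPtr> results;
  detector.match(scene, results, 1.0 / 40.0, 0.05);

  if (!results.empty())
  {
    // Refine the best hypotheses with the module's own ICP.
    cv::ppf_match_3d::ICP icp(100, 0.005f, 2.5f, 8);
    icp.registerModelToScene(model, scene, results);
    results[0]->printPose();   // 4x4 pose of the best match
  }
  return 0;
}

The PPF detector returns multiple pose hypotheses, so an ICP refinement step at the end would presumably still be needed for the precision I'm after.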

I started with PCL, comparing the 3D CAD object model against the scene cloud that contains the actual object using ICP matching (How to use iterative closest point).

My sample code is here:

// Includes and global publishers assumed for this ROS node (not shown in the original snippet)
#include <ros/ros.h>
#include <sensor_msgs/PointCloud2.h>
#include <visualization_msgs/MarkerArray.h>
#include <pcl_conversions/pcl_conversions.h>
#include <pcl/point_types.h>
#include <pcl/io/ply_io.h>
#include <pcl/registration/icp.h>
#include <pcl/common/transforms.h>

ros::Publisher pub;
ros::Publisher markers_pub_;

void cloud_cb(const sensor_msgs::PointCloud2::ConstPtr &msg)
{


   // Convert to pcl point cloud
   pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloudIn (new pcl::PointCloud<pcl::PointXYZRGB>);
   pcl::fromROSMsg(*msg,*cloudIn);

   //Declaration of cloud pointers
   pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloudOut (new pcl::PointCloud<pcl::PointXYZRGB>);
   pcl::PointCloud<pcl::PointXYZRGB> finalCloud;
   pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloudOut_new (new pcl::PointCloud<pcl::PointXYZRGB>) ;


   //Load CAD file
   if(pcl::io::loadPLYFile ("/home/admini/darknet_ros/src/darknet_ros/gb_visual_detection_3d/darknet_ros_3d/darknet_ros_3d/src/rs_box.ply", *cloudOut)==-1)
   {
     PCL_ERROR ("Couldn't read second input file! \n");
    // return (-1);
   }


  //std::cout << "\nLoaded file "  << " (" << cloudOut->size () << " points) in " << time.toc () << " ms\n" << std::endl;
  
  pcl::IterativeClosestPoint<pcl::PointXYZRGB, pcl::PointXYZRGB> icp;
  icp.setInputSource(cloudOut);   // setInputCloud() is deprecated; setInputSource() is the current API
  icp.setInputTarget(cloudIn);
  icp.setMaximumIterations (30);// 30
  icp.setTransformationEpsilon (1e-9);
  icp.setMaxCorrespondenceDistance (2);//2
  icp.setEuclideanFitnessEpsilon (0.6);// 0.0001
  icp.setRANSACOutlierRejectionThreshold (80);//20
  //icp.setRANSACIterations(800);//800

  icp.align(finalCloud);

  if (icp.hasConverged())
  {
      std::cout << "ICP converged." << std::endl
                << "The score is " << icp.getFitnessScore() << std::endl;
      std::cout << "Transformation matrix:" << std::endl;
      std::cout << icp.getFinalTransformation() << std::endl;
  }
  else std::cout << "ICP did not converge." << std::endl;

  Eigen::Matrix4f transformationMatrix = icp.getFinalTransformation ();
  std::cout<<"trans %n"<<transformationMatrix<<std::endl;

  // finalCloud already holds the model aligned to the scene, so transform the
  // original CAD cloud here instead of applying the transform a second time.
  pcl::transformPointCloud (*cloudOut, *cloudOut_new, transformationMatrix);

   //sensor_msgs::PointCloud2 cloud_icp;
   //pcl::toROSMsg (*cloudOut , cloud_icp);
   //cloudOut ->header.frame_id = "/zed_left_camera_frame";
   //pub.publish (cloud_icp);


   //Covert rotation matrix to quaternion
   //First convert Transforamtion Matrix to Rotation Matrix
    //Eigen::Matrix3f Rotation_matrix = transformationMatrix.topLeftCorner<3,3>();
    /*Eigen::Matrix3d rotation_matrix;
    rotation_matrix (0, 0) = transformationMatrix (0,0);
    rotation_matrix (0, 1) = transformationMatrix (0,1);
    rotation_matrix (0, 2) = transformationMatrix (0,2);
    rotation_matrix (1, 0) = transformationMatrix (1,0);
    rotation_matrix (1, 1) = transformationMatrix (1,1);
    rotation_matrix (1, 2) = transformationMatrix (1,2);
    rotation_matrix (2, 0) = transformationMatrix (2,0);
    rotation_matrix (2, 1) = transformationMatrix (2,1);
    rotation_matrix (2, 2) = transformationMatrix (2,2);*/

    //Convert Rotation Matrix to Quaternion
    //Quaternionf q(Rotation_matrix);
    Eigen::Matrix3f rot_matrix = transformationMatrix.topLeftCorner<3,3>();
    //Eigen::Quaternionf q;
    Eigen::Quaternionf q(rot_matrix);
    q.normalize();   // normalize() works in place; normalized() only returns a copy
    //Eigen::Quaternionf q.normalized(transformationMatrix.topLeftCorner<3,3>());

    cout << " postion x  = " << transformationMatrix(0,3) <<  endl;
    cout << " postion y  = " << transformationMatrix(1,3) <<  endl;
    cout << " postion z  = " << transformationMatrix(2,3) <<  endl;
    
   //Convert Quaternion to Euler angle
    double roll, pitch, yaw;
    double PI=3.1415926535;
    auto euler = q.toRotationMatrix().eulerAngles(0,1,2);
    std::cout << "Euler from quaternion in roll, pitch, yaw (degree)"<< std::endl << euler*180/PI << std::endl;

  


   //Visualisation Marker
    std::string ns;
    float r = 1.0f;   // colour components were uninitialised before
    float g = 1.0f;
    float b = 1.0f;
    visualization_msgs::MarkerArray msg_marker;
    visualization_msgs::Marker bbx_marker;
    bbx_marker.header.frame_id = "zed_left_camera_frame";
    bbx_marker.header.stamp = ros::Time::now();
    bbx_marker.ns = ns;
    bbx_marker.type = visualization_msgs::Marker::CUBE;
    bbx_marker.action = visualization_msgs::Marker::ADD;
    bbx_marker.pose.position.x =  transformationMatrix(0,3);
    bbx_marker.pose.position.y =  transformationMatrix(1,3);
    bbx_marker.pose.position.z =  transformationMatrix(2,3);
    // The marker expects a unit quaternion, so use the components directly
    // instead of scaling them by degree factors.
    bbx_marker.pose.orientation.x = q.x();
    bbx_marker.pose.orientation.y = q.y();
    bbx_marker.pose.orientation.z = q.z();
    bbx_marker.pose.orientation.w = q.w();
    bbx_marker.scale.x = 0.09;//0.13
    bbx_marker.scale.y = 0.22;//0.34
    bbx_marker.scale.z = 0.21;//0.25
    bbx_marker.color.b = 0.3*b;
    bbx_marker.color.g = 0.1*g;
    bbx_marker.color.r = 0.1*r;
    bbx_marker.color.a = 0.8;
    bbx_marker.lifetime = ros::Duration();
    msg_marker.markers.push_back(bbx_marker);
    markers_pub_.publish(msg_marker);


}

int
main (int argc, char** argv)
{
  // Initialize ROS
  ros::init (argc, argv, "my_pcl_tutorial");
  ros::NodeHandle nh;
  // Create a ROS subscriber for the input point cloud
  ros::Subscriber sub = nh.subscribe ("/zed/zed_node/point_cloud/cloud_registered", 200, cloud_cb);
  // Create a ROS publisher for the output point cloud
  pub = nh.advertise<sensor_msgs::PointCloud2> ("cloud_icp", 100);
  markers_pub_ = nh.advertise<visualization_msgs::MarkerArray> ("msg_marker", 200);
  ros::spin();

}

But there are two points in the code and algorithm that need further improvement. First, the pose estimate could be more accurate. Second, the update rate needs to be increased; the target is around 25-30 fps. Any help on how to achieve these two improvements?
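For the speed point, one thing I am considering is to heavily downsample and crop the scene cloud before ICP, since the ICP cost grows with the number of points. A minimal sketch of what I mean (the leaf size and crop bounds are placeholders; cloudIn is the converted scene cloud from the callback above):

#include <pcl/filters/voxel_grid.h>
#include <pcl/filters/crop_box.h>

pcl::PointCloud<pcl::PointXYZRGB>::Ptr preprocessScene(
    const pcl::PointCloud<pcl::PointXYZRGB>::Ptr &cloudIn)
{
  // Downsample the scene so ICP only sees a few thousand points.
  pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloudDown (new pcl::PointCloud<pcl::PointXYZRGB>);
  pcl::VoxelGrid<pcl::PointXYZRGB> voxel;
  voxel.setInputCloud(cloudIn);
  voxel.setLeafSize(0.005f, 0.005f, 0.005f);   // 5 mm voxels, to be tuned
  voxel.filter(*cloudDown);

  // Keep only the region of interest in front of the camera.
  pcl::PointCloud<pcl::PointXYZRGB>::Ptr cloudRoi (new pcl::PointCloud<pcl::PointXYZRGB>);
  pcl::CropBox<pcl::PointXYZRGB> crop;
  crop.setInputCloud(cloudDown);
  crop.setMin(Eigen::Vector4f(-0.5f, -0.5f, 0.2f, 1.0f));
  crop.setMax(Eigen::Vector4f( 0.5f,  0.5f, 1.5f, 1.0f));
  crop.filter(*cloudRoi);

  return cloudRoi;   // pass this to icp.setInputTarget() instead of the full cloud
}

I also suspect the current ICP settings (a max correspondence distance of 2 and a Euclidean fitness epsilon of 0.6) are too loose for the precision I need, so I plan to tighten them once the clouds are smaller.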

Thanks


Answer 1:


Essentially, there are two main steps:

  1. Segmentation: extract a list of objects from the scene
  2. Recognition: match the extracted objects against known objects from the database

There are several methods for both of these steps already in the PCL library.
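For the segmentation step, here is only a minimal sketch with PCL (RANSAC plane removal followed by Euclidean clustering); all thresholds are placeholders you will need to tune for your blocks, and the output cloud is assumed to be allocated by the caller:

#include <pcl/point_types.h>
#include <pcl/ModelCoefficients.h>
#include <pcl/segmentation/sac_segmentation.h>
#include <pcl/segmentation/extract_clusters.h>
#include <pcl/filters/extract_indices.h>
#include <pcl/search/kdtree.h>

std::vector<pcl::PointIndices> segmentObjects(
    const pcl::PointCloud<pcl::PointXYZRGB>::Ptr &scene,
    pcl::PointCloud<pcl::PointXYZRGB>::Ptr &objects)
{
  // Remove the dominant plane (floor or table) with RANSAC.
  pcl::SACSegmentation<pcl::PointXYZRGB> seg;
  seg.setOptimizeCoefficients(true);
  seg.setModelType(pcl::SACMODEL_PLANE);
  seg.setMethodType(pcl::SAC_RANSAC);
  seg.setDistanceThreshold(0.01);
  seg.setInputCloud(scene);

  pcl::PointIndices::Ptr inliers(new pcl::PointIndices);
  pcl::ModelCoefficients::Ptr coeffs(new pcl::ModelCoefficients);
  seg.segment(*inliers, *coeffs);

  // Keep everything that is not on the plane.
  pcl::ExtractIndices<pcl::PointXYZRGB> extract;
  extract.setInputCloud(scene);
  extract.setIndices(inliers);
  extract.setNegative(true);
  extract.filter(*objects);

  // Split the remaining points into per-object clusters.
  pcl::search::KdTree<pcl::PointXYZRGB>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZRGB>);
  tree->setInputCloud(objects);

  pcl::EuclideanClusterExtraction<pcl::PointXYZRGB> ec;
  ec.setClusterTolerance(0.02);   // 2 cm
  ec.setMinClusterSize(500);
  ec.setMaxClusterSize(100000);
  ec.setSearchMethod(tree);
  ec.setInputCloud(objects);

  std::vector<pcl::PointIndices> clusterIndices;
  ec.extract(clusterIndices);
  return clusterIndices;
}

Since you only need the block on top, you can then pick the cluster whose centroid is highest along the plane normal and feed just that cluster into the recognition/ICP stage, which also helps the frame rate.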

Have a look at the following tutorial as a starting point; it covers both steps.

https://pcl.readthedocs.io/projects/tutorials/en/latest/correspondence_grouping.html#correspondence-grouping
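For the recognition step, a rough sketch of what that tutorial's pipeline looks like when condensed into one function (SHOT descriptors on uniformly sampled keypoints, matched and clustered with geometric consistency grouping). All radii and thresholds below are placeholder values, a reasonably recent PCL is assumed, and model/scene stand for the CAD cloud and one segmented cluster:

#include <cmath>
#include <pcl/point_types.h>
#include <pcl/correspondence.h>
#include <pcl/features/normal_3d_omp.h>
#include <pcl/features/shot_omp.h>
#include <pcl/filters/uniform_sampling.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/recognition/cg/geometric_consistency.h>

using PointT = pcl::PointXYZRGB;

Eigen::Matrix4f recognize(const pcl::PointCloud<PointT>::Ptr &model,
                          const pcl::PointCloud<PointT>::Ptr &scene)
{
  // 1. Normals for both clouds.
  pcl::PointCloud<pcl::Normal>::Ptr modelNormals(new pcl::PointCloud<pcl::Normal>);
  pcl::PointCloud<pcl::Normal>::Ptr sceneNormals(new pcl::PointCloud<pcl::Normal>);
  pcl::NormalEstimationOMP<PointT, pcl::Normal> ne;
  ne.setKSearch(10);
  ne.setInputCloud(model);  ne.compute(*modelNormals);
  ne.setInputCloud(scene);  ne.compute(*sceneNormals);

  // 2. Uniformly sampled keypoints.
  pcl::PointCloud<PointT>::Ptr modelKeypoints(new pcl::PointCloud<PointT>);
  pcl::PointCloud<PointT>::Ptr sceneKeypoints(new pcl::PointCloud<PointT>);
  pcl::UniformSampling<PointT> us;
  us.setInputCloud(model);  us.setRadiusSearch(0.01);  us.filter(*modelKeypoints);
  us.setInputCloud(scene);  us.setRadiusSearch(0.02);  us.filter(*sceneKeypoints);

  // 3. SHOT descriptors at the keypoints.
  pcl::PointCloud<pcl::SHOT352>::Ptr modelDescr(new pcl::PointCloud<pcl::SHOT352>);
  pcl::PointCloud<pcl::SHOT352>::Ptr sceneDescr(new pcl::PointCloud<pcl::SHOT352>);
  pcl::SHOTEstimationOMP<PointT, pcl::Normal, pcl::SHOT352> shot;
  shot.setRadiusSearch(0.02);
  shot.setInputCloud(modelKeypoints); shot.setInputNormals(modelNormals);
  shot.setSearchSurface(model);       shot.compute(*modelDescr);
  shot.setInputCloud(sceneKeypoints); shot.setInputNormals(sceneNormals);
  shot.setSearchSurface(scene);       shot.compute(*sceneDescr);

  // 4. Match scene descriptors to model descriptors.
  pcl::CorrespondencesPtr corrs(new pcl::Correspondences);
  pcl::KdTreeFLANN<pcl::SHOT352> match;
  match.setInputCloud(modelDescr);
  for (std::size_t i = 0; i < sceneDescr->size(); ++i)
  {
    std::vector<int> idx(1);
    std::vector<float> dist(1);
    if (!std::isfinite(sceneDescr->at(i).descriptor[0])) continue;   // skip NaN descriptors
    if (match.nearestKSearch(sceneDescr->at(i), 1, idx, dist) == 1 && dist[0] < 0.25f)
      corrs->push_back(pcl::Correspondence(idx[0], static_cast<int>(i), dist[0]));
  }

  // 5. Cluster the correspondences into object instances and get their poses.
  std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f>> poses;
  std::vector<pcl::Correspondences> clustered;
  pcl::GeometricConsistencyGrouping<PointT, PointT> gc;
  gc.setGCSize(0.01);
  gc.setGCThreshold(5);
  gc.setInputCloud(modelKeypoints);
  gc.setSceneCloud(sceneKeypoints);
  gc.setModelSceneCorrespondences(corrs);
  gc.recognize(poses, clustered);

  return poses.empty() ? Eigen::Matrix4f::Identity() : poses[0];
}

The pose returned here is only a coarse estimate; you would still refine it with your ICP step to reach the precision you need.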



Source: https://stackoverflow.com/questions/62187435/real-time-6d-pose-estimation-of-known-3d-cad-objects-from-a-single-2d-image-or-p
